Personal Data Breach Notification Guidelines Analysed

December 20, 2017

This article summarises guidance (WP250) from the EU regulators’ group WP29 on breach notification to supervisory authorities and data subjects under the GDPR (Arts.33-34). The Annex highlights some areas where the guidance may increase confusion about technical security matters. For some specific problems under WP250, please see my IAPP article.


Plans – unsurprisingly, WP250 urges controllers and processors to implement incident response plans for handling personal data breaches (PDBs).

Scope – plans should cover allocating internal
operational responsibilities (with processes to funnel incident-related info to
the right people), detecting/establishing PDBs, incident management including appropriate
internal escalation/reporting, containment/recovery/remediation, assessing
likely risk to individuals (see later – likelihood of no risk, risk or high
risk) and other actions, notably making any necessary notifications to the
supervisory authority (SA) and
affected individuals, to which SA, etc. All this should happen “soon after” the
initial alert, save in “exceptional cases”. It’s “useful” for GDPR compliance
to show employees were told about these plans and know how to respond.

DPIA – a prior data protection impact assessment
might have considered potential risks from PDBs, but each actual PDB requires specific risk assessment.

DPO – any data protection officer should be
involved in PDB notifications and subsequent SA investigations.

Lawyers – not mentioned in WP250 (why would they be?), but involve data protection/security lawyers ASAP (ideally, covered in the
plan) to help preserve legal privilege for forensic reports etc. in case of future
claims/litigation/regulatory sanctions, and advise on notifications, damage mitigation
and fines reduction.

Security measures (Art.32) must include measures to “detect, address and report a breach in a timely manner” (also
Rec.87). If encrypting, consider “quality” [algorithm strength, key size?],
implementation, outdatedness; change “default keys” [default admin passwords?],
etc. If a DPIA suggests particular security software, but a vulnerability
becomes known, reassess the software “as part of an ongoing DPIA”. Take backups, as
well as encrypting whenever possible.

“double whammy” – organisations can be fined both for not notifying personal data breaches and for inadequate security measures: “…they are two separate infringements”.
Forced notification? – SAs can order controllers to notify individuals (Arts.34(4), 58(2)(e)). Disobey on pain of a 4%/€20m fine (Art.83(6)).

encryption good
– organisations
can escape notifying, not only individuals, but maybe even SAs, of PDBs affecting
encrypted data, as the breach is “unlikely” to result in risks to individuals,
e.g. losing a securely-encrypted mobile – but only if the encryption applied was appropriate, properly
implemented, and not e.g. using an outdated algorithm.

investigation vital
– plans
should enable prompt investigation of incidents to determine whether any PDB
occurred and, if so, remediate and notify. Otherwise, regulators may assume earlier
“awareness” of breaches (and possible Art.33 infringement). Therefore,
controllers must act on “any initial
alert” (from whatever source), not ignore them.

Awareness – a “short” period of
investigating an incident is allowed before controllers are considered “aware”.
But as soon as your processor is “aware”, “in principle” you’re aware (and your 72-hour
notification clock begins running) from that same point in time!
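For illustration, the 72-hour deadline arithmetic is trivial to build into an incident response tool. A minimal sketch (Python; the names are mine, not from WP250):

```python
from datetime import datetime, timedelta, timezone

ART33_WINDOW = timedelta(hours=72)

def notification_deadline(awareness: datetime) -> datetime:
    """Latest time an Art.33 notification is due: 72 hours after
    the controller is considered 'aware' of the breach."""
    return awareness + ART33_WINDOW

# Per WP250, a processor's awareness "in principle" starts the
# controller's clock at the same moment:
aware = datetime(2017, 12, 20, 9, 0, tzinfo=timezone.utc)
deadline = notification_deadline(aware)  # 2017-12-23 09:00 UTC
```

The point being: once awareness is timestamped, the deadline is fixed, so incident plans should capture that timestamp early.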

Breaches at processors
– Art.33(2) already
requires processors to notify their controllers. However, WP250 recommends specific
arrangements: controller-processor contracts’ mandatory terms on processors
assisting controllers with breach notification (Art.28(3)(f)) should specify
how processors should notify controllers.

“without undue delay” – WP29 considers processors should give “immediate” notification of PDBs to all their affected controllers, with further info in phases when available.

If controllers
notify affected individuals (when that’s required) “without undue delay”, that
means “as soon as possible”, or as soon as “reasonably feasible”.

Don’t forget Rec.87: what’s not “undue delay” in one situation may be in another,
depending on the PDB’s nature, gravity, and consequences/adverse effects for individuals. Not knowing precise info, e.g. exact numbers of individuals affected, is no excuse for delayed notification.

If an
incident affects multiple individuals similarly, strictly each is reportable. However, WP29 will accept a meaningful “bundled”
notification covering similar personal data breached in a similar manner over a
short period, whether after or within 72 hours.

Processors can notify for their controllers
– if the controller-processor contract authorises it. But controllers
retain legal responsibility. So, they might not want to delegate notifications
to processors.

Notification content – under WP250:

“categories” of data subjects means e.g. children/other vulnerable groups, disabled
people, employees, customers; categories of records means health data,
educational records, social care info, financial details, bank account numbers,
passport numbers etc.

“likely consequences” includes indicating risk categories: identity theft, fraud, financial loss, threat to professional secrecy. Focus on adverse effects.

Give approximate numbers if still unknown, then details later.

If more info will follow, WP250 recommends
telling the SA so. However, “The
supervisory authority should agree how and when additional information should
be provided” seems unrealistic (and isn’t required by the GDPR). SAs shouldn’t dictate how quickly forensic investigators must discover info; investigators should be allowed to take whatever time they need to do their jobs properly.

Controllers may provide more info than Art.33(3) specifies, if desired.

“Shop the processor?” – Processors may want controller-processor
contracts to require controller notifications to be accurate and fair (some
controllers may just blame processors even if unwarranted?), and give processors
some say/role in controllers’ notification processes!

SAs can
request further info.

One purpose of notifying SAs within 72 hours is to get advice on whether to notify individuals (see below).

No breach after all? – update the SA
on ascertaining that the incident was contained and no breach occurred. There’s
no penalty for reporting something that transpires not to be a breach. Overall,
WP250 may drive organisations to notify SAs of all incidents within 72 hours “just
in case”. Will overwhelmed SAs “take back” some of WP250 in a year…?

Notify individuals if “likely” “high risk”
– must be considered independently of SA notification. Exceptionally, notify
individuals before SAs.

A “description”
of measures to address/mitigate (Art.34(2)) may include:

that “after having notified the breach to the relevant supervisory authority, the controller has received advice on managing the breach and lessening its impact”;
appropriate, specific advice to individuals to protect themselves e.g.
resetting passwords where access credentials were compromised.

Controllers may give more info than Art.33(3) specifies, if desired.

Notify affected
individuals directly; if that involves “disproportionate effort”, use public
communication or similar: email, SMS, “direct message”, website banners/notification,
post, prominent print ads (a press release/corporate blog alone isn’t good enough).
WP250 recommends using several channels (but not one compromised by attackers!). Different formats/languages may be needed.

Consider consulting the SA on appropriate notification content/channels.

Exemptions from notifying individuals – controllers must be able to demonstrate they meet a “get-out”, including as risks change over time.

2%/€10m fine if individuals aren’t notified when the SA thinks they should be.

data being “unintelligible” to unauthorised persons is said to include
“state-of-the-art” encryption – suggesting that outdated/poor encryption won’t be enough to avoid notification.

Measures to ensure “high risk” is no longer likely include identifying and acting against whoever accessed data before they could “do anything with it”.

“Disproportionate effort” – e.g. where contact
details were lost in the breach or never known. “Technical arrangements” could make on-demand information about the breach
available to individuals that the controller can’t otherwise contact.

Risk assessment – vital following awareness of a breach. Knowing the likelihood
and potential severity of impacts on
individuals helps (1) containment [mitigating risks to individuals? Cf.
breach containment], and (2) determining whether/who to notify. Risks are
higher with greater severity and/or likelihood. “If in doubt, the controller should err on the side of caution and notify”.
Risk – where the breach may lead to physical, material or non-material damage, e.g. discrimination,
identity theft/fraud, financial loss or reputational damage. Damage “should be”
considered “likely” if breached data are “special category” or criminal
convictions/offences data.

Factors include:

Breach type e.g. confidentiality or loss; nature
(name/address vs. adoptive parent’s address), sensitivity (more sensitive is
higher risk), volume of data, number of affected individuals (greater data
volume/number, higher risk; but, “a
small amount of highly sensitive personal data can have a high impact on an
individual, and a large range of details can reveal a greater range of
information about that individual”). Combined data’s more sensitive than
one “piece” (e.g. health data plus identity documents/credit card details).
Stopped deliveries (holidays) may indicate vulnerable homes. A little highly sensitive
data can be very impactful; a large “range” of data, very revealing.

Identifiability – how easily can someone accessing compromised data identify individuals, e.g. by matching with other info? Much depends on context, public availability of related details, etc. Pseudonymisation, not just encryption, can reduce identifiability.
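As a sketch of how pseudonymisation can reduce identifiability, here is a hypothetical keyed-hash approach (Python; the key name and scheme are my illustrative assumptions, not WP250 recommendations):

```python
import hashlib
import hmac

# Illustrative key only - in practice it must be stored separately
# from the pseudonymised dataset, or the pseudonyms are reversible.
PSEUDONYM_KEY = b"keep-this-key-away-from-the-data"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed-hash pseudonym.
    Without the key, matching pseudonyms back to individuals is hard;
    the controller, holding the key, can still re-identify records."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"subject": pseudonymise("Jane Doe"), "diagnosis": "..."}
```

If such a dataset is breached without the key, the identifiability risk to individuals is correspondingly lower.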

Severity of consequences – affected by nature of data e.g. special categories;
vulnerable individuals; whoever accessed data (if known, e.g. disclosed in
error, or otherwise trusted to return/destroy data, cf. actors with
malicious/unknown intentions). EU security agency ENISA’s severity assessment methodology may be useful when formulating response
plans. Longer-term consequences have greater impact.

Special characteristics of individuals/controllers – e.g. risks may be greater with children/vulnerable
individuals; and with medical controllers with health data (cf. newspaper
mailing lists).
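To illustrate how such factors might feed a risk assessment, here is a toy likelihood-times-severity matrix (Python; the scoring and thresholds are my illustrative assumptions – neither WP250 nor ENISA prescribes these numbers):

```python
def breach_risk(likelihood: int, severity: int) -> str:
    """Toy risk matrix: score the likelihood and the severity of
    impact on individuals from 1 (low) to 3 (high). Thresholds are
    illustrative assumptions only."""
    score = likelihood * severity
    if score >= 6:
        return "high risk"      # notify SA and affected individuals
    if score >= 2:
        return "risk"           # notify SA
    return "risk unlikely"      # document internally only
```

Consistent with WP250's "err on the side of caution", any borderline score would sensibly be rounded up to the next category.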

Cross-border breaches – notify the lead SA.
Ideally, indicate which other Member States’ individuals are affected and
notify SAs there too.

Documentation and record-keeping
– controllers
must document even unnotifiable breaches (Art.33(5)), e.g. an internal breach
register (or in general processing records, provided all breach info is easily
extractable on SA request) – including cause, what happened, affected data,
effects/consequences, remediation implemented. 2%/€10m fine (and regulatory
orders) possible for not documenting breaches. WP250 recommends recording
reasons for post-breach [“incident”?] decisions, justifications for not
notifying (including why it’s considered unlikely to result in a risk, proof of
meeting any “get-outs”), reasons for delayed notification, proof of
notifications to affected individuals.
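A breach register could be as simple as one structured record per incident. A minimal sketch (Python; the field names are illustrative, not prescribed by Art.33(5) or WP250):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BreachRecord:
    """One internal breach-register entry, capturing what WP250 says
    should be documented. Field names are illustrative only."""
    detected_at: datetime
    cause: str
    what_happened: str
    data_affected: str
    effects: str
    remediation: str
    notified_sa: bool = False
    notified_individuals: bool = False
    reasons_if_not_notified: str = ""

register = [BreachRecord(
    detected_at=datetime(2017, 12, 1, 9, 30),
    cause="laptop left on train",
    what_happened="loss of an encrypted laptop; key not compromised",
    data_affected="customer contact details",
    effects="none expected (data unintelligible to third parties)",
    remediation="remote wipe issued; data restored from backup",
    reasons_if_not_notified="unlikely to result in risk to individuals",
)]
```

Keeping such records in an easily extractable form also answers WP250's point that SAs may demand the register on request.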

Breach notification under other laws – other requirements may also apply e.g. the eIDAS Regulation, NIS Directive, professional duties and (though not mentioned) financial services regulation.

To conclude, I
believe that, given WP250, in practice organisations will be taking the
approach of “If in doubt, shout it out!” There’s no penalty for crying wolf,
but there is for not notifying when regulators consider you should have.

Will regulators
change their minds if they get “too many” notifications, many of which may be
about only minor breaches? We shall see.

Dr W Kuan Hon, Director, Privacy, Security &
Information, Fieldfisher, licensed under

(please link to this article). This article represents Kuan’s personal opinions, which are not necessarily shared by any organisation with whom Kuan may be associated.
Annex – Other
technical security issues

WP250 could exacerbate misinformation about technical security issues if it remains unchanged. In particular, it seemingly refers to everything as “breaches”, when it should address incidents and breaches separately.

Examples are below. 



WP250 p.6 outlines the three well-known types of security breach, the “CIA triad”, as follows, noting that one breach (or incident) could affect any combination of them:

“Confidentiality breach” – unauthorised
or accidental disclosure of, or access to, personal data.

“Availability breach” – accidental or
unauthorised loss of access to, or destruction of, personal data.

“Integrity breach” – unauthorised or
accidental alteration of personal data.
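For illustration, the “any combination” point can be modelled as a flag set. A sketch (Python; the example classifications are my assumptions, not WP250's):

```python
from enum import Flag, auto

class CIA(Flag):
    """The three breach types WP250 describes; one incident can
    affect any combination of them."""
    CONFIDENTIALITY = auto()
    INTEGRITY = auto()
    AVAILABILITY = auto()

# Ransomware that exfiltrates data before encrypting it hits all three:
ransomware_with_exfiltration = (CIA.CONFIDENTIALITY | CIA.INTEGRITY
                                | CIA.AVAILABILITY)

# Losing a securely-encrypted laptop whose data is backed up arguably
# affects none of them, if the key stays secure:
lost_encrypted_backed_up_laptop = CIA(0)
```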


Permanent loss/destruction, including accidental or unauthorised deletion or, for securely encrypted
data, losing the decryption key, certainly affects availability.


But clearly, and
perhaps more importantly, it also affects integrity. “Integrity breach”
includes data loss or destruction – e.g. see the US FISMA or FIPS 199 (detailed CIA comparisons are in my book, table 7.1).
Also, Art.5(1)(f), described as “integrity and confidentiality” (not
“availability”), mentions “loss” and “destruction”, strongly suggesting that
lawmakers think loss/destruction affects integrity.


Accordingly, “or
destruction of” falls better under Integrity than Availability.

“Furthermore, it should be noted that although a loss of availability
of a controller’s systems might be only temporary and may not have an impact
on individuals, the fact that there has been a network intrusion could still
be considered a potential confidentiality breach and notification might be
required. Therefore, it is important for the controller to consider all
possible consequences of a breach.” (p.7)

DDOS attacks, interfering with authorised users’ availability, are often used as a smokescreen to distract IT/security staff, enabling
criminals to infiltrate the network to steal data while staff are occupied
dealing with the DDOS.


A DOS/DDOS attack, alone, does not normally affect confidentiality.
It may be followed by a
confidentiality attack through some other means (e.g. malware), but a DOS
attack does not itself involve a confidentiality breach.

“The focus of the notification requirement is to encourage
controllers to act promptly on a breach, contain it and, if possible, recover
the compromised personal data, and to seek relevant advice from the
supervisory authority.” (p.13)

It depends on the type of incident. An organisation can “recover”
compromised data from any backup if someone destroyed or corrupted data
(integrity breach). “Recovering” stolen data, to ensure unauthorised persons
can no longer access it (confidentiality breach), is much harder, especially if
the attacker has already published data online.

“WP29 also explained this would similarly be the case if personal
data, such as passwords, were securely hashed and salted, the hashed value
was calculated with a state of the art cryptographic keyed hash function, the
key used to hash the data was not compromised in any breach, and the key used
to hash the data has been generated in a way that it cannot be ascertained by
available technological means by any person who is not authorised to access
it.” (p.15)

Hashing involves a “one-way” scrambling of data using an algorithm. It
doesn’t generally use keys.


The main exception is keyed hashing of messages (HMAC). But the aim
there isn’t confidentiality, it’s integrity – verifying and proving that a message
(e.g. email) hasn’t been altered during transmission.
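To illustrate the distinction, compare a plain hash, a keyed hash (HMAC) and salted password hashing, all from Python's standard library (a sketch; the parameters are illustrative):

```python
import hashlib
import hmac
import os

data = b"message body"

# 1. Plain cryptographic hash: one-way scrambling, no key involved.
digest = hashlib.sha256(data).hexdigest()

# 2. Keyed hash (HMAC): does use a secret key, but its purpose is
#    integrity/authentication - detecting alteration - not secrecy.
key = os.urandom(32)
tag = hmac.new(key, data, hashlib.sha256).hexdigest()
assert hmac.new(key, b"tampered", hashlib.sha256).hexdigest() != tag

# 3. Salted password hashing (how breached passwords should have been
#    stored): a per-password random salt plus a slow key derivation.
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"password123", salt, 100_000)
```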


Furthermore, it’s unclear what “similarly” would “be the case” here –
that a breach in such circumstances needn’t be notified?

“Consequently, if personal data have been made essentially
unintelligible to unauthorised parties and where the data are a copy or a
backup exists, a confidentiality breach involving properly encrypted personal
data may not need to be notified to the supervisory authority. This is
because such a breach is unlikely to pose a risk to individuals’ rights and
freedoms…” (p.16)

This WP250 sentence confuses the issues.


Unauthorised access to data affects confidentiality (not integrity or availability). Data destruction affects integrity and availability (not confidentiality, unless the intruder also accessed intelligible data before destroying it). Unauthorised access does not necessarily involve destruction, and vice versa (again, please see my book, Chapter 7). But the WP250 sentence conflates the two.


Encryption addresses confidentiality, while backups address integrity and availability (again see my book, Chapter 7).


A breach involving properly encrypted data may not even be a
confidentiality breach. This depends on the efficacy of the encryption etc.,
not whether a backup exists.


Having backups does not
make a breach affecting unintelligible personal data a non-notifiable confidentiality
breach, as this WP250 sentence implies. Lack of backups affects integrity and
availability, not confidentiality.


To avoid confusion, this sentence should probably read, “unauthorised
access to properly encrypted personal data may not…” (instead of, “a
confidentiality breach involving properly encrypted personal data”), and
“where the data are a copy or a backup exists” should be deleted.

“Furthermore, it should be noted that if there is a breach where
there are no backups of the encrypted personal data then there will have been
an availability breach, which could pose risks to individuals and therefore
may require notification.” (p.16)

It depends on the nature of the breach. Sometimes intruders just access/copy data. Sometimes, they also interfere with it, e.g. ransomware encrypting or deleting it.

This WP250 sentence doesn’t specify what kind of incident it envisages, unfortunately.


Unauthorised access to securely-encrypted data (with no key access),
would not involve a notifiable confidentiality breach.


The reference to “no backups” makes sense only if the breach involved
data loss, destruction or alteration – not access alone. But that’s an
integrity breach, notifiable as such (no need to discuss availability breaches).
Mere unauthorised access to encrypted data, without any data destruction etc., would not involve an integrity breach.

“A breach that would not require notification to the supervisory
authority would be the loss of a securely encrypted mobile device, utilised
by the controller and its staff. Provided the encryption key remains within
the secure possession of the controller and this is not the sole copy of the
personal data then the personal data would be inaccessible to an attacker.”

Again, this confuses confidentiality and integrity.


Inaccessibility to an attacker depends on the encryption key
remaining secure. If the attacker doesn’t have the key, there’s no
confidentiality breach.


If the mobile did not contain the controller’s sole copy of the data,
because it had been backed up, then there’s no integrity breach.


Therefore, the last sentence should instead read, “Provided the
encryption key remains within the secure possession of the controller then
the personal data would be inaccessible to an attacker (so this would not be
notifiable as there was no confidentiality breach), and provided this is not
the sole copy of the personal data the data would remain accessible to the
controller (so this would not be notifiable as an integrity breach)”.

“Identification may be directly or indirectly possible from the
breached data, but it may also depend on the specific context of the breach,
and public availability of related personal details. This may be more
relevant for confidentiality and availability breaches.” (p.21)

Why is identifiability considered relevant to availability breaches?

 Inconsistency on p.14
– “Depending on the circumstances, it may take the controller some time to
establish the extent of the breaches and, rather than notify each breach
individually, the controller instead organises a meaningful notification that represents several very similar breaches, possible different causes. This could lead to notification to the
supervisory authority being delayed by more than 72 hours after the controller
first becomes aware of these breaches.”

Cf. “However, to
avoid being overly burdensome, the controller may be able to submit a “bundled”
notification representing all these breaches, provided that they concern the same type of personal data breached in the same way, over a relatively short space of time.”

The quoted phrases “possible different causes” and “the same type … breached in the same way” contradict each other – “different” is the opposite of “same”. It should probably read “possibly similar causes”.

Annex B is meant to
provide “a non-exhaustive list of examples of when a breach may be likely to
result in high risk to individuals and consequently instances when a controller
will have to notify a breach to those affected.” However, most (not all) of the examples in Annex B simply state “if…” and/or “…depending…”. This approach seems circular. Annex B would be more helpful if WP29 explained which situations it thinks are a “risk” or a “high risk”, rather than referring back to “if”, etc.
Otherwise, Annex B only gives examples of some possible types of incidents or breaches,
without providing guidance on when
notification should be made to SAs and/or individuals. E.g.:

Example ii – report “depending… and if the
severity… is high”, and “If the risk is not high…”.

Example iv – report to SA “if there are potential
consequences”, report to individuals “depending on the nature…”

Example v – “if there is a high risk”, etc. etc. 

P.26 “However if it is later compromised” would be clearer
if it read “However if the key is later compromised”.