Book Review: Queer Privacy

May 30, 2017

I read.

I read a lot.

I read books, academic journals, and conference
proceedings. I read blogposts, regulatory consultations and responses,
technical specs and RFCs, Twitter threads and more.

And, of the many tens of thousands of words
I have read recently, those of ‘Queer Privacy’ stand out.

In the words that follow, I’ll outline why
this is the case, and I’ll set the scene for the book. But I am not going to
attempt to summarise it, or reveal too much in the way of its content, for two
reasons:

First, as I will discuss, the book is
predominantly about experiences, often upsetting but all personal. I would do
the authors a disservice in trying to re-tell their experiences in my own
words.

Second, if you are reading this, you are
probably a lawyer. This book is not going to stretch your resources budget. You
can set the price you want to pay for it, anywhere between $7 and $30, with a
recommended price of $13. For that, you are supporting an independent publisher (the book was funded by its editor, Sarah Jamie Lewis, who paid each author for their contribution). Heck, buy it, read it and log it as CPD — if you care about
privacy (real privacy, not just tick-in-the-box-compliance privacy), you’ll get
so much more from this than yet another piece about the GDPR.

So, with that out of the way, let’s talk about ‘Queer Privacy’.

Privacy and marginalised groups

The essence of this book is how
technologies and services, which many of us take for granted, cause harm to
those in marginalised communities. Lewis has used the term ‘queer’ to describe
those margins, encompassing those who are not heterosexual, or whose gender
identity differs from the sex assigned to them at birth (‘non-cisgender’).

Through a series of essays, the book
explores and explains how design or commercial decisions taken by
non-marginalised people contribute to, or even directly cause, harm to those in
the queer community, largely through a failure to consider the needs of others.
How a decision about a feature or policy position which may be entirely acceptable to non-queer users might render the service unusable, or even unsafe, for queer users: not through any malicious intent, but through a simple failure to consider the breadth of the service’s user base.

The book contains essays about those forced
to live separated lives, hiding themselves and censoring their expression for
their own protection. About hate speech directed at queer users. About the
dangers of ‘real name’ or ‘forced identification’ policies. About surveillance
by abusive partners or third parties, and the consequences of lack of
compartmentalisation and leakage of information. Essays about life under stress
and, ultimately, death. It is a distressing book.

It is not a book of answers, although some essays hint at technical options. Mostly, it is a book of opportunities, if I can put it like that. And not ‘opportunities’ in the ‘here’s an idea for a shiny app’ sense, but a far more fundamental type of opportunity: ‘take this chance to think of us, because, if you don’t, you harm us’.

And that is an opportunity we lawyers and
privacy advisors should be taking.

But we comply with the law

No essay in the book suggests that the
service providers in question do not comply with the law. This is not a book
about ‘the law’, although laws are mentioned occasionally. But it is a stark
exposition of how the law can let down those who most need its protection: the
vulnerable, the fearful, and the marginalised. And if the law does not do
enough to protect, the burden falls on those involved in the development of
products and services — from the magical to the mundane — to offer that
protection.

Take, for example, the concept of a privacy
impact assessment. It’s a concept which has been around for years, and which is
going to see increased prominence under the GDPR. A privacy impact
assessment, one would think, would be an ideal place to identify and foreground
the type of issues discussed in this book, and afford an opportunity to address
them.

But that’s probably not the case.

Most privacy impact assessments — at least, most that I have seen — have been quite mechanistic tools, working through what a service does, how the service will address the legislative framework, and identifying gaps for rectification. Even the Article 29 Working Party’s view of a privacy impact assessment is ‘a process for building and demonstrating compliance’ with the legal framework (see the Article 29 WP’s draft guidance on data protection impact assessments under the GDPR, p 4). The ICO’s code of practice envisages
something broader, to its credit, but stops short of explicitly considering
risks to marginalised groups.

Perhaps we need something slightly
different.

The concept of a ‘human impact assessment’
has come up a few times in the pages of Computers & Law over the last few
years. I described it in ‘Is Luddism the Answer to “Keeping Humans at the Heart”?’ (August/September 2015) in the following terms:

‘Fundamentally, a human impact assessment
would aim to bring consideration of the question which the Luddites asked: is
the innovation before us hurtful to commonality, or beneficial to it? It
requires us to take a step back from the minutiae of a given technology, away
from specific legal problems, and look at the bigger picture.’

As we consider new technologies and
services, can we try to distance ourselves from our own biases, and think about
the development from the perspective of others, especially those who are likely
to be most harmed by a poorly-made decision?

For those who do not already do so, should
our privacy impact assessments have more of a human impact assessment element
to them?

Who should be doing this?

I’m very mindful of the fact that, after
reading one book about queer privacy, I am no expert on the subject. I doubt
that, no matter how many books I read, I could gain an expertise in it. And
that’s for a very simple reason: as a straight, white, male solicitor, I am
unlikely to experience first-hand much, if any, of what the authors describe.

Part of me questions whether I am the right
person to be attempting to tackle this kind of issue in the advice I give. A
far better outcome would be attained through the involvement of someone who has
suffered even just some of these experiences. Someone who knows what it is
really like; someone who has had to live with these challenges day-to-day.

Another part of me thinks that, for better
or worse, I am in a position of advising. I have the opportunity of making a
difference, and so should take it. If that is even as little as encouraging
those making the decisions to think about the broader implications of their
actions — a nudge to change perspective — that may be better than nothing. Perhaps
there will be occasions when the best possible advice will be to seek the
guidance of those in marginalised communities, as part of the process of
gathering user requirements.

I very much doubt that anyone involved in
privacy could read this book and think that nothing needs to be done.

Conclusion

Unlike most legal books, ‘Queer Privacy’ is
relatively short: I saw it on Twitter, bought it, downloaded it and read it
cover to cover after dinner one evening. On my iPad’s screen, it was just over
90 pages. It covers a lot of ground in that space, some of it really rather
upsetting.

How has it left me feeling?

First, bloody lucky.

Second, moved, as I hadn’t appreciated what
friends, neighbours, and colleagues might have faced.

Third, more aware. There is plenty more for
me to think about here, and this is but the tip of an iceberg. But at least I’m
aware that the iceberg is even there.

Some of you may already be considering these issues in your day-to-day advice, and I would very much welcome thoughts, comments and suggestions as to how you have done this.

You can buy ‘Queer Privacy’ online at https://leanpub.com/queerprivacy/.

Neil Brown is an experienced Internet,
telecoms and technology lawyer and managing director of law firm decoded:Legal
(https://decodedlegal.com).