This Week’s Techlaw News Round-Up

September 8, 2023

UK law

House of Lords completes third reading of Online Safety Bill

The House of Lords completed its third reading of the Online Safety Bill on 6 September 2023. Before third reading, the UK government put forward various amendments on subjects including parliamentary scrutiny, remote viewing powers for Ofcom and age assurance. The bill now returns to the Commons for consideration of the Lords amendments. The government has also changed its approach on the thorny issue of encryption, amending the wording that allows Ofcom to access encrypted messages.

Guidance issued on regulation of the use of generative AI in advertising

CAP has issued guidance on how the UK Code of Non-Broadcast Advertising and Direct & Promotional Marketing (CAP Code) and the Advertising Standards Authority (ASA) regulate the use of generative AI in advertising. The guidance says that the CAP Code applies if an ad falls within the ASA’s scope, regardless of how the ad was created: ads are regulated on the basis of how consumers will interpret them, which is not affected by the means used to generate a specific piece of content. However, the ASA notes that the way an ad was created may sometimes be relevant in determining whether it complies with the CAP Code. In particular, AI-generated images used in ads to make efficacy claims may mislead if they do not accurately reflect the efficacy of the product. The guidance also points out that some AI models risk amplifying biases already present in the data they are trained on, which could lead to socially irresponsible ads.

ICO to review period and fertility tracking apps as many women are concerned over data security

The ICO is reviewing period and fertility tracking apps after new figures showed that more than half of women have concerns over data security. A poll revealed that, when choosing an app, women ranked transparency over how their data is used (59%) and how secure it is (57%) as bigger concerns than cost (55%) and ease of use (55%). The research also showed that over half of app users said they had noticed an increase in baby- or fertility-related adverts since signing up. While some found these adverts positive, 17% described receiving them as distressing. The ICO has now launched a call for evidence. It has also contacted companies that provide period and fertility tracking apps, including some of the most popular apps available to UK users, to find out how they are processing users’ personal information. A focus of the ICO’s work is to identify whether there is the potential for harm and negative impact on users as a result. Such harms could include unnecessarily complicated and confusing privacy policies that leave users in the dark about what they have consented to, apps requesting or storing unnecessary volumes of data, and users receiving upsetting targeted advertising that they did not sign up for. The call for evidence closes on 5 October 2023.

Biometrics and Surveillance Camera Commissioner issues guidance on third-party certification scheme

The Biometrics and Surveillance Camera Commissioner has issued guidance on its third-party certification scheme. Certification enables organisations to demonstrate to their communities that they use their CCTV systems transparently, effectively and proportionately. The guidance explains why third-party certification was introduced, who can apply for certification, the criteria for applying, the benefits of certification and how to apply.

Biometrics and Surveillance Camera Commissioner publishes report on the use of uncrewed aerial vehicles

The Biometrics and Surveillance Camera Commissioner has published a report on the use of uncrewed aerial vehicles (UAVs) by law enforcement organisations, especially the police. Police forces in the UK routinely use drones equipped with high-definition cameras, night vision or thermal imaging capability, and some record sound as well as images. The report identified a number of concerns, including: a lack of awareness of the risks to the security of data recorded when drones are deployed, and of how, or whether, such risks are mitigated; a lack of awareness of the risks associated with applying software updates to drone systems; and a lack of consistency in how police use of drones is scrutinised to ensure it is appropriate and ethical, with several forces having no external scrutiny mechanism and others using a variety of outside bodies, including the Civil Aviation Authority (CAA) and the Office of the Biometrics and Surveillance Camera Commissioner. The report makes several recommendations. Guidance should be made available to forces on the procurement and deployment of surveillance technology from companies whose trading history and engagement with accountability frameworks have raised significant concern, and on the attendant risks. Guidance is also needed on how to mitigate UAV-specific security risks, such as hacking and the use of counter-UAV technology, and on the assessment and measurement of sensitivity. Chief officers should seek a single, overarching approach from their local elected policing bodies to ensure that there are agreed mechanisms for holding them to account publicly, and should ensure that the procurement and deployment of UAVs is demonstrably ethical. Chief officers should also consider a standardised and documented procedure for assessing sensitivity, whether that relates to a geographical site or a more transient operation involving the use of UAVs.

DSIT publishes first Frontier AI Taskforce progress report

The Department for Science, Innovation and Technology has published the first progress report on the Frontier AI Taskforce. Among other topics, the report covers a new expert advisory board for AI research and national security, the recruitment of a new team of expert AI researchers, the building of technical foundations for AI research inside government, and the AI Safety Summit.

EU law

Ada Lovelace Institute makes recommendations for amendments to AI Act

The Ada Lovelace Institute has made recommendations for amendments to the AI Act in light of developments in AI since its initial report. It recommends that the AI Act include a definition of “affected persons” and rename “users” as “deployers” to reflect the AI lifecycle more accurately. It also says that an obligation to carry out a pre-deployment fundamental rights impact assessment (FRIA) should be introduced, as proposed by the European Parliament. In addition, the AI Act should include a comprehensive remedies framework for affected persons based on effective avenues for redress, including a right to lodge a complaint with a supervisory authority, a right to judicial remedy and a right to an explanation of individual decision-making, also as proposed by the European Parliament. The AI Act should also enhance multi-stakeholder participation in standards development processes and improve transparency over those processes, and the European Commission should consult the AI Office’s Advisory Forum when drafting standardisation requests or when approving harmonised standards. Finally, the AI Act should establish a standing panel of representative users, or a citizens’ assembly, as a permanent sub-group of the AI Office’s Advisory Forum, to be consulted on key questions such as updates to the high-risk list, the release of large-scale foundation models, and secondary legislation.

European Blockchain Sandbox announces the selected projects for the first cohort

The European Commission, Bird & Bird and OXYGY have announced 20 use cases for the first cohort of the European Blockchain Sandbox. One use case for the European Blockchain Services Infrastructure was approved by the European Blockchain Partnership (EBP) for participation in the lot reserved for public entities, and 19 further use cases were chosen through a curated selection process. The selected use cases span all EU/EEA regions and represent a wide range of industry sectors and regulatory topics. The European Regulatory Sandbox for Blockchain was launched in February 2023. Its goal is to provide a controlled environment in which companies can test their products and services while engaging with relevant regulators. It will run from 2023 to 2026, supporting 20 projects each year.