ICO approves legal services certification scheme, ICO issues second Tech Horizons report, provisional deal reached on EU-wide rules for platform workers, and other UK and EU tech law news not covered elsewhere on the SCL website.
UK law
ICO approves legal services certification scheme
The Information Commissioner’s Office (ICO) has approved a certification scheme aimed at legal service providers who process personal data. Certification schemes were introduced under the UK GDPR to help organisations demonstrate compliance with data protection requirements and, in turn, inspire trust and confidence in the people who use their products, processes and services. The Legal Services Operational Privacy Certification Scheme is the fifth set of UK GDPR certification criteria that the ICO has approved. The scheme applies to legal service providers (both controllers and processors), including law firms, solicitors, barristers’ chambers, barristers and other providers, for their processing of personal data in relation to the legal services provided, held in the “client file”. It follows four others already approved and published on the ICO website: one offering secure re-use and disposal of IT assets, two covering areas including age assurance and children’s online privacy, and one aimed at training and qualification service providers.
ICO issues second Tech Horizons report
The ICO has issued its second Tech Horizons report. The first edition covered four emerging technologies in depth: the Internet of Things, immersive technologies, health-tech applications and decentralised finance. The new report considers eight further technologies: genomics, immersive virtual worlds, neurotechnologies, quantum computing, commercial use of drones, personalised AI, next-generation search and central bank digital currencies.
IAB Tech Lab publishes analysis of Google’s Privacy Sandbox
The IAB Technology Laboratory (Tech Lab) has published an analysis of Google’s plan to eliminate third-party cookie-based tracking from its Chrome browser and replace it with the Privacy Sandbox. The Tech Lab says that adopting the Privacy Sandbox creates a number of problems, including issues with essential event-based metrics, brand safety concerns, on-browser computing implications and a lack of consideration for commercial requirements. The report also highlights that the changes required by the Privacy Sandbox involve substantial development and infrastructure investment costs for both buy-side and sell-side technology companies. In addition, operational, business, financial and legal processes for brands, agencies and media companies will need extensive reworking. The analysis is open for public comment until 22 March 2024.
UK government publishes introduction to AI assurance
The government has published a guide to AI assurance. It aims to help organisations better understand how AI assurance techniques can be used to support the safe and responsible development and deployment of AI systems, and introduces key AI assurance concepts and terms, situating them within the wider AI governance landscape. The introduction supports the UK’s March 2023 white paper, A pro-innovation approach to AI regulation, which set out five regulatory principles underpinning AI regulation, and the subsequent consultation response on putting those principles into practice. The government says that, as AI becomes increasingly prevalent across all sectors of the economy, it is essential that it is well governed. AI governance refers to a range of mechanisms, including laws, regulations, policies, institutions and norms, that can be used to shape how decisions about AI are made. The guidance aims to provide an accessible introduction to both assurance mechanisms and global technical standards, to help industry and regulators better understand how to build and deploy responsible AI systems. It will be updated regularly to reflect feedback from stakeholders, the changing regulatory environment and emerging global best practice.
AI Safety Institute issues approach to evaluations
The AI Safety Institute (AISI) has set out its approach to evaluating advanced AI systems. The AISI was launched at the AI Safety Summit in November 2023 with three core functions: to develop and conduct evaluations of advanced AI systems; to drive foundational AI safety research; and to facilitate information exchange. It will assess the capabilities of advanced AI systems using a range of techniques, including automated capability assessments, red-teaming (deploying domain experts to interact with a model to test its capabilities and break model safeguards), human uplift evaluations (assessing how advanced AI systems might be used by bad actors to carry out real-life harmful tasks, compared with the use of existing tools such as internet search) and AI agent evaluations. Its evaluations will consider misuse, societal impacts, autonomous systems and safeguards. It will also look at capabilities elicitation and jailbreaking, AI model explainability, interventions on model behaviour, and novel approaches to AI alignment.
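For readers unfamiliar with the term, the short Python sketch below illustrates the general shape of an automated capability assessment: a model is run against a fixed set of tasks and scored automatically, so capability can be compared across systems or over time. It is purely illustrative; the harness, stub model and exact-match scoring are hypothetical and do not reflect AISI’s actual evaluation tooling or methods.

# Illustrative only: a toy harness for an automated capability assessment.
# The model under test is represented by a plain callable; real evaluations
# use production model APIs and far richer grading than exact matching.
from typing import Callable, List, Tuple

def run_capability_eval(
    model: Callable[[str], str],
    tasks: List[Tuple[str, str]],  # (prompt, expected answer) pairs
) -> float:
    """Return the fraction of tasks the model answers correctly."""
    correct = 0
    for prompt, expected in tasks:
        answer = model(prompt)
        # Naive exact-match scoring, used here only to keep the sketch short.
        if answer.strip().lower() == expected.strip().lower():
            correct += 1
    return correct / len(tasks) if tasks else 0.0

if __name__ == "__main__":
    # A stub standing in for an advanced AI system under evaluation.
    def stub_model(prompt: str) -> str:
        return "4" if "2 + 2" in prompt else "unknown"

    tasks = [("What is 2 + 2?", "4"), ("Name the capital of France.", "Paris")]
    print(f"Capability score: {run_capability_eval(stub_model, tasks):.2f}")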
EU law
Provisional deal reached on EU-wide rules for platform workers
The European Parliament and Council have reached a provisional deal on the Platform Work Directive. It aims to ensure that people performing platform work have their employment status classified correctly and to correct bogus self-employment. It also includes rules on algorithmic management and the use of AI in the workplace. The new law introduces a presumption of an employment relationship (as opposed to self-employment) that is triggered when facts indicating control and direction are present, in line with national law and collective agreements in place and taking into account EU case law. Member states will be required to establish a rebuttable legal presumption of employment at national level, aiming to correct the imbalance of power between the platform and the person performing platform work. By establishing an effective presumption, member states will make it easier to correct bogus self-employment; the burden of proof lies with the platform. The new rules also aim to ensure that a person performing platform work cannot be fired or dismissed on the basis of a decision taken by an algorithm or an automated decision-making system, with platforms having to ensure human oversight of important decisions that directly affect the people performing platform work. The Directive also introduces rules on personal data: platforms will be prohibited from processing certain types of personal data, such as data on personal beliefs and private exchanges with colleagues. It also aims to improve transparency by obliging platforms to inform workers and their representatives about how their algorithms work. The agreed text will now have to be formally adopted by both Parliament and Council to enter into force.
Commission closes market investigations on Microsoft’s and Apple’s services under the Digital Markets Act
The European Commission has adopted decisions closing four market investigations under the Digital Markets Act (DMA), finding that Apple and Microsoft should not be designated as gatekeepers for the following core platform services: Apple’s messaging service iMessage, Microsoft’s online search engine Bing, its web browser Edge and its online advertising service Microsoft Advertising. The Commission will continue to monitor market developments relating to these services should any substantial changes arise. The decisions do not affect the designation of Apple and Microsoft as gatekeepers on 5 September 2023 in respect of their other core platform services.
New measures to boost the rollout of gigabit networks
Political agreement has been reached between the European Parliament and the Council on the Gigabit Infrastructure Act (GIA), which the Commission proposed on 23 February 2023. The agreement accompanies a Recommendation on the regulatory promotion of gigabit connectivity. The GIA aims to simplify and speed up the deployment of very high-capacity networks, such as fibre and 5G, by reducing the administrative burden and the costs of deployment. The Recommendation provides guidance on how to design access remedy obligations for operators with significant market power, to guarantee fair competition while fostering the rollout of gigabit networks by ensuring that all operators can access existing network infrastructure. The agreement reached on the GIA now needs to be formally adopted by the European Parliament and the Council. The new rules will be directly applicable in all member states 18 months after the GIA enters into force, with certain provisions applying slightly later, and will replace the Broadband Cost Reduction Directive.