Artificial Intelligence (Regulation) Bill has first reading in the House of Lords

November 27, 2023

The Artificial Intelligence (Regulation) Bill received its first reading in the House of Lords on 22 November 2023.

It provides that the UK government will create an AI Authority which will monitor the use and regulation of AI in the UK.

It further sets out that regulation of AI should deliver safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

In addition, any organisation developing, deploying or using AI is required to be transparent about its use; carry out transparent and thorough tests; and comply with applicable laws, including data protection, privacy and intellectual property laws.

AI and its applications should comply with equalities legislation; be inclusive by design; be designed so as neither to discriminate unlawfully among individuals nor, to the extent reasonably practicable, to perpetuate unlawful discrimination arising from input data; meet the needs of those from lower socio-economic groups, older people and disabled people; and generate data that is findable, accessible, interoperable and reusable.

The Bill also provides that any burden or restriction imposed on a person, or on the carrying on of an activity, in relation to AI should be proportionate to the benefits.

The Bill also provides that the AI Authority must collaborate with other regulators to develop regulatory AI sandboxes. Further, it must implement a programme for public engagement about the opportunities and risks presented by AI.

An organisation which develops, deploys, or uses AI must have a designated AI officer, with the aim of ensuring the safe, ethical, unbiased and non-discriminatory use of AI by the business (including by ensuring that data used by the business in any AI technology is unbiased). In addition, information about the development, deployment, or use of AI by that business must be included in the company’s strategic report under section 414C of the Companies Act 2006.

A person involved in training AI must:

  • supply the AI Authority with a record of all third-party data and intellectual property (IP) used in that training; and
  • assure the AI Authority that all such data and IP are used with consent (either express or implied) and in compliance with relevant IP and copyright obligations.

Any person supplying a product or service involving AI must give customers clear and unambiguous health warnings, labelling and opportunities to give or withhold consent (either express or implied) in advance.

A business which develops, deploys, or uses AI must allow independent third parties accredited by the AI Authority to audit its processes and systems.

The Bill has some notable gaps: for example, it does not say whether any fines will apply for breach of its provisions, or how the use of AI by bad actors would be monitored and dealt with.

The date for the second reading has yet to be announced. As it is a Private Member’s Bill, it remains to be seen whether it will become law (unlikely) or whether the government will take account of its proposals (perhaps a little less unlikely).