European Parliament committee says use of consumers’ personal information can cross the line into manipulation

The European Parliament’s internal market and consumer committee has published a study on new aspects and challenges in consumer protection. The study says that the use of consumers’ personal information can cross the line into manipulation and has consequences beyond online commerce.

The European Parliament’s internal market and consumer committee has issued a report, the aim of which is to identify key issues in the relationships between consumers and traders and the ways in which AI is affecting those relationships and interactions. The report also proposes possible solutions.

The study addresses the new challenges and opportunities for digital services that are provided by artificial intelligence, including consumer protection, data protection, and providers’ liability. It also considers how digital services rely on AI for processing consumer data and for targeting consumers with ads and other messages. It discusses the risks to consumer privacy and autonomy, as well as the opportunity to develop consumer-friendly AI applications. The report also covers the relevance of AI to the liability of service providers in connection with using AI systems to detect and respond to unlawful and harmful content.

Online consumers find themselves in an unbalanced relationship with service providers and traders. AI has provided technologies with which to exploit the wealth of consumers’ information to better target individuals. This targeting can cross the line into manipulation, as consumers’ responses may be based on irrational aspects of their psychology, on a lack of information, or on a situation of need.

A personal data economy is emerging in which personal information is collected and exchanged, its value lying in its possible uses to anticipate and modify individuals’ behaviour. The business model based on providing “free” services paid for through advertising has effects that go beyond e-commerce. To expose consumers to ads, platforms have to attract and keep consumers on their websites, and the methods used may please or excite users, confirm their prejudices, trigger negative feelings, and provide addictive symbolic rewards and punishments. Moreover, individuals tend to be served the kinds of content and messaging that have attracted or pleased similar people in the past, which may lead to separation and polarisation in the public sphere.

AI technologies are also increasingly used by online service providers to detect and react to unlawful and inappropriate online behaviour. While AI technologies can contribute to effective moderation, enabling providers to cope with the huge growth and accelerated dynamic of user-generated content, they may also deliver inaccurate, biased or discriminatory responses, to the detriment of freedom of speech and users’ rights.

Two models

The report says there are two models to deal with the dilemma of personal data being extracted from online services at no cost, and then used and exchanged to the benefit of providers and traders.

The first is to accept that personal information is a tradeable commodity, while ensuring that data subjects draw some benefit from the use made of their data and enabling them to exercise some control over it.

The second option is to bar suppliers from offering services or benefits in exchange for personal data. This would mean that personal data could be used only when necessary to deliver a service requested by consumers, not as something given in exchange for a different service.

Consumer choice can play an important role, whichever approach is adopted, but effective protection of consumer privacy requires that consumers not be deceived by “design tricks” or “dark patterns” that stealthily induce them to consent to the processing of their data.

Consumer law

Consumer law also has a role to play, as the AI-based processing of consumer data is relevant to the main goals of consumer protection law, such as protection of the weaker party, regulated autonomy, and non-discrimination.

Opportunities of AI

The report also covers the opportunities AI presents. AI can support consumers in protecting themselves from unwanted ads and spam; it can enable consumers to identify cases where unnecessary or excessive data is being collected or where fake and untrustworthy information is provided; it can enable and support consumers and their organisations to detect breaches of the law, assess compliance, and obtain redress. 

Policy recommendations

The report includes some policy recommendations as follows:

a) Consumers should have the option not to be tracked and (micro)-targeted and should have an easy way to express their preferences;

b) The grounds should be specified under which service providers and traders cannot price-discriminate;

c) Consideration should be given as to how discrimination in ad targeting is to be addressed;

d) Guidance should be given concerning what algorithmic practices count as instances of aggressive advertising;

e) Guidance should be given concerning cases in which consumers have a right to contest a decision that undermines their interests;

f) Consumers should be provided with information on whether and for what purposes they are tracked, and on whether they are receiving information for advertising purposes;

g) Protection of consumer privacy requires preventive risk-mitigation measures in combination with collective redress;

h) The development of consumer-friendly AI technologies should be encouraged and supported. Service providers should be prevented from blocking legitimate tools for the exercise of consumer rights;

i) Liability limitations for online providers should also apply to “active” providers, such as search engines, online repositories, and social networks, regardless of whether user-generated content is organised, presented and moderated by humans, by algorithms or both;

j) Limitations on providers’ secondary liability should not apply when providers have failed to adopt reasonable precautionary measures that could have prevented the unlawful behaviour or mitigated its effects. Such a failure may include not having adopted the most effective AI technologies;

k) The availability of AI technologies for detecting unlawful online content and behaviour should be encouraged, in combination with human judgement; and

l) Third-party filtering/moderation should be encouraged so as to broaden access, as should the sharing of datasets (to train AI classifiers) and software, so that both are accessible to small companies as well.

Published: 2020-05-12T12:00:00