The regulatory landscape surrounding the use of bot technologies – Dentons
TMT analysis: Dr Kuan Hon, Of Counsel at Dentons, considers the regulatory landscape for the use of bot technologies. The analysis looks at what a bot is, its regulation within the UK, the impact of the EU Artificial Intelligence Act, and potential compliance issues to consider.
This analysis was first published on Lexis®PSL on 12/07/2022.
A ‘bot’, abbreviated from ‘robot’, is the term commonly used for an automated software ‘agent’ that, once programmed and run, performs certain tasks for the individual or computer program that deployed the bot. The bot operates autonomously without requiring further human intervention, often travelling around a network like the internet. Bots are typically used to automate tasks or processes that software can perform more quickly and efficiently than humans, particularly repetitive, iterative, voluminous tasks. Sometimes, ‘bot’ just refers to an online tool that provides output for users based on input, such as ‘legal bots’ for generating contracts.
Software bots should not be confused with physical machines like the humanoid ‘robots’ of Isaac Asimov fame. Nor should bots be equated with artificial intelligence (AI) or machine learning (ML). Some bots do involve use of AI, such as the well-known DoNotPay bot for contesting parking tickets and more, self-described as a ‘robot lawyer’. However, other bots make no use of AI or ML, instead operating from pre-programmed sets of rules that do not change with ‘learning’. In any event, ‘bot’ is not a term of art, and there is no single, definitive definition.
There are many different types of bots. Internet bots that perform functions in relation to websites were the most well-known early bots. Still in use today are ‘web crawlers’ or ‘spiders’, which automatically visit and ‘crawl’ websites, indexing their webpages and/or documents for search engine providers, like the well-known Googlebot. ‘Scraper’ bots ‘scrape’ and download website content for other purposes. For instance, a US court recently ruled that business social media company LinkedIn could not prevent a competitor from scraping LinkedIn users’ publicly available data.
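Well-behaved crawlers consult a site’s robots.txt file before fetching pages, and Python’s standard library includes a parser for those rules. The sketch below is purely illustrative: the robots.txt content is invented, and a real crawler would fetch the live file from the target site rather than embed it.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content, invented for illustration. A real
# crawler would fetch this from https://example.com/robots.txt.
ROBOTS_TXT = """\
User-agent: Googlebot
Disallow: /private/

User-agent: *
Disallow: /
"""

def may_crawl(user_agent: str, url: str) -> bool:
    """Return True if the named bot may fetch the URL under these rules."""
    parser = RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    return parser.can_fetch(user_agent, url)

print(may_crawl("Googlebot", "https://example.com/index.html"))   # True
print(may_crawl("Googlebot", "https://example.com/private/x"))    # False
print(may_crawl("ScraperBot", "https://example.com/index.html"))  # False
```

Note that robots.txt is a voluntary convention: nothing in the protocol technically prevents a scraper from ignoring it, which is partly why disputes such as the LinkedIn case end up in court.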
Increasingly ubiquitous are website ‘chatbots’, intended to answer customer queries without human involvement.
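Such chatbots need not involve any AI or ML at all. A minimal sketch of a purely rule-based chatbot follows; the keywords and canned answers are invented for illustration, and its behaviour never changes unless the rules themselves are edited.

```python
# Keyword rules mapped to canned answers, with a fallback that hands
# off to a human. No AI/ML is involved; this is a fixed rule set.
RULES = {
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refund requests can be made within 30 days of purchase.",
    "delivery": "Standard delivery takes 3-5 working days.",
}

FALLBACK = "Sorry, I don't know. Let me connect you to a human agent."

def reply(message: str) -> str:
    """Return the canned answer for the first matching keyword rule."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK

print(reply("What are your opening hours?"))
print(reply("Do you sell gift cards?"))  # no rule matches -> fallback
```

The distinction matters legally: a chatbot of this kind is unlikely to fall within an ‘AI system’ definition, whereas an ML-driven chatbot may, as discussed in relation to the EU AI Act below.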
Like any tool, bots can be used for good or ill. For example, social media bots that automatically post on Twitter could provide useful information by alerting users to housing law cases or, alternatively, could deliberately spread misinformation and fake news for political or other unsavoury ends. ‘Spambots’ may harvest email addresses from websites and send spam emails, while other spambots post comments on message boards or blogs with links to drive traffic to particular websites. Malicious bots can conduct distributed denial-of-service (DDoS) attacks on websites, or repeatedly attempt to log in to different websites using username/password combinations previously stolen and often available on the dark web.
Yet robotic business process automation (BPA), using ‘transactional bots’ to automate specific processes, is part of a potential ‘hyperautomation’ market that analyst Gartner estimates could reduce operational costs by 30% by 2024.
Bots are not regulated as such in the UK. Bot technologies, like other kinds of technologies, are just tools. Generally, it is the use of a technology that is regulated, for instance, the purposes for which a bot is used and/or how it is used, rather than the technology itself being regulated.
For example, Ticketmaster’s £1.25m fine in 2020 for security breaches was related to its use of a third party chatbot. However, the breaches were not caused by its use of a chatbot as such. Rather, Ticketmaster had integrated a third party’s chatbot script on its own website, including its payment page (which the third party, Inbenta, said should not have been included). Hackers attacking the third party inserted malicious code into its script, thereby obtaining Ticketmaster customers’ card details from its payment page. Here, the breach was caused not by the chatbot use as such, but by the security measures and decisions taken. Any script insecurely used on a payment page, bot-related or not, would raise similar risks.
Bots are specifically mentioned in the Online Safety Bill (OSB) currently undergoing the UK legislative process. This will impose duties on certain service providers hosting user-generated content to, broadly, police the content. Bots (not defined) will be treated as ‘users’ if the bot’s functions include interacting with user-generated content and if the bot is not operated by, or for, the service provider. Service providers’ duties under the OSB will extend to user-generated content created, uploaded or shared by a non-human, third party ‘bot’ or other automated software tool.
Equally, bots can be, and already are being, used by some service providers as a proactive tool for finding and flagging illegal or abusive content on their hosting platforms.
Bots are software applications, so regulations that apply to software and software services generally are relevant to bot use, and the same questions that arise with any software should be considered.
When considering the relevant regulatory landscape, the bot’s intended use or purpose must also be considered, as flagged above. For instance, consider the use of a bot for ticket scalping at UK recreational, sporting or cultural events. The Breaching of Limits on Ticket Sales Regulations 2018 criminalise the use of software (typically bots) to buy more tickets online than the sales limit with a view to reselling them at a profit (in the EU, resale of tickets acquired via bots is also now considered an unfair commercial practice under Directive 2005/29/EC as amended, also known as the EU Unfair Commercial Practices Directive). A bill to similar effect was introduced in the US to ‘crack down on cyber Grinches using “bot” technology to quickly buy up whole inventories of popular holiday toys and resell them to parents at higher prices’.
As another example, the Computer Misuse Act 1990 criminalises unauthorised access to computers whether by bots or humans, including ethical hackers’ bots that seek vulnerabilities, although there, of course, the necessary ‘intention’ or ‘knowledge’ of lack of authorisation would be attributed to the person behind the bot, rather than the bot itself.
Generally, it is important to consider, in the individual context, who provides the particular bot or bot service, who programs or configures it, and accordingly who exactly is responsible and liable for a bot’s actions/inactions, and related matters such as security. That should all be covered contractually as far as possible (obviously statutory obligations cannot be excluded by contract).
It is also important to consider, in context, who is or should be legally responsible for detecting and/or dealing with bots, how responsibility arises, and to address that contractually where feasible.
The proposed EU AI Act is a rare example of lawmakers trying to regulate specific technologies as such, by imposing legislative constraints on the use of ‘artificial intelligence systems’ (AI systems), as defined. If a bot is caught by the definition, it will be regulated as an AI system. If a bot is not classified as an AI system, or at least as part of an AI system, then the EU AI Act will not apply to it. Scope debates are foreseeable: what is a ‘system’, what could be caught as an ‘AI system’, whether specific components are considered part of an ‘AI system’ or not, and indeed, conversely, whether an ‘AI system’ is part of a bot.
The EU AI Act is still being debated, so its final text is not yet known. However, one interesting aspect is that it will require transparency for AI systems used for certain purposes. For example, with AI systems intended to interact with people, like AI-based chatbots, those people must be told that they are interacting with an AI system (unless it is obvious to a reasonable person).
The EU AI Act will also prohibit altogether the marketing or use of certain types of AI systems, so again bot use would be prohibited to the extent an AI system for one of those prohibited purposes is involved, for example, AI chatbots harmfully exploiting vulnerable people. Certain AI systems will be considered ‘high risk’, again based on their purpose rather than whether they involve the use of bots. High-risk AI systems are subject to a long and detailed set of requirements.
It remains to be seen which actors involved with an AI system (providers, users, importers, distributors, product manufacturers) will be responsible and liable for exactly which aspects, although it seems bots will not be considered ‘products’ under the EU AI Act.
Note that the EU AI Act will not apply to the UK, so it is only relevant to UK businesses that have EU operations or customers. Nonetheless, the UK government’s white paper on AI is due in 2022, so we will find out soon about any planned UK AI-related legislation.
If human staff are to be replaced by bots, employment law issues must of course always be considered. Otherwise, governance and compliance issues when using bots to automate business processes are largely the same as when using any other technology to automate business processes.
To reiterate, what is important is not the use of bot technology as such, but what it is to be used for, and why/how. Accordingly, it is not possible to take a one-size-fits-all approach to bot governance. To give just one example, bots process digital information electronically, so if any of that information is personal data, they will always be ‘processing’ personal data, and privacy laws must be complied with, including the UK General Data Protection Regulation, Retained Regulation (EU) 2016/679 (UK GDPR). As discussed above, if bot use will involve an AI system, then the EU AI Act will be relevant once it is effective, including which aspects of the Act apply to the intended bot use and what measures should be taken for compliance.
The above, however, are no different to the general considerations arising in connection with the use of other more traditional types of technology or software.
Interviewed by Diego Salinas for Lexis®PSL
© 2022 Dentons. All rights reserved.