Navigating the New Frontier: EU Regulations and the Future of AI Integration
Background: Much has already been written about the transformative power expected of AI in the not-too-distant future. Artificial intelligence is now recognised as essential for the digital transformation of society, making it a primary global focus. High-quality digital infrastructure and a robust regulatory framework underpinning this seismic change are critical, especially in the context of data. Data privacy in AI raises particularly nuanced challenges, notably around data-driven cognitive behavioural manipulation of people (e.g., the targeting of vulnerable groups, social scoring, and real-time and remote biometric identification systems), covered in more detail in this article. However, before we delve into the specific challenges and constraints posed by AI, and how regulation could help build the guard rails needed to navigate this vast new domain and its enormous potential, let us spend a little time understanding what AI is at a basic level.
What is Artificial Intelligence (AI)?
- AI refers to systems that display intelligent behaviour by analysing their environment and acting autonomously to achieve specific goals.
- AI is a machine’s ability to display human-like capabilities such as reasoning, learning, planning and creativity.
- AI enables technical systems to perceive their environment, deal with what they perceive, solve problems and act to achieve a specific goal. For example, the system receives data, either already prepared or gathered through its sensors (such as a camera), processes the information and responds.
- AI systems can adapt their behaviour to a certain degree by analysing the effects of previous actions and working autonomously.
Is AI a new concept/development?
Some AI technologies have been around for over 50 years, but advances in computing power, the availability of enormous quantities of data and new algorithms have led to major AI breakthroughs in recent years. Future applications are expected to bring about enormous changes, but AI is already present in our everyday lives.
AI Waves thus far
- Symbolic AI
The first AI wave is best described as early AI techniques known as ‘symbolic AI’. While these solutions can appear dated, they remain very relevant and are still successfully applied in several domains to develop intelligent machines by encoding the knowledge and experience of experts into sets of rules that the machine can execute. This AI is described as symbolic because it uses symbolic reasoning (e.g., if X=Y and Y=Z, then X=Z) to represent and solve problems. Symbolic AI was the primary approach to artificial intelligence from the 1950s to the 1990s. It can be said to ‘keep the human in the loop’ because its decision-making process closely mirrors how human experts make decisions. Symbolic AI is at its best in constrained environments that change little over time, where the rules are strict and the variables unambiguous and quantifiable. One such example is calculating tax liability.
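As a concrete illustration, a symbolic system of the kind used for tax calculation can be written as explicit, human-readable if-then rules. This is a minimal sketch; the bands and rates below are hypothetical values chosen purely for illustration, not any real tax schedule:

```python
def tax_liability(income: float) -> float:
    """Compute tax by applying fixed, expert-authored rules, band by band.

    Each rule is explicit and inspectable, which is what keeps the
    'human in the loop': the logic mirrors how a tax expert reasons.
    """
    bands = [            # (upper bound of band, marginal rate) -- illustrative only
        (10_000, 0.00),
        (40_000, 0.20),
        (float("inf"), 0.40),
    ]
    tax, lower = 0.0, 0.0
    for upper, rate in bands:
        if income > lower:
            # Tax only the slice of income that falls inside this band
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax
```

Because every decision path is an explicit rule, the system's output can be traced and justified step by step, which is exactly the property that suits symbolic AI to constrained, well-specified domains.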
- Machine learning and data-driven artificial intelligence
Machine learning (ML) is a technique that automates an algorithm’s learning process. This differs from first-wave approaches, where performance improves only when humans adjust or add to the expertise coded directly into the algorithm. Although the concepts behind ML are as old as symbolic AI, they were not applied extensively until after the turn of the century, when they drove the field’s current resurgence. In ML, the algorithm usually improves by training itself on data; for this reason, we speak of data-driven AI. Practical applications of these approaches have taken off over the last decade. The methods are not particularly new; the critical factor in recent advances is the massive increase in data availability. The tremendous growth of data-driven AI is, itself, data-driven.
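The contrast with symbolic AI can be sketched in a few lines: instead of a human encoding the rule, the algorithm infers it from example data. This toy gradient-descent fit is an illustrative sketch, not any particular production ML library:

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Learn the parameters of y = w*x + b from examples alone,
    by gradient descent on the mean squared error."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# The training data below was generated by the hidden rule y = 3x + 1;
# no one tells the learner that rule -- it recovers it from the examples.
w, b = fit_line([0, 1, 2, 3, 4], [1, 4, 7, 10, 13])
```

No rule was ever written down; the "expertise" lives in the data, which is why more and better data has been the decisive factor in ML's recent advances.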
AI examples we already see around us:
- Software-based AI: virtual assistants, image analysis software, search engines, speech and face recognition systems
- Embodied and data-driven AI: robots, autonomous cars, drones, the Internet of Things (IoT), artificial neural networks and deep learning, data mining, big data and data ‘in the wild’
Speculative future waves: towards artificial superintelligence?
The AI waves thus far are described as ‘weak’ or ‘narrow’ AI in the sense that they behave intelligently only in domain-specific niches such as chess or face recognition. ‘Strong’ or ‘general’ AI (AGI), on the other hand, is closer to our understanding of human intelligence, referring to algorithms that can exhibit intelligence across a wide range of contexts and problem spaces. If today’s weak AI is already relatively strong, AGI would provide a new paradigm of capability. However, since it does not yet exist, it belongs to the realm of speculative AI.
A second key term from the speculative domain is artificial superintelligence (ASI), which would have higher levels of general intelligence than typical humans. A third is the singularity, the moment when AI becomes sufficiently intelligent and autonomous to generate even more intelligent and autonomous AIs, breaking free from human control and embarking on a process of runaway development.
There is debate about whether these speculative AIs could be achieved by incremental development of existing technologies and techniques. Some experts cite Moore’s law of continual exponential advancement in computing power, suggesting that today’s AI could be deployed to produce the next generation of AI. However, most experts agree that there are fundamental limits to Moore’s law and to the capabilities of the current AI paradigm. Some argue that a paradigm-shifting development could make AGI possible, perhaps even inevitable, while others remain sceptical.
Social & Regulatory Concerns
AI technologies and their associated business practices also present legal, social, ethical and economic challenges, as detailed below.
- Deploying AI to achieve profound social value is plausible, but real-world developments are dominated by less aspirational applications, including a disproportionate focus on chatbots and efficiency tools.
- Today’s AI presents a range of transparency challenges. The first and perhaps most salient is the lack of explainability of ML algorithms: that is, the difficulty of understanding and describing their internal decision-making logic in human terms. This challenge is inherent to ML methods.
- Public opposition is often described as a significant challenge for AI. However, critical voices come from experts and stakeholders more than from concerned citizens. It is possible that AI has not provoked significant visible public opposition because people broadly accept what they understand to be the costs, benefits and uncertainties. Some citizens may acquiesce to development or feel powerless to change it meaningfully.
- ML can be deployed to generate highly realistic fake videos, audio, text and images known as ‘deepfakes’. The availability of data and algorithms makes it increasingly easy and cheap to produce deepfakes, bringing them within reach of individuals with relatively modest skills and resources. Deepfakes are only one side of the problem, as powerful ML-based dissemination platforms can quickly spread these materials. Together, these applications present financial risks, reputational threats and challenges to the decision-making processes of individuals, organisations and broader society.
- There may also be opportunities for AI to help overcome bias. ML algorithms have been applied to look for patterns and anomalies in the past decisions of judges. By highlighting identified biases influenced by politics, race, birthdays, weather and sporting results, such algorithms create opportunities to respond to them. It has also been suggested that systematically applying the same algorithm to a range of cases would ensure that the same decision-making logic is applied consistently. This may be the case with transparent and well-designed symbolic AI systems in specific contexts, but algorithms can also reinforce and scale up the worst excesses of human bias and prejudice. Generally speaking, AI engineers do not deliberately produce prejudiced algorithms, but there are several unintentional mechanisms by which prejudiced algorithms can emerge.
- Just as algorithms have biases and worldviews, they also have values that they continually reproduce and reinforce. Deviation from broadly held social values can lead to opposition and controversy, as seen with some current AI applications such as facial recognition. In response, specific values such as privacy and non-discrimination can be deliberately embedded into technologies ‘by design’.
- Several concerns have been raised about informed consent for data to be stored, processed or shared for particular purposes. For personal data, the General Data Protection Regulation (GDPR) already requires informed consent for collection and processing. Finding meaningful ways of gaining informed consent can be problematic, as illustrated by the routine acceptance of terms and conditions in exchange for access to information or services. To be truly informed, consent requires that individuals fully understand what they are consenting to, so giving it is conditional upon an adequate response to the perceived transparency challenges.
- While armies of robotic humanoid soldiers remain a speculative vision without basis in technical capabilities, AI has several physical combat applications. It is worth noting that military-grade autonomous weapons systems are much more sophisticated and robust than many of the AI applications that citizens encounter daily. Missiles are not guided to their targets in the same way as advertisements are delivered to theirs. While critical military AI applications may incorporate elements of ML, they are better understood as highly advanced expert systems. These technologies tend to be described as autonomous systems, rather than AI, to avoid confusion.
- Knowledge of how AI works can have a demystifying effect, revealing that some fears about AI are unfounded, but also that its true capabilities can be overestimated. If the first challenge of today’s AI is unnecessary underuse, this last one is damaging overuse: using ML for tasks where it does not excel, such as identifying causal relationships and predicting individual outcomes in complex social systems, or for tasks it is incapable of, such as substituting for human companionship in a relationship of empathy and trust.
Regulatory Edicts thus far
It is often suggested that technologies such as AI should not be regulated, as regulation could hamper innovation. However, technology policy is often used to support innovation, and whether policy measures are taken or avoided should be a deliberate and informed strategic choice to achieve a specific objective, rather than received wisdom. The prevailing regulatory and economic context constrains and enables certain forms of AI development.
- Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. The policy context includes primary law, such as the Charter of Fundamental Rights of the European Union, and secondary law, such as the GDPR and the Police Directive (link).
- Bank of England (BOE) Discussion Paper (DP5/22) – Artificial Intelligence and Machine Learning (link).
- Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector (link).
- The OECD (Organisation for Economic Co-operation and Development) AI Principles focus on how governments and other actors can shape a human-centric approach to trustworthy AI (link).
- Internet Information Service Algorithmic Recommendation Management Provisions, effective 1 March 2022, issued by the People’s Republic of China (link).
Guard Rails expected to be built in
Specific options for further shaping the regulatory and economic context of AI development are being considered, but there are also broader policy debates about the overall regulatory approach. These include questions about whether to adopt regulation specifically targeting AI or to apply and update tech-neutral mechanisms, such as directives that apply to all products and services. Similarly, there are institutional debates about whether to set up dedicated committees and agencies for AI or to use existing ones. The EU Better Regulation Guidelines require examining several options, including a baseline of doing nothing. Another broad question concerns where to regulate: at Member State level, at EU level, through other institutions such as the OECD and UN, or via self-regulation by actors in the AI sector. Each approach has advantages and disadvantages, and a balance needs to be found by looking at both specific challenges and the broader picture. The broad approaches and considerations from an EU perspective are listed below for reference.
1. Unacceptable-risk AI systems are considered a threat to people and will be banned. They include:
- Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children,
- Social scoring: classifying people based on behaviour, socioeconomic status or personal characteristics,
- Real-time and remote biometric identification systems, such as facial recognition.
Some exceptions may be allowed: for instance, “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed for the prosecution of serious crimes, but only after court approval.
2. High-risk AI systems, those that negatively affect safety or fundamental rights, will be divided into two categories:
- AI systems used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.
- AI systems falling into eight specific areas, which will have to be registered in an EU database:
- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Assistance in legal interpretation and application of the law.
All high-risk AI systems will be assessed before being put on the market and throughout their lifecycle.
3. Generative AI, such as ChatGPT, would have to comply with transparency requirements:
- Disclosing that Al generated the content
- Designing the model to prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training
4. Limited-risk AI systems should comply with minimal transparency requirements that allow users to make informed decisions. Users should be made aware when they are interacting with AI; after interacting with an application, they can then decide whether to continue using it. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.
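The four tiers above can be summarised as a simple data structure. This is a hypothetical sketch of the categorisation described in this article, not the Act's legal test (the actual classification rules and obligations are set out in the Regulation itself); the example systems and obligation strings are illustrative assumptions:

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers discussed above, paired with a shorthand obligation."""
    UNACCEPTABLE = "banned"
    HIGH = "assessed before market entry and throughout lifecycle"
    GENERATIVE = "transparency: disclose AI content, prevent illegal output, summarise copyrighted training data"
    LIMITED = "minimal transparency: users made aware they are interacting with AI"

# Illustrative examples, drawn from the categories listed in this article
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool for employment": RiskTier.HIGH,
    "general-purpose text generator": RiskTier.GENERATIVE,
    "deepfake image filter": RiskTier.LIMITED,
}

def obligation(system: str) -> str:
    """Look up the shorthand obligation for a known example system."""
    return EXAMPLES[system].value
```

The point of the sketch is simply that the framework is tier-based: a system's obligations follow from which category it falls into, with the heaviest duties attaching to the highest-risk uses.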
Conclusion
The scope and breadth of the utility of AI, especially generative AI in 2024 and beyond, will be truly transformational and will result in a seismic shift across industries and sectors. However, to unleash the true potential of AI, significant thought leadership is needed to tackle the various practical challenges and constraints, and to build the consensus and empathy needed to secure the buy-in of the wider population and of regulators across the globe. The ethical and data-privacy nuances also need to be well thought through and enshrined within regulations and edicts that build in the relevant guard rails while incentivising path-breaking solutions and innovations in this space. The overall consensus is that this journey is just getting started: we are at the early stages of this generation’s “gold rush”, and most likely the next generation’s too.
References:
- EU AI act: first regulation on artificial intelligence (link)
- EU legislation in progress – Artificial Intelligence Act (link)