
Interview | Cybersecurity

The Hague takes charge to ensure AI is applied at the service of safe, just and inclusive societies

In recent years The Hague has invited international leaders to the city to discuss the central question: how can we make sure that AI is used only at the service of safe, just and inclusive societies?

3 min read · 24 Nov

Public safety is vital for the functioning of societies. It is an important condition for health and wellbeing, personal development, creativity and prosperity. In today's digital world, sensors and Artificial Intelligence (AI) are increasingly implemented in public systems to enhance public safety. But as these technologies continue to advance into our systems, communities and lives, their speed of development surpasses our ability to control their consequences.

This creates an urgent need to take charge and reflect on the purposeful use of the technology. This is particularly relevant in the field of security and defence, where the possibilities provided by AI go beyond what is generally considered morally acceptable, let alone socially desirable.


Recent examples that demonstrate the need for thorough consideration before implementing AI include privacy breaches by surveillance systems, data loss, remote warfare and digital espionage. To strike the needed balance between public safety, personal privacy and ethical concerns, The Hague in the Netherlands has provided a central stage for the subject of human-centred AI. In recent years, working groups and real-life experimentation sites have been established and conferences have been organised, inviting international leaders to discuss the central question: how can we make sure that AI is used only at the service of safe, just and inclusive societies?

“In essence, it was only natural that the Dutch government raised concerns about the ethical side of technology and feels the urgency to take that discussion to the international arena,” says Joris den Bruinen, chair of the working group Security, Peace & Justice of the Netherlands AI Coalition (NLAIC) and director of Security Delta (HSD).

“For centuries, international law and policy have been shaped here in The Hague, and I believe the international community also appreciates the leadership of the city on this subject.”

Joris den Bruinen

In the global context, where the US leaves AI applications almost entirely to the commercial market while in China the government controls virtually all AI applications and uses them to concentrate and enforce power, Europe takes a more moderate approach. The Netherlands’ focus on human-centred AI, especially in the security and defence industries, fits well within this context.


To advance the development of human-centred AI, in early 2022 the Dutch government announced an investment of nearly €11 million in five ELSA Labs: projects focusing on applications of human-centred AI. ELSA is short for Ethical, Legal and Social Aspects, and the labs are based on a scientific concept that involves collaboration between government representatives, academic researchers and experts from the business community, as well as civil society organisations and citizens.

“The Dutch have been very effective in organising innovation in this pragmatic way throughout history,” Joris explains. “This ‘extended triple helix’ approach is similar to how we successfully managed water centuries ago.”

In order to keep social interests at the centre of their efforts, the concepts developed in the ELSA Labs are put to the test in public environments and involve citizen stakeholders. That pragmatic approach is also why the participation of the different parties is necessary: academic researchers make sure the most advanced knowledge, technology and processes are applied, the government can make public space and permits available, and companies can turn the concepts into commercially viable and scalable services.

Joris: “With the ELSA labs we have a unique Dutch concept that is already gaining international traction. By applying double-loop learning as well as security- and privacy-by-design principles, committing to high ethical standards, and working with open and transparent procedures and communication, we make sure that we benefit both socially and economically from the right use of AI.”


Living Lab Scheveningen, which works according to the ELSA principles and is indirectly affiliated with the ELSA Labs, is an example of such a public environment. In this boulevard area, a digital crowd management tool was studied in 2022. The tool is a 3D dashboard displaying a digital map with current crowd congestion, but it also predicts where crowding tends to happen. This enables the municipality, the police, emergency services and event organisations to anticipate developments better and faster.

The 3D dashboard works by combining a number of anonymous data sources. For example, the system counts the number of people on the boulevard and looks at information from public transport, parking data, weather forecasts and traffic counts. Here too, ethical and legal concerns are weighed against the safety benefits of surveillance technology. On top of that, policies for transparency are being developed to ensure accountability and responsibility for technology applied in public spaces.
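The article does not describe the dashboard's internals, but as an illustration, a minimal sketch of this kind of data fusion could look like the snippet below: a simple supervised model trained on anonymised, aggregated counts to forecast near-term crowding. All feature names, values and the model choice are assumptions made for the sake of a runnable example, not a description of the Living Lab Scheveningen system.

    # Hypothetical sketch: forecasting crowd density in a boulevard zone by
    # combining anonymised, aggregated data sources of the kind mentioned in
    # the article (people counts, public transport, parking, weather, traffic).
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(seed=0)

    # Synthetic historical observations: one row per 15-minute interval.
    # Columns: people_counted, transit_arrivals, parking_occupancy_pct,
    #          temperature_c, traffic_count
    n = 500
    X = np.column_stack([
        rng.integers(0, 2000, n),   # anonymous people counts on the boulevard
        rng.integers(0, 40, n),     # scheduled public transport arrivals
        rng.uniform(0, 100, n),     # parking occupancy (%)
        rng.uniform(5, 30, n),      # forecast temperature (°C)
        rng.integers(0, 500, n),    # vehicle traffic counts
    ])
    # Target: observed crowd density 30 minutes later (people per 100 m²),
    # generated synthetically here so the example runs on its own.
    y = (0.02 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2]
         + 0.8 * X[:, 3] + 0.01 * X[:, 4] + rng.normal(0, 2, n))

    # Train a simple model that predicts where crowding tends to happen next.
    model = GradientBoostingRegressor().fit(X, y)

    # Current aggregated readings for one zone -> short-term crowding forecast.
    current = np.array([[1500, 25, 85.0, 24.0, 320]])
    print(f"Predicted crowd density in 30 min: {model.predict(current)[0]:.1f}")

In practice, such a tool would also need the safeguards the article describes: the inputs remain aggregated and anonymous, and the outputs feed a dashboard for the municipality and emergency services rather than tracking individuals.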

“Safe societies require a certain degree of trust in the government. If we don’t safeguard accountability over surveillance technology, that trust is inevitably at stake,” explains Joris.

The findings of this study are currently being evaluated and will result in a scalable tool and hands-on guidelines for use by all Dutch municipalities and emergency services. “We see the urgency of these tools in disasters such as, most recently, at the Halloween celebrations in Seoul, South Korea. Unfortunately, these disasters still happen too often. That is why we are prioritising the development of this tool and guidelines, to help prevent such disasters while warranting secure data storage, retention, sharing, application and privacy.”

“Ultimately, the technology and guidelines that are developed here are relevant and useful for many other countries that are concerned with the ethical use of technology,” explains Joris. “What is developed here in The Hague can benefit citizens around the world.”

Get in touch!

Want to learn more about how Dutch security policy is influenced by human-centred AI? We can put you in touch with experts and policy makers who can share many more examples and in-depth stories.

