Interview with Katarzyna Szymielewicz from Panoptykon


Panoptykon Foundation is a Polish NGO defending human rights in the surveillance society. Its mission is to keep surveillance policies and practices under social control. Panoptykon monitors the legislative process in Poland and at the EU level, undertakes legal interventions (including strategic litigation), works with the media and engages in educational activity.

Katarzyna Szymielewicz co-founded the Panoptykon Foundation and serves as its head of strategy and advocacy. From 2012 to 2020 she also served as vice-president of European Digital Rights. In recent years her contributions to the EU legislative process and public debate have focused on algorithmic accountability, fighting the exploitation of behavioural data and revealing the societal costs of large online platforms’ business model.

How has the recent advancement of AI and the proliferation of generative AI shaped digital surveillance? And why should we be talking about it?

Right now, we have no intelligence showing that large language models (LLMs) are used for surveillance purposes in Poland. Having said that, it seems inevitable that AI systems will be employed by law enforcement to convert voice to text and to analyse and tag messy internet content or CCTV footage. AI models trained for computer vision and pattern recognition will take real-time remote biometrics and drone operations to a new level. LLMs are also very efficient at integrating and organising huge datasets, which means that systems based on them can be employed to merge and analyse data on citizens from different, so far incompatible sources. We should expect these developments sooner rather than later.

From a human rights perspective, it will be challenging to make sure that such operations are carried out within limits set by the law and supervised by independent bodies, in order to prevent abuse. Abuses happening inside “intelligent” systems that function as black boxes will be extremely difficult to detect and prevent. This is why we need strict regulation, requiring essential levels of transparency and oversight, for AI systems employed in law enforcement or justice contexts, before they become the new normal.

In July the European Court of Human Rights ruled that facial-recognition technology breached the rights of a protestor in the Moscow underground, as Russian authorities used the technology to identify and arrest him. What does this judgement mean for the use of facial-recognition technology in other countries across Europe?

The European Court of Human Rights confirmed that privacy is not the only right at stake when facial-recognition technology is employed to discipline citizens. This technology, when used for political objectives, without human rights safeguards and independent oversight, also threatens freedom of assembly and creates a chilling effect on public participation.

This case also shows the urgency of putting legal boundaries on the use of remote surveillance powered by AI, in both national and public security contexts. If the EU fails to do so – by not including relevant provisions in the final version of the AI Act – we will find ourselves in a similar position to Russian activists. Either Member States will define their own rules for law enforcement and intelligence agencies using AI systems, or we will have to fight for such rules by bringing similar cases to European tribunals. Let’s hope we won’t have to go down that path.

You are one of the driving forces behind a collective effort among civil society organisations to enforce the Digital Services Act, a new law setting out responsibilities for online platforms and services across the European Union. Can you tell us more about what needs to happen now for this law to be impactful and to curb the negative effects of social media platforms’ algorithms?

As with every piece of new legislation, consistent enforcement will be everything. With large online platforms in the role of regulated entities, that part won’t be easy. The DSA is not even fully operational and the European Commission already has to face legal challenges from Amazon and Zalando. I am confident that DG Connect, which is responsible for making the DSA operational, has prepared for this scenario. But we should not leave the Commission without support.

Civil society organisations have vast experience when it comes to documenting algorithmic harms, and already have mature ideas on how to curb the negative effects of social media platforms. This is why we are calling on the Commission to establish a structured advisory mechanism for civil society organisations interested in supporting DSA implementation. In particular, we see an opportunity in the obligatory risk assessments, which will be conducted by the companies themselves but then subjected to independent audits and assessed by the Commission.

According to Article 34 of the DSA, very large online platforms (VLOPs) and very large online search engines (VLOSEs) now have to “identify, analyse and assess any systemic risks in the Union stemming from the design or functioning of their service and its related systems, including algorithmic systems, or from the use made of their services.” If this is done properly, we will finally learn something about the causes of the harms observed and experienced by social media users: where these effects are caused or exacerbated by the design of large online platforms, and where we are facing natural patterns in human behaviour.

What are some of the success factors for navigating the international advocacy landscape, such as in the work you mention above, and how can funders be effective in supporting this work?

Civil society’s resources and capacity are no match for industry’s, but we have credibility, independence and legitimacy, which our opponents in the public debate can’t claim. I am looking forward to joint advocacy and campaigning that will not only expose the toxic mechanisms used by large online platforms to monetise our attention at the cost of individual and societal harms, but also push for concrete solutions. We are now looking into the design of social media interfaces and recommendation algorithms to uncover their manipulative and addictive features. The next step will be to say what good looks like and deliver this message to the regulator.

All this is hard work. Not only because we need a mix of expertise covering sophisticated technology, design and social science, but also because we need to speak with one voice to get heard, and sometimes coordination is everything. I see a big role for funders in helping this movement organise, set common objectives and manage limited resources. International funders can also help by bringing in timely expertise from academia and former industry practitioners, which smaller civil society players can’t afford.

This autumn Poland will hold a parliamentary election. What can we expect from Panoptykon in the run-up to the election, and what is at stake in it for the digital rights of people in Poland?

Everything seems possible in Polish politics. Bets on who has a bigger chance of winning shift from the ruling party to the opposition and back almost every month. And for me this is the scariest part of the political game: its volatility. Everybody knows that the decision people take at the ballot box will be an emotional, not a rational, one. So the campaigners will take no prisoners. The campaign will run on fear and hope. The fuel will come from the East, where Mr Putin is always happy to help with new conspiracy theories and information chaos, but also from the internal Polish war, which feeds on threats constructed by spin doctors.

For us on the digital rights front, election campaigns are always important case studies, showing what’s new in the business of manipulation and emotional targeting. I am really curious to see whether and how large online platforms and their political clients will adjust to the transparency standards imposed by the Digital Services Act. Even without the Political Ads Regulation, which will probably take another year to be adopted, we should no longer see targeting based on sensitive characteristics, which was done quietly until the Cambridge Analytica scandal.

The stakes in elections are always high. The next Polish government will shape the new institutions responsible for DSA enforcement (including out-of-court settlement bodies for content-related disputes) and for AI Act implementation. It will also face unresolved problems in law enforcement and the intelligence services, which are not supervised by an independent body and therefore remain subject to political influence. The list of areas calling for urgent reform is longer still, and includes key judicial bodies such as the Constitutional Tribunal, as well as the Data Protection Authority. This last body is deeply politicised, under-resourced and ineffective after years of bad management, while real challenges related to GDPR enforcement are waiting around the corner.

What are you reading or listening to these days that you would like to share with our community?

If you ask me personally, I can’t really afford to read or listen to anything that is not my work. Especially since I started digging into research on algorithmic harms, I keep discovering more and more… and more. It seems that as a network we are producing far more knowledge than we can process as readers 😉 But I asked my team for recommendations, and they say “Severance” is good (and does not really feel that sci-fi). Talking about things that were supposed to be sci-fi but turned out closer to reality than we would have wished, there is of course “Black Mirror”. For those who came to it late, I can especially recommend the old episodes released on Channel 4. And for podcast lovers there is “Your Undivided Attention” by Tristan Harris – probably the most professional production I have come across in our field.
