Interview with Jasmina Ploštajner from Danes je nov dan

Danes je nov dan (Slovene for "Today is a new day") is a Slovenian NGO that uses digital technology and the internet to create dialogue on public issues, facilitate participation, and push for transparency and the responsible use of technology. The European AI & Society Fund chatted with Jasmina Ploštajner about taking action for transparency of AI systems, mobilising civil society in Slovenia and her latest book picks.

Jasmina is a co-founder of the organisation, primarily in charge of all things design, while also working on everything else, from organisational development, fundraising, and project management to, more recently, dipping her toes into the waters of AI advocacy and monitoring.

What are the latest developments in artificial intelligence in Slovenia, and what are you focusing on in your own work?

Discussions and excitement around artificial intelligence in Slovenia are in full swing, especially in the research field with the establishment of the International Research Centre on Artificial Intelligence (IRCAI) under the auspices of UNESCO, and in politics with the new Ministry for Digital Transformation. The Government’s National Programme for the Promotion of Development and Use of Artificial Intelligence from 2021 will be implemented over the next few years. We are expecting the rollout of the first significant use cases of AI and automated decision-making (ADM) in the public sector, which will set the ethical and legal foundation for future use. But sadly, most discussions focus on the perspectives of researchers, engineers, and politicians, excluding “non-technical” expert views. We are convinced that if civil society does not organise and devote substantial resources to this topic, we will be left on the margins of the debate and will see potentially dangerous uses of AI and ADM.

To address this issue, we aim to build the capacity of wider Slovenian civil society. We will do this by organising lectures and workshops where organisations from across Europe, as well as other experts in the field, share their experience and advice, and by preparing materials that inform civil society and the general public about the basics of AI, how it can potentially affect human rights, and what tools are available to fight the harms. It will be important to design these activities and materials so that they are accessible to non-technical organisations with expertise in various fields of human rights protection, whose voices will be invaluable in future debates, advocacy activities, and policy-making processes. It is worth emphasising that these organisations are already experts in their respective fields and know best how to protect and support their stakeholders, but they are excluded from discussions around AI because of their lack of basic AI knowledge and their fear of technology.

Danes je nov dan works with civic participation. Are people in Slovenia ready to demand accountability for AI-caused harms?

I think people in general are very sensitive to harm and care about social justice and the protection of human rights. But there are two big problems with this specific topic. As I already mentioned, civil society and the public lack the capacity and knowledge concerning AI and ADM to respond quickly, expertly, and confidently to the challenges of AI and ADM legislation and implementation. I think one of our major challenges is to educate people and raise awareness. There is a lot of media attention on AI and ADM now due to the flood of general-purpose AI systems like ChatGPT, and I see it as an opportunity for us to jump on the bandwagon and try to steer the conversation away from the empty hype and towards an understanding of the effects AI can have on society and our human rights.

The second problem is trickier. People in Slovenia are tired of fighting. The political situation is quite dire, best summed up by the electoral revolving door between the established right (I would honestly call it the far right) and new centrist parties emerging from people’s desperation and their bizarre need for new faces (saviours). All this amounts to constantly fighting for our rights under right-wing governments and then feeling disappointed under centrist ones. Last year, thanks in large part to the efforts of civil society fighting for the right to protest, the upholding of democracy, and respect for human rights, Janez Janša’s government lost the election to the (of course) new Svoboda party. People were ecstatic to finally see a change after democracy had been crumbling under Janša, but it soon turned out that the new government is not keeping its pre-election promises. So we, as part of a coalition of more than 100 civil society organisations, have to keep monitoring the government’s work, prepare advocacy and awareness-raising actions, and keep fighting for the health system, social provisions, secure housing, fair pay, and migrant rights. How do we meaningfully include issues related to AI and ADM in this fight? We’ll definitely do our best.

What challenges have you faced when building a database on AI uses in Slovenia and what would make it easier to do this work?

We are just starting with the data gathering. Luckily, efforts were already made before us, namely the Automating Society Report. The author of its Slovenian part, Lenart J. Kučić, was kind enough to share his experience, insights, and very helpful advice on where to look and whom to ask for information. So we will build on his findings and first send requests for information to the ministries and public authorities that have used AI and ADM in the past or are more likely to use them, which will save us a lot of time. Of course, there is no guarantee that we will get information from all of them. But there are other tactics we can try. For example, the Slovenian police were notorious in the past for using AI solutions that were in some cases illegal and for not properly reporting on them; but they bragged about the tools at an international event, and this is how Lenart uncovered everything.

Of course, we would benefit from more transparency and a database of uses that would be provided by the government and other public institutions. This is something we all strive for: more transparency and accountability. But until then, we can only pressure the decision-makers and politicians with monitoring and advocacy activities, like the AI and ADM databases built by civil society.

You’ve used AI tools in some of your own work. Can you tell us more about it, and how do you approach human rights when deploying AI systems?

AI is a bit of a fashionable term, so most of the technologies we deploy are more commonly associated with machine learning (ML); they were called AI in the past, but they are reasonably well understood and would probably not be given the “magical” label of AI today.

We use machine learning mostly for classification and clustering tasks, especially when analysing large networks of actors (such as our analyses of the Slovenian Twitter sphere or our parliamentary analytics tool Parlameter). The other big use case for us is language technologies, especially lemmatising words (figuring out their “basic” form) to build search indexes and part-of-speech tagging (determining whether a word is used as a verb or a noun, whether it is gendered, whether it is singular or plural, etc.). These, however, are all relatively “boring” AI technologies.
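To make these two use cases more concrete, here are two minimal sketches. Neither reflects Danes je nov dan’s actual tooling, which the interview does not name; the libraries, models, and toy data below are illustrative assumptions only. The first sketch clusters a small network of actors into communities, in the spirit of the Twitter-sphere analyses mentioned above, using networkx’s greedy modularity method.

```python
# Illustrative sketch only: the interview does not name the actual tooling,
# so networkx and the toy data here are assumptions.
import networkx as nx
from networkx.algorithms import community

# A toy interaction network of actors (e.g. accounts that frequently retweet each other).
G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),  # one tightly knit group
    ("x", "y"), ("y", "z"), ("x", "z"),  # another tightly knit group
    ("c", "x"),                          # a weak bridge between the two groups
])

# Greedy modularity maximisation groups actors into communities (clusters).
for i, members in enumerate(community.greedy_modularity_communities(G)):
    print(f"cluster {i}: {sorted(members)}")
```

The second sketch shows lemmatisation and part-of-speech tagging with spaCy and a Slovenian pipeline; both the library and the sl_core_news_sm model are assumptions for illustration (CLASSLA/Stanza is another common open-source choice for Slovene), consistent with the offline, open-source setup described below.

```python
# Illustrative sketch only: spaCy and the sl_core_news_sm model are assumptions.
# Install the model first with: python -m spacy download sl_core_news_sm
import spacy

nlp = spacy.load("sl_core_news_sm")
doc = nlp("Danes je nov dan.")

index = {}  # a tiny lemma-based search index: lemma -> surface forms seen
for token in doc:
    # token.lemma_ is the "basic" form; token.pos_ and token.morph carry the
    # part of speech and features such as gender and number.
    print(token.text, token.lemma_, token.pos_, token.morph)
    if token.is_alpha:
        index.setdefault(token.lemma_.lower(), []).append(token.text)
```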

In all but one case (lemmatisers for search indexes), we never use them as the primary tool of our analysis; we deploy them with humans in the loop, and we always carefully explain the limitations of our approach when presenting results. We never process personal or user data with them, and all of our processing is done “offline” with open-source software on hardware we control.

To sum up: we use boring, reasonably well-understood tech, never experiment on our users, and never have our software impersonate human-like intelligence.

Has it been easy for you to be heard on critical issues in the AI law, the AI Act, in Brussels, while being more than 1000 km away in Ljubljana?

To be honest, our primary focus was never on Brussels. We are putting more effort into trying to influence national positions on the AI Act and, more recently, the Regulation to Prevent and Combat Child Sexual Abuse and the Cyber Resilience Act. There are so many organisations and networks that are doing this work better than us and have years of experience behind them, so at this point we would rather just support their actions with the limited capacities we have and, more importantly, learn from them. And then, hopefully, one day we will be ready to do more advocacy on the EU level.

Is there anything that you are reading or listening to that you would recommend?

Through books, I always try to broaden my horizons and, even more importantly, unlearn the bad patterns and ways of thinking that growing up in a neoliberal capitalist climate forces on everyone. One of the latest books I read was Dipo Faloyin’s Africa Is Not a Country. I cannot recommend it enough! Full of humour and amazing storytelling, it dives deep into imperialism, capitalism, white supremacy, and more, but it also unveils histories I had never heard before, about big, powerful kingdoms and strong, proud people. Currently, I am on the first few pages of Rest Is Resistance by Tricia Hersey, founder of The Nap Ministry. The book teaches us to refuse grind culture and embrace rest as a political action. I think all of us in the movement for social justice would benefit from reading it and incorporating its principles into our work and lives. I hope they will stick with me, haha.
