Research on Open Source LLM Safety at HICSS 2026

From January 6-9, 2026, TWON researcher Simon Münker presented his paper at the Hawaii International Conference on System Sciences (HICSS), one of the leading international conferences in the field of information systems and digital innovation.

The paper addresses societal risks associated with open source Large Language Models and evaluates the effectiveness of existing safety and guardrail mechanisms. Together with his co-author Fabio Sartori, Simon Münker received the Best Paper Award for this research.

The study systematically examines guardrail vulnerabilities across seven widely used open source LLMs. Using advanced natural language processing classification methods, it identifies recurring patterns of harmful content generation under adversarial prompting. These vulnerabilities were first observed during earlier research activities within the TWON project, where initial experiments revealed persistent weaknesses in model safety mechanisms.
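To picture the measurement loop, here is a minimal sketch of such a probing pipeline: generate responses to adversarial prompts and classify them for harmful content. The model names are illustrative placeholders, and the prompts are omitted; this is our simplified illustration, not the paper's actual code.

```python
# Minimal sketch of an adversarial probing pipeline (illustrative only;
# model names and classifier are placeholders, not necessarily the study's).
from transformers import pipeline

generator = pipeline("text-generation",
                     model="mistralai/Mistral-7B-Instruct-v0.2")  # assumed example model
classifier = pipeline("text-classification",
                      model="facebook/roberta-hate-speech-dynabench-r4-target")

adversarial_prompts = ["..."]  # jailbreak-style prompts, omitted here

for prompt in adversarial_prompts:
    out = generator(prompt, max_new_tokens=128, do_sample=False)
    response = out[0]["generated_text"][len(prompt):]  # strip the echoed prompt
    verdict = classifier(response[:512])[0]            # crude length truncation
    print(f"{verdict['label']} ({verdict['score']:.2f})")
```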

The findings show that several prominent models consistently produce content classified as hateful or offensive. This raises concerns about the potential implications of open source LLMs for democratic discourse and social cohesion. In particular, the results challenge public safety assurances by model developers and point to discrepancies between stated safeguards and observed model behavior.

The research contributes to ongoing discussions on responsible AI development and the governance of AI systems that shape online communication and public discourse. It underlines the need for more robust, transparent and empirically tested safety mechanisms in open source AI ecosystems.

The paper was presented as part of the Digital Democracy Minitrack at HICSS 2026.

From Research to Regulation: Rethinking Online Social Networks // January 28, Berlin

📆 Date: 28 January 2026, 6:00-9:30pm

🎯 Location: Publix, Hermannstraße 90, 12051 Berlin

What do we know from research about the positive and negative effects of online social networks on societies? How can these platforms be designed to protect and strengthen democratic societies and foster a fair online public sphere? Which research is needed, and how can academia work hand in hand with regulators, civil society, and practitioners to jointly create change? These questions gain particular urgency at a time when global geopolitical tensions, disinformation, and the rise of right-wing extremist forces in many democracies worldwide increasingly shape digital infrastructures.

On this evening, we will present the research project “TWON – Twin of Online Social Networks” and discuss its results and implications with policymakers, journalists, and practitioners from civil society. TWON is an EU-funded research project that examines how the design of online platforms influences the quality of democratic online discourse. To this end, an interdisciplinary research team has developed a novel approach to studying online social networks: using a digital twin, simulations are conducted to explore, for example, how different ranking algorithms affect quality of debate, without experimenting on real users. The findings are translated into policy recommendations and discussed in participatory Citizen Labs with citizens across Europe. Members of the consortium include, among others, the Karlsruhe Institute of Technology (KIT), University of Trier, FZI Forschungszentrum Informatik, University of Amsterdam, University of Belgrade, Jožef Stefan Institute, and Robert Koch Institute (RKI).

Furthermore, the event will focus on how online social networks can be researched and shaped at the societal level. In particular, we will discuss promising avenues for future research and evidence-based policymaking, such as data access under the Digital Services Act (DSA), data donation frameworks, and current windows of opportunity in the European and global digital policy debate.

Before and after the stage program, guests are invited to explore interactive project demonstrators, engage with research results at poster stations, and connect informally with project partners from across Europe.

Proposed agenda:

17:30 – Arrival and Demonstrator & Poster Walk

18:00 – Opening: Jonas Fegert, TWON/FZI

18:15 – Impulse: Andrea Lindholz MdB, Vice President of the German Bundestag

18:30 – Keynote: Annette Zimmermann, University of Wisconsin-Madison

18:45 – Presentation of TWON project results

19:10 – Panel discussion

Annette Zimmermann, University of Wisconsin-Madison

Svea Windwehr, D64

Damian Trilling, TWON/University of Amsterdam

19:55 – Audience Q&A

20:10 – Buffet and Demonstrator & Poster Walk

We would be delighted to welcome you in Berlin and look forward to an open and productive discussion with you!

New Publication: Simulating Algorithmic Personalization and Polarization

We are pleased to announce a new peer-reviewed publication by Ljubiša Bojić, co-authored with Velibor Ilić, Veljko Prodanović, and Vuk Vuković, published in Chinese Political Science Review.

The paper introduces the Recommender Systems LLMs Playground (RecSysLLMsP), an agent-based simulation framework designed to study how recommender systems and large language models jointly shape engagement, emotional dynamics, and polarization in social media environments.

The study models a synthetic social media ecosystem with 100 agents grounded in real psychometric and demographic data. Agents interact through feeds with progressively increasing levels of personalization, while content is generated and adapted using large language models. This setup enables controlled observation of how algorithmic personalization affects collective behavior.

Key findings show that moderate personalization maximizes engagement, while full personalization significantly reduces content diversity and amplifies both structural and affective polarization. Network modularity increases sharply as personalization deepens, indicating the emergence of echo-chamber dynamics. At the same time, the simulation demonstrates that LLM-based agents can reproduce realistic patterns of emotional contagion and ideological clustering.
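To make the mechanism concrete, the sketch below runs a toy personalization sweep: agents hold a one-dimensional stance, a parameter alpha biases each reader's exposure toward like-minded authors, and network modularity is measured at the end. This is our simplified illustration under assumed dynamics, not the RecSysLLMsP implementation, which uses LLM-generated content and psychometric agent profiles.

```python
# Toy personalization sweep (our illustration, not RecSysLLMsP itself):
# higher alpha biases who a reader "sees" toward like-minded authors.
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)
N = 100
stance0 = rng.uniform(-1, 1, N)                      # initial attitudes

def simulate(alpha, steps=5000):
    s = stance0.copy()
    g = nx.Graph()
    g.add_nodes_from(range(N))
    for _ in range(steps):
        reader = rng.integers(N)
        uniform = np.full(N, 1.0 / (N - 1))
        uniform[reader] = 0.0
        sim = np.exp(-np.abs(s - s[reader]))         # similarity weights
        sim[reader] = 0.0
        p = (1 - alpha) * uniform + alpha * sim / sim.sum()
        author = rng.choice(N, p=p)                  # personalized exposure
        s[reader] += 0.1 * (s[author] - s[reader])   # attitude assimilation
        g.add_edge(reader, author)
    comms = community.greedy_modularity_communities(g)
    return community.modularity(g, comms)

for alpha in (0.0, 0.5, 1.0):                        # none, moderate, full
    print(f"alpha={alpha}: modularity={simulate(alpha):.3f}")
```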

RecSysLLMsP provides a transparent and reproducible “digital laboratory” for testing recommender system designs and policy interventions before they are deployed at scale. The framework has direct relevance for research in computational social science, responsible AI, platform governance, and democratic communication.

Publication details:
An Agent-Based Simulation of Politicized Topics Using Large Language Models: Algorithmic Personalization and Polarization on Social Media
Chinese Political Science Review
DOI: 10.1007/s41111-025-00326-x

Rethinking AI for Democratic Societies – Joint Event in Ljubljana, October 14

On October 14, partners from the projects AI4Gov, SOLARIS, and TWON gathered in Ljubljana to explore how artificial intelligence can support democracy, transparency, and citizen participation.

Hosted at Hotel City Ljubljana, the event “AI, Democracy and the Public Interest: Building Resilient Digital Futures” brought together experts from research, policy, civil society, and industry. The program included project presentations, followed by a panel discussion featuring contributors such as Federica Russo and Tanja Zdolšek Draksler.

Our researcher Alenka Guček introduced TWON in an expert talk, while Achim Rettinger participated as a panelist in a discussion on the influence of AI technologies on democratic processes and public trust.

The event concluded with networking and informal exchanges on how to build ethical, democratic, and human-centered digital futures.

Discussions throughout the day highlighted the need for greater transparency and accountability in AI systems, stronger cooperation across sectors, and increased public understanding of AI’s societal impact. The full recordings are now available online.

As TWON, we emphasize the importance of strengthening connections between European research and policy efforts on AI and democracy. Our sincere thanks go to the SOLARIS and AI4Gov teams for making this event possible.

Out now: Our new demonstrator tool TWONderland

Over the past weeks, our TWON researcher Fabio Sartori (KIT) and his colleagues have worked on a new demonstrator tool to make the dynamics of Online Social Networks tangible for the broader public. The result: TWONderland!

In our simulation TWONderland, we put the user in the role of lead designer of a new Online Social Network. In a playful and interactive way, users explore how, as the platform designer, they influence interaction on the platform, and how even the tiniest design choices can ripple out to shape behavior, sentiments, and relationships between users – and potentially spark fragmentation and fuel polarization.

What makes this demonstrator unique is its step-by-step walkthrough of the functionalities of Online Social Networks (OSNs). The user starts by assigning moods – from aggressive to calm – to fictional platform users. We then visualize how these fictional users are connected to each other on the platform, and how their moods adapt as they encounter each other's posts. In TWONderland, every OSN user operates within a specific sentiment corridor: they will interact with and adapt to other users only as long as the difference in sentiment is not too large. A very calm user, for instance, would not immediately interact with somebody who is very aggressive. Our demonstrator nevertheless visualizes how the sentiment on a platform can still gradually shift in positive or negative directions. These network dynamics were modelled on the Axelrod model (for further information and technical details, please refer to our Deliverable).
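As an illustration, a minimal sketch of such a sentiment-corridor update might look like this. It is our simplification; the demonstrator's actual Axelrod-based model is described in the Deliverable.

```python
# Simplified sentiment-corridor rule (illustration only, not TWONderland's code).
# Moods are floats in [-1, 1]: -1 = very aggressive, +1 = very calm.
def update_mood(mood: float, post_mood: float,
                corridor: float = 0.5, rate: float = 0.1) -> float:
    """Nudge `mood` toward `post_mood` only if the post lies inside the corridor."""
    if abs(post_mood - mood) <= corridor:
        mood += rate * (post_mood - mood)
    return mood

calm_user = 0.8
print(update_mood(calm_user, -0.9))  # very aggressive post: outside corridor, no change
print(update_mood(calm_user, 0.5))   # mildly calm post: small adaptation toward it
```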

After getting an understanding of these network dynamics, the user is asked to experiment with alternative platform mechanisms that determine which users (and their moods) influence their own fictional platform user. Based on the ranking algorithm the user sets, posts with different moods – again, from aggressive to calm – become visible to their fictional character and influence its mood. From this individual level, the demonstrator moves on to visualizing bigger networks in which many users influence each other based on the chosen platform mechanics. To understand how users influence each other's moods on OSNs, the user can run comparative simulations and explore how polarization is fueled or minimized through the ranking mechanics alone.
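A rough sketch of how such a ranking step could determine which posts reach a user follows; these are assumed mechanics for illustration, and the demonstrator's actual ranking options may differ.

```python
# Toy ranking step (assumed mechanics, not the demonstrator's code): score
# candidate posts and surface the top-k, which then pull the user's mood.
def rank_feed(posts, user_mood, k=2, engagement_bias=0.5):
    # posts: list of (mood, engagement) pairs, mood in [-1, 1], engagement in [0, 1].
    # A higher engagement_bias favors high-engagement (often more aggressive)
    # posts; a lower one favors posts close to the user's own mood.
    def score(post):
        mood, engagement = post
        similarity = 1 - abs(mood - user_mood) / 2   # moods live in [-1, 1]
        return (1 - engagement_bias) * similarity + engagement_bias * engagement
    return sorted(posts, key=score, reverse=True)[:k]

feed = rank_feed([(-0.9, 0.9), (0.7, 0.2), (0.1, 0.5)],
                 user_mood=0.8, engagement_bias=0.8)
print(feed)  # with a strong engagement bias, the aggressive but viral post tops the feed
```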

New paper by TWON researcher Simon Münker: Fingerprinting LLMs through Survey Item Factor Correlation: A Case Study on Humor Style Questionnaire

We are proud to announce that our researcher Simon Münker has published a new paper titled “Fingerprinting LLMs through Survey Item Factor Correlation: A Case Study on Humor Style Questionnaire”. It appears in the Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), and the results will be presented in Shanghai on 5 November.

LLMs increasingly engage with psychological instruments, yet how they represent psychological constructs internally remains poorly understood. Simon Münker introduces a novel approach to “fingerprinting” LLMs through their factor correlation patterns on standardized psychological assessments, in order to deepen our understanding of how LLMs represent such constructs. Using the Humor Style Questionnaire as a case study, he analyzes how six LLMs represent and correlate humor-related constructs compared to human survey participants. The results show that the models exhibit little similarity to human response patterns, whereas subsamples of human participants demonstrate remarkably high internal consistency. Exploratory graph analysis further confirms that no LLM successfully recovers the four constructs of the Humor Style Questionnaire. These findings suggest that, despite advances in natural language capabilities, current LLMs represent psychological constructs in fundamentally different ways than humans do, calling into question the validity of their application as human simulacra.
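The fingerprinting idea can be sketched as follows. This is our illustration of the general approach with made-up factor assignments and random toy data; the paper's exact scoring and exploratory graph analysis go further.

```python
# Sketch of correlation-based fingerprinting (illustrative, not the paper's code).
import numpy as np

# Hypothetical mapping of questionnaire items to the four HSQ humor-style factors.
FACTOR_ITEMS = {"affiliative": [0, 1], "self-enhancing": [2, 3],
                "aggressive": [4, 5], "self-defeating": [6, 7]}

def factor_correlations(responses):
    """responses: (n_respondents, n_items) Likert matrix -> 4x4 correlation matrix."""
    scores = np.column_stack([responses[:, items].mean(axis=1)
                              for items in FACTOR_ITEMS.values()])
    return np.corrcoef(scores, rowvar=False)

def fingerprint_distance(corr_a, corr_b):
    """Frobenius distance between two correlation patterns (low = similar)."""
    return np.linalg.norm(corr_a - corr_b)

# Toy comparison: random "human" vs. "LLM" response pools on a 7-point scale.
rng = np.random.default_rng(1)
humans = rng.integers(1, 8, size=(200, 8)).astype(float)
llm = rng.integers(1, 8, size=(200, 8)).astype(float)
print(fingerprint_distance(factor_correlations(humans), factor_correlations(llm)))
```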

It’s a wrap: CitizenLab 2025 in Chemnitz

On 8 October, we hosted another CitizenLab in the Stadthallenpark in Chemnitz, where we got to speak with citizens about our research on Online Social Networks.

We presented our demonstrators MicroTWONY, MacroTWONY, and TWONderland to interested citizens and had inspiring conversations about the impact of Online Social Networks on society and democracy, as well as about possibilities for regulation and ethical design. We were glad to see how many participants enjoyed experimenting with the demonstrators and exploring how digital dynamics become tangible!

In the evening, we joined an interesting event on memory culture in digital spaces at the NSU Documentation Center with TWON researcher Jonas Fegert, journalist Nhi Le and Susanne Siegert from the channel @keineerinnerungskultur, moderated by Benjamin Fischer. The discussion focused on the opportunities social networks offer for democratic education, especially for younger audiences, and on the limitations imposed by platform mechanisms that tend to amplify hate speech and misinformation.

A day full of dialogue, reflection, and future perspectives – thank you to everybody who was a part of it, and we’re looking forward to the next CitizenLab!

New publication: Can we use automated approaches to measure the quality of online political discussion?

We’re proud to announce that our consortium members Sjoerd Stolwijk, Damian Trilling (both University of Amsterdam) and Simon Münker (Trier University) contributed to a freshly published paper on measuring the debate quality of online political discussions. The paper was released in the “Communication Methods and Measures” journal by Routledge and is open access.

Our researchers review how debate quality has been measured in communication science, and systematically compare 50 automated metrics against numerous manually coded comments. Based on their experiments, they were able to give clear recommendations for how to (not) measure debate quality in terms of interactivity, diversity, rationality, and (in)civility according to Habermas.

Their results show that transformer models and generative AI (such as Llama and GPT models) outperform older methods, yet performance varies, and success depends on the concept being measured, as some (e.g. rationality) remain difficult to capture even for human coders. Which measure should be preferred for future empirical applications likely depends on the objective of the study in question. For some genres, languages, and communication styles (e.g. satire), it is strongly advised to test the accuracy of automated methods against human interpretation beforehand, even if the methods are widely used. Some approaches and implementations performed so poorly that they are not suitable for studying debate quality.
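In practice, validating an automated measure against manual coding boils down to comparisons like the following toy sketch with invented numbers; the paper's actual evaluation spans 50 metrics and several quality dimensions.

```python
# Toy validation of an automated metric against human coding (invented data).
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import f1_score

human = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # manual incivility codes
auto  = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # automated metric, binarized

print("F1 vs. human coding:", round(f1_score(human, auto), 2))
rho, p = spearmanr(human, auto)
print(f"Spearman rho={rho:.2f} (p={p:.3f})")
```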

Zero-shot prompt-based classification @ACL Vienna

Simon Münker recently presented his research on the use of zero-shot, prompt-based classification for analysing political discourse on German Twitter during the European energy crisis at the 2025 Association for Computational Linguistics (ACL) conference in Vienna. He gave a poster presentation and a talk about his newly published paper.

In their paper, Dr. Achim Rettinger, Kai Kugler, and Simon Münker assess advancements in NLP, specifically large foundation models, for automating annotation processes on German Twitter data concerning European crises.

The study explores how recent advances in large language models (LLMs) can reduce the need for time-consuming manual work when labeling and categorizing social media content. Instead of being trained on thousands of labeled examples, LLMs can follow written prompts to classify tweets in a zero-shot setting, i.e. without prior training on the specific task.
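The following sketch shows the zero-shot idea in miniature: the model receives only candidate labels and the text, no training examples. The tweet, labels, and model here are our illustrative stand-ins, not the paper's exact prompts or models (which were instruction-tuned T5 variants).

```python
# Zero-shot classification sketch (illustrative; not the paper's setup).
from transformers import pipeline

# A multilingual NLI model stands in here for the instruction-tuned models.
classifier = pipeline("zero-shot-classification",
                      model="joeddav/xlm-roberta-large-xnli")

tweet = "Die Strompreise ruinieren gerade den Mittelstand."  # invented example
labels = ["criticism of energy policy", "support for energy policy", "unrelated"]
result = classifier(tweet, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 2))  # top label, no training data
```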

The data was collected from German Twitter based on survey questions from the SOSEC project about the energy crisis in winter 2022/23. Two domain experts, both native speakers, annotated a random sample of around 7,000 tweets.

The evaluated models included: a baseline Naive Bayes classifier using token counts; a fine-tuned German-specific BERT transformer (“gbert-base”), further adapted with additional pretraining on domain-specific tweets to improve domain relevance; and instruction-tuned models based on T5, which follow written prompts to classify texts in a zero-shot setting, without domain-specific fine-tuning.
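For reference, the Naive Bayes baseline corresponds to the standard token-count recipe, sketched here with placeholder data; the paper's exact preprocessing may differ.

```python
# Standard token-count Naive Bayes baseline (sketch with placeholder data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_tweets = ["Beispieltweet über Gaspreise", "Beispieltweet über Strom"]  # placeholders
train_labels = ["relevant", "irrelevant"]                                    # toy labels

baseline = make_pipeline(CountVectorizer(), MultinomialNB())
baseline.fit(train_tweets, train_labels)        # in the study: ~7,000 annotated tweets
print(baseline.predict(["Neuer Tweet zur Energiekrise"]))
```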

The results show that prompt-based approaches perform almost as well as fine-tuned BERT models; the study therefore concludes that prompt-based classification can achieve comparable performance without requiring annotated training data.

However, the study also emphasizes limitations, such as biases inherited from (and potentially amplified by) the training data, differences in outcomes depending on the language used (German/English), and cultural nuances.

Automating the analysis of political and social debates raises questions about the role AI can and should play in interpreting sensitive public discourse.

Panel discussion: TWON researcher Jonas Fegert on “Who owns AI? On democratization, control and power relations”

On July 14th, TWON researcher Jonas Fegert (FZI Research Center for Information Technology) was invited as a panelist to the event “Who owns AI? On democratization, control and power relations” hosted by the House for Journalism and the Public Sphere in Berlin. The panel discussion explored how artificial intelligence can be shaped and governed democratically, and what social, political, and technological conditions are needed to make that possible.

At the heart of the discussion were fundamental questions about power structures in the field of AI. Today, artificial intelligence influences many areas of life, from work and education to everyday decision-making. Yet major developments in this space are often driven by large tech corporations without meaningful input from democratic institutions or the public. The panel reflected on what it could mean to democratize AI, who should have a say in its direction and what roles parliaments, research institutions and civil society can play in this process.

The event offered a valuable opportunity to engage with international experts from philosophy, social science and technology ethics. Many thanks to the organizers for the invitation and the insightful discussion.