SemGenAge: 1st Workshop on Semantic Generative Agents on the Web at ESWC 2025
1-2 June 2025 in Portoroz, Slovenia
“I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A “Semantic Web”, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The “intelligent agents” people have touted for ages will finally materialize.”
Nowadays, Large Language Models (LLMs) have materialized and seemingly provide such “intelligent agent” capabilities: they are software systems that can perceive their environment through language and act by generating language with some level of autonomy. With respect to the internet, they are able to analyze data on the web, including communication between people and computers, and they are able to talk to both other machines and humans.

Technologically, the original vision of agents was based on symbolic knowledge representations (like RDF and OWL), agents with planning and deductive reasoning capabilities, and symbolic agent communication languages (like FIPA-ACL) for multi-agent interactions and web service calls (like WSDL and WSCL). This is in stark contrast to how LLMs achieve those capabilities: they are based on knowledge representations machine-learned from textual web data. LLM-agents exhibit statistical and inductive inference capabilities and use natural language for receiving instructions, for communication with humans, and for communication between machines.

While LLMs have attracted huge business interest, they also have fundamental limitations that the original Semantic Web technologies did not have: their behavior is not guaranteed to be correct, controllable, or comprehensible. Furthermore, they are computationally inefficient and come without performance guarantees for many tasks. When these characteristics matter, traditional Semantic Web technologies can be the better choice.

This workshop concentrates on technologies and applications that unite the advantages of both worlds. We particularly invite submissions with one of the following characteristics:
- Agent knowledge representation: Representations of agents’ states or memories learned from web data that are interpretable and analyzable.
- Agent reasoning capabilities: Flexible and human-like agent behavior that is still controllable and interpretable.
- Agent communication: Interactions that combine the flexibility and expressiveness of natural language with the comprehensibility of a formal, interpretable language.
- Interpretable agents for simulating (nonrational) human behavior
- Agents on the (social) web for analyzing communicative behavior
- Platforms for simulating and researching agent communication and platform mechanics
- Recursive AI agents for higher levels of task complexity, adaptivity, and autonomy
- Semantic technologies and artificial intelligence, proposing and discussing novel technological advancements in the area of Neurosymbolic AI, Generative Agents, Web Science and Multiagent Systems.
- Computational Social Sciences, Computational Communication Science, Digital Media Studies, and related fields, using such technologies to research human communicative behavior on social networks and the influence of platform mechanics and bots on online discourse, opinion formation, and (online) behaviour.
- Marketing, customer service and related fields that study (influences on) consumer behaviour by automating customer relationship management.
Program Committee
- Jose Manuel Gomez-Perez, Expert Systems
- John Shawe-Taylor, UCL
- Estevam Hruschka, MegaGon Labs
- Phoebe Sengers, Cornell University
- Natasa Milic-Frayling, Qatar Computing Research Institute / Nottingham University
- Jan Rupnik, Jozef Stefan Institute / Extrakt.ai
- Paul Lukowicz, DFKI Kaiserslautern
- Raphael Troncy, EURECOM
- Marko Tadic, University of Zagreb
- Konstantin Todorov, University of Montpellier
- Jonas Fegert, FZI Forschungszentrum Informatik
- Simon Münker, Trier University
- Michael Mäs, Karlsruhe Institute of Technology
Organizing Committee
Achim Rettinger (Contact Person)
holds the chair for Computational Linguistics at Trier University and is also Director at FZI Forschungszentrum Informatik. He started his research career in 2004 with work on agent communication and computational trust learning in multi-agent systems. Since then, he has worked primarily on machine learning for graph and textual data. Currently, his research team is working on simulating human capabilities and behavior, e.g. for researching opinion formation in social networks.
E-mail: rettinger@uni-trier.de
Damian Trilling
holds the chair of Journalism Studies at Vrije Universiteit Amsterdam, and is also affiliated with the Amsterdam School of Communication Research at University of Amsterdam. He is an expert on computational social science and coordinator of the HORIZON project TWON – Twin of Online Social Networks, which uses simulations involving LLM-powered agents to study the effect of platform mechanics on the quality of democratic debates.
E-mail: d.c.trilling@vu.nl
Marko Grobelnik
is a researcher in the field of Artificial Intelligence (AI). Marko co-leads the Artificial Intelligence Lab at the Jozef Stefan Institute, co-founded the UNESCO International Research Center on AI (IRCAI), and is the CEO of Quintelligence.com. He is co-author of several books, co-founder of several start-ups, and has been involved in over 100 EU-funded research projects in various fields of Artificial Intelligence. His organisational activities include serving as general chair of the LREC 2016 and TheWebConf 2021 conferences. Marko represents Slovenia in the OECD AI committees (AIGO/ONEAI), the Council of Europe Committee on AI (CAHAI/CAI), NATO (DARB), and the Global Partnership on AI (GPAI). In 2016 he became Digital Champion of Slovenia at the European Commission.
E-mail: Marko.Grobelnik@ijs.si
Workshop Format
The proposed workshop is explicitly open to the exchange of ideas from a very diverse set of perspectives and formats. This is reflected in the types of papers we accept (see below). Hence, the workshop will consist of multiple elements that best fit into a full-day workshop:
● Paper presentations + poster session
● Invited talks
● Demo session for agent simulation platforms and agent interaction interfaces
We strive for a mix of these different types of papers and will allocate ample time for discussion and interaction.
● Full papers (up to 12 pages including references): contain original research.
● Short papers (up to 6 pages including references): contain original research in progress.
● Demo papers (up to 10 pages including references): contain descriptions of prototypes, demos or software systems.
● Data papers (up to 10 pages including references): contain descriptions of resources related to the workshop topics, such as datasets, knowledge graphs, corpora, annotation protocols, etc.
● Position papers (up to 10 pages including references): discuss vision statements or research directions.
Call for Papers
A quarter of a century ago, Tim Berners-Lee expressed his vision of the Semantic Web as follows:
“I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A “Semantic Web”, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The “intelligent agents” people have touted for ages will finally materialize.”
Large Language Models (LLMs) are seemingly providing such “intelligent agent” capabilities. With respect to the internet, they are able to analyze data on the web, including communication between people and computers, and they are able to talk to both other machines and humans.
Originally, such agents were envisioned to be based on symbolic knowledge representations, deductive reasoning capabilities, and symbolic agent communication languages. This is in stark contrast to how LLMs achieve those capabilities: they are based on knowledge representations machine-learned from textual web data.
While LLMs have attracted huge business interest, they also have fundamental limitations that the original Semantic Web technologies did not have: their behavior is not guaranteed to be correct, controllable, or comprehensible. Furthermore, they are computationally inefficient and come without performance guarantees for many tasks. When these characteristics matter, traditional Semantic Web technologies can be the better choice.
This workshop concentrates on applications and technologies that unite the advantages of both worlds. We particularly invite submissions with one of the following characteristics:
- Agent knowledge representation: Representations of agents’ states or memories learned from web data that are interpretable and analyzable (see the illustrative sketch after this list).
- Agent reasoning capabilities: Flexible and human-like agent behavior that is still controllable and interpretable.
- Agent communication: Interactions that combine the flexibility and expressiveness of natural language with the comprehensibility of a formal, interpretable language.
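To make the first of these characteristics more concrete, the following minimal sketch (our own illustration, not a prescribed architecture) mirrors an LLM-driven agent's memory as RDF triples using the rdflib library, so that the agent's state stays inspectable and queryable with SPARQL; the namespace, the agent URI, and the remember/recall helpers are hypothetical names chosen only for this example.

    # Illustrative sketch: an LLM-agent whose memory is mirrored as RDF triples,
    # keeping its state interpretable and analyzable. EX, remember() and recall()
    # are made-up names; rdflib is an existing Python library.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/agents/")

    class SymbolicMemoryAgent:
        def __init__(self, agent_id: str):
            self.uri = EX[agent_id]
            self.graph = Graph()
            self.graph.bind("ex", EX)
            self.graph.add((self.uri, RDF.type, EX.GenerativeAgent))

        def remember(self, predicate: str, value: str) -> None:
            # Every observation the (LLM-driven) agent makes is stored as a triple.
            self.graph.add((self.uri, EX[predicate], Literal(value)))

        def recall(self, predicate: str) -> list[str]:
            # The memory can be queried with SPARQL instead of probing model weights.
            result = self.graph.query(
                "SELECT ?v WHERE { ?agent ?p ?v }",
                initBindings={"agent": self.uri, "p": EX[predicate]},
            )
            return [str(row.v) for row in result]

    agent = SymbolicMemoryAgent("agent1")
    agent.remember("observed", "User asked about the ESWC workshop deadlines.")
    print(agent.recall("observed"))

Submissions are of course not required to follow this pattern; it merely illustrates the kind of neural-symbolic combination the workshop targets.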
In this workshop, we are particularly interested in, but not limited to, research on social interactions of any combination of agent(s) and/or human(s) on the web, including the architectures and platforms enabling and influencing such interactions:
- Interpretable agents for simulating (nonrational) human behavior
- Agents on the (social) web for analyzing communicative behavior
- Platforms for simulating and researching agent communication and platform mechanics (see the simulation sketch after this list)
- Recursive AI agents for higher levels of task complexity, adaptivity, and autonomy
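As a second, equally hypothetical sketch, the snippet below shows the skeleton of a platform simulation in which generative agents post in rounds and every interaction is recorded in a structured, analyzable log; generate_reply() is a placeholder for an arbitrary LLM call, and the data model is our own assumption rather than an existing framework.

    # Hypothetical skeleton of a platform simulation for studying agent communication
    # and platform mechanics. generate_reply() stands in for a call to any LLM.
    from dataclasses import dataclass, field
    import random

    @dataclass
    class Post:
        author: str
        text: str
        round: int

    @dataclass
    class Platform:
        feed: list[Post] = field(default_factory=list)

        def timeline(self, k: int = 5) -> list[Post]:
            # The ranking rule is one of the platform mechanics a study could vary.
            return self.feed[-k:]

    def generate_reply(agent: str, visible_posts: list[Post]) -> str:
        # Placeholder for an LLM call conditioned on the agent's persona and feed.
        return f"{agent} responds to {len(visible_posts)} visible posts."

    def simulate(agents: list[str], rounds: int, seed: int = 0) -> Platform:
        random.seed(seed)
        platform = Platform()
        for r in range(rounds):
            for agent in random.sample(agents, len(agents)):
                text = generate_reply(agent, platform.timeline())
                platform.feed.append(Post(author=agent, text=text, round=r))
        return platform

    log = simulate(["alice", "bob", "carol"], rounds=3)
    print(len(log.feed), "interactions logged for analysis")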
We are open to a wide range of different approaches, including not only the presentation of new technologies, but also case studies and applications of current technologies.
Interdisciplinarity
We welcome submissions from different backgrounds. In particular, we invite researchers from the following disciplines:
- Semantic technologies and artificial intelligence, proposing and discussing novel technological advancements in the area of Neurosymbolic AI, Generative Agents, Web Science and Multiagent Systems.
- Computational Social Sciences, Computational Communication Science, Digital Media Studies, and related fields, using such technologies to research human communicative behavior on social networks and the influence of platform mechanics and bots on online discourse, opinion formation, and (online) behaviour.
- Marketing, customer service and related fields that study (influences on) consumer behaviour by automating customer relationship management.
Submissions
The proposed workshop is explicitly open to the exchange of ideas from a very diverse set of perspectives and formats. This is reflected in the types of papers we accept (see below). Hence, the workshop will consist of multiple elements that best fit into a full-day workshop:
- Paper presentations + poster session
- Invited talks
- Demo session for agent simulation platforms and agent interaction interfaces
We strive for a mix of these different types of papers and will allocate ample time for discussion and interaction.
- Full papers (up to 12 pages including references): contain original research.
- Short papers (up to 6 pages including references): contain original research in progress.
- Demo, data, and position papers (up to 10 pages including references): We explicitly welcome papers that go beyond the traditional paper format and focus on aspects other than empirical results. In particular, we welcome demo papers that describe prototypes, demos or software systems; data papers that describe resources related to the workshop topics, such as datasets, knowledge graphs, corpora, annotation protocols, etc.; and position papers that discuss vision statements or research directions. These can take the form of full or short papers.
Submissions must be in English and adhere to the template (see below). Papers should be submitted as PDF files via OpenReview. The review process will be single-blind. Please be aware that at least one author per paper must register for and attend the workshop to present the work, and that ESWC is a 100% in-person conference.
After the conference, we intend to publish the proceedings at CEUR-WS. We therefore only accept submissions that use the new CEUR-ART style. An Overleaf template and a zipped LaTeX template are available; the zip file also contains docx and odt variants of the CEUR-ART style. We only accept submissions as PDF files that use these templates.
The timeline for Workshop Papers is as follows:
- Submission deadline: March 6, 2025
- Notifications: April 3, 2025
- Camera-ready version: April 17, 2025