[DL] [CfP-SI] ACM TIST - Special Issue on Risks and Unintended Harms of Generative AI Systems

Stefano CIRILLO scirillo at unisa.it
Wed Jan 22 14:44:05 CET 2025


ACM Transactions on Intelligent Systems and Technology

*Special Issue on Risks and Unintended Harms of Generative AI Systems*
Guest Editors:
• Stefano Cirillo, University of Salerno, Italy
• Eliana Pastor, Politecnico di Torino, Italy
• Francesco Pierri, Politecnico di Milano, Italy
• Serena Tardelli, Institute of Informatics and Telematics, CNR, Italy
• Mengxiao Zhu, University of Science and Technology of China, China


As large language models (LLMs) rapidly evolve and become increasingly
integrated into diverse applications, it is essential to critically
examine their potential adverse impacts on individuals, society, and the
broader online and offline information ecosystem.

Recent research on generative Artificial Intelligence (AI) highlights
significant concerns about biases, misinformation, and unintended social
impacts. Advanced methods, such as multimodal credibility assessment,
fairness constraints, fact-checking with retrieval-based techniques,
content moderation, and human feedback, show promise in mitigating these
risks but remain imperfect. Studies also reveal broader societal effects,
including potential impacts on economic sectors and the reinforcement of
echo chambers, underscoring the need for comprehensive risk assessment
frameworks.

In addition to these concerns, the spread of increasingly powerful
Text-to-Image, Text-to-Video, and Text-to-Speech generative AI models
capable of generating realistic yet artificial images, videos, and audio
has introduced several new risks. Technologies that produce synthetic
media, including deepfakes and hyper-realistic avatars, have potential
applications in entertainment and education but also pose serious threats
in domains such as privacy, misinformation, and cybersecurity. Despite
advancements in safety filters and prompt moderation, harmful outputs can
still evade these safeguards, creating ethical and legal risks for
individuals and groups. These risks extend beyond individual harm,
potentially undermining public trust in digital media and compromising
democratic processes.

Topics
We invite submissions on a wide range of topics related to generative AI,
including but not limited to:
• Risk assessment in generative AI
• Failure modes and unintended consequences
• Evaluation, mitigation, and moderation strategies
• Impact on information retrieval and recommendation systems
• Ethical and societal implications
• AI-driven misinformation and disinformation campaigns
• Regulatory frameworks for generative AI technologies
• Trustworthiness and reliability of AI-generated content
• Human-AI collaboration and interaction dynamics
• Adverse impacts of LLMs on individuals and society
• Misinformation risks from synthetic media in digital communication
• Cybersecurity threats posed by generative AI technologies
• Frameworks for ensuring responsible and accountable AI system deployment

Important Dates
• Submission deadline: May 31, 2025
• First-round review decisions: August 31, 2025
• Deadline for revision submissions: October 31, 2025
• Notification of final decisions: December 31, 2025
• Tentative publication: February 2026

For questions and further information, please contact Stefano Cirillo (
scirillo at unisa.it) or Serena Tardelli (serena.tardelli at iit.cnr.it).