Tutorials

The following tutorials will be given during the first day of the conference, on Monday, December 2, 2024.

Privacy 101, with Trumpets and Truffles – An Introduction to Differential Privacy and Homomorphic Encryption, and Applications in Federated Learning (morning session)

by Fernando Pérez-González (University of Vigo, Spain)

Safety, Security, and Privacy of Foundation Models (morning session)

by Xinlei He (The Hong Kong University of Science and Technology, Guangzhou, China) and Tianshuo Cong (Tsinghua University, Beijing, China)

Synthetic Realities: Impact, Advancements, and Ethical Considerations (afternoon session)

by Gabriel Bertocco and Anderson Rocha (Artificial Intelligence Lab., Recod.ai, Institute of Computing, Universidade Estadual de Campinas – Unicamp, Brazil)

European Digital Identity Wallet: Opportunities and Security Challenges (afternoon session)

by Amir Sharif, Giada Sciarretta, and Alessandro Tomasi (Fondazione Bruno Kessler, Trento, Italy)

Privacy 101, with Trumpets and Truffles – An Introduction to Differential Privacy and Homomorphic Encryption, and Applications in Federated Learning

In an age where data privacy has become a paramount concern, understanding the mechanisms that protect personal information is essential. This 3-hour tutorial offers a comprehensive introduction to two of the most promising tools in privacy-preserving technologies: Differential Privacy and Homomorphic Encryption, with practical applications in Federated Learning. Designed for individuals who are new to the field, this course provides a structured overview that combines theoretical foundations with practical insights, making complex concepts accessible and relevant. The tutorial begins with an introduction and motivation for why privacy matters and why it is difficult to achieve with seemingly sound approaches like anonymization. Then, it presents differential privacy as a statistical indistinguishability problem, which can be solved through randomized algorithms. We will also discuss the practical challenges of utility and satisfiability that standard DP faces, and introduce approximate DP as a mitigation strategy. Following this, the tutorial introduces lattice-based Homomorphic Encryption. We will discuss why solving approximate problems in lattices is hard, giving rise to the Learning with Errors paradigm. This leads to explaining some cryptographic primitives that allow us to perform mathematical operations on encrypted data. The tutorial concludes with a discussion of the application of these privacy techniques in Federated Learning, highlighting real-world projects such as TRUMPET and TRUFFLES, which demonstrate the benefits of these privacy-preserving technologies.
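
As a taste of the material, the sketch below (our illustration, not part of the tutorial materials) shows the Laplace mechanism in Python: a query whose sensitivity is Δ is released with ε-DP by adding noise drawn from Laplace(Δ/ε).

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon):
        """Release true_value with epsilon-DP by adding Laplace(sensitivity/epsilon) noise."""
        return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

    # A counting query ("how many records satisfy the predicate?") has sensitivity 1,
    # because adding or removing one individual changes the count by at most 1.
    ages = np.array([34, 51, 29, 62, 45])
    true_count = int(np.sum(ages > 40))                        # 3
    noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
    print(f"true count: {true_count}, DP release: {noisy_count:.2f}")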

Schedule

The tutorial will follow this timeline:

  • Introduction and motivation (15 mins)
  • Introduction to differential privacy (DP) (1 hour)
    • Statistical closeness
    • Definition of DP
    • DP as a statistical decision game
    • Approximate DP
    • The Laplace mechanism
    • Other mechanisms (randomized response, exponential mechanism…)
  • Introduction to lattice-based homomorphic encryption (1 hour, 15 mins)
    • Private ‘area calculator’ using RSA
    • Lattices. Hard problems
    • The Learning with Errors problem (see the toy sketch after this schedule)
    • Primitives
    • The gadget decomposition
  • Applications in federated learning (FL) (30 mins)
    • Motivation. Membership inference attacks
    • Application of DP and homomorphic encryption to FL
    • The TRUMPET and TRUFFLES projects
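
For attendees who want a first look at the Learning with Errors (LWE) idea before the session, here is a toy, Regev-style bit encryption in Python (our sketch; the parameters are far too small for any real security). The small noise term e is what makes recovering the secret hard, and ciphertexts add component-wise, which is the seed of homomorphic computation:

    import numpy as np

    rng = np.random.default_rng(0)
    n, q = 16, 3329                      # toy dimension and modulus; not secure
    s = rng.integers(0, q, size=n)       # secret key

    def encrypt(bit):
        """Regev-style symmetric encryption of a single bit."""
        a = rng.integers(0, q, size=n)   # uniformly random public vector
        e = int(rng.integers(-2, 3))     # small noise; without it, Gaussian elimination recovers s
        return a, (a @ s + e + bit * (q // 2)) % q

    def decrypt(ct):
        a, b = ct
        v = (b - a @ s) % q              # leaves bit*(q//2) plus accumulated noise
        return int(q // 4 < v < 3 * q // 4)

    def add(ct1, ct2):
        """Homomorphic addition: add ciphertexts component-wise (noises add too)."""
        return (ct1[0] + ct2[0]) % q, (ct1[1] + ct2[1]) % q

    print(decrypt(encrypt(0)), decrypt(encrypt(1)))   # -> 0 1
    print(decrypt(add(encrypt(1), encrypt(1))))       # 1 + 1 decrypts to 0

Adding two encryptions of 1 decrypts to 0, so homomorphic addition of bits computes XOR; practical schemes must manage the noise that accumulates with each operation, which is where tools such as the gadget decomposition come in.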

Biography

FERNANDO PÉREZ-GONZÁLEZ received the Telecommunication Engineer degree from the University of Santiago, Santiago, Spain, in 1990 and the Ph.D. degree in telecommunications engineering from the University of Vigo, Vigo, Spain, in 1993. He is currently a Professor with the School of Telecommunication Engineering, University of Vigo. From 2007 to 2010, he was a Program Manager of the Spanish National R&D Plan on Electronic and Communication Technologies, Ministry of Science and Innovation. From 2009 to 2011, he held the Prince of Asturias Endowed Chair of Information Science and Technology at The University of New Mexico, Albuquerque, NM, USA. From 2007 to 2014, he was the Executive Director of the Galician Research and Development Center in Advanced Telecommunications (Gradiant). He has been the Principal Investigator of the University of Vigo group, which has participated in several European projects, including CERTIMARK, ECRYPT, REWIND, NIFTY, WITDOM, UNCOVER, and TRUMPET.
He has co-authored over 70 papers in leading international journals, more than 180 peer-reviewed conference papers, and several international patents. His research interests include digital communications, adaptive algorithms, privacy-enhancing technologies, and information forensics and security. He was an Associate Editor of IEEE SIGNAL PROCESSING LETTERS from 2005 to 2009 and of IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY from 2006 to 2010, a Senior Area Editor of IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY from 2019 to 2022, and the Editor-in-Chief of the EURASIP Journal on Information Security from 2017 to 2022. He is a Fellow of the IEEE. He has given tutorials at several international conferences, including IEEE WIFS and IEEE ICIP.

Safety, Security, and Privacy of Foundation Models

The advent of foundation models, such as GPT-4 and CLIP, has revolutionized the field of Artificial Intelligence (AI), enabling unprecedented advancements across various domains, including Natural Language Processing (NLP) and Computer Vision (CV). However, alongside these opportunities come significant challenges related to the safety, security, and privacy of foundation models. For instance, foundation models have been shown to be vulnerable to various malicious attacks that compromise their integrity, confidentiality, and availability. These vulnerabilities hinder the practical deployment of foundation models, especially in security-sensitive scenarios. This tutorial aims to provide a comprehensive overview of the current landscape, focusing on the risks, mitigation strategies, and best practices for ensuring the responsible deployment of foundation models. Participants will gain insights into the potential vulnerabilities of foundation models, including but not limited to adversarial attacks, data privacy issues, and ethical considerations. Specifically, the tutorial will cover state-of-the-art techniques for securing foundation models, methods for ensuring privacy-preserving data usage, and guidelines for promoting safe AI practices. Moreover, the tutorial will delve into the latest advancements in defense methods, exploring how to detect and mitigate existing attacks effectively. The ethical considerations segment will highlight frameworks and guidelines for developing and deploying AI systems that adhere to ethical standards, addressing bias, fairness, and transparency.
Through theoretical discussions and practical demonstrations, attendees will be equipped with the knowledge and tools necessary to address the critical challenges associated with foundation models in their work. Above all, our tutorial will help participants implement robust, secure, and privacy-preserving AI systems in diverse application areas.
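
As one concrete instance of the adversarial attacks mentioned above, the following minimal PyTorch sketch (ours, with a hypothetical stand-in classifier) implements the classic fast gradient sign method (FGSM), which perturbs an input a small step in the direction that most increases the loss:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, label, epsilon=0.03):
        """Fast Gradient Sign Method: one signed-gradient step that increases the loss."""
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), label).backward()
        return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    # Stand-in classifier (hypothetical); any image classifier could take its place.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)          # dummy "image" with pixels in [0, 1]
    label = torch.tensor([3])
    x_adv = fgsm_attack(model, x, label)
    print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())  # prediction may flip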

Schedule

The tutorial will follow this timeline:

  • Introduction and Overview (30 minutes)
    • Welcome and introductions
    • Overview of foundation models and their impact
    • Objectives and structure of the tutorial
  • Session A: Foundation Model Security Risks (45 minutes)
    • Introduction to security risks
    • Techniques for securing foundation models
    • Interactive Q&A session
  • Session B: Foundation Model Privacy Risks (45 minutes)
    • Introduction to privacy risks
    • Methods for ensuring data/model privacy
    • Interactive Q&A session
  • Session C: Foundation Model Safety Risks (45 minutes)
    • Introduction to safety risks
    • Strategies for safeguarding foundation models
    • Interactive Q&A session
  • Conclusion and Wrap-up (15 minutes)
    • Recap of key takeaways
    • Open floor for final questions
    • Closing remarks

Biographies

Dr. XINLEI HE is an assistant professor in the DSA & IoT Thrust, Information Hub, HKUST(GZ). He obtained his Ph.D. from the CISPA Helmholtz Center for Information Security in 2023. His research lies in the domain of trustworthy machine learning, with a special focus on the privacy, security, and accountability issues stemming from machine learning paradigms. He has published over 20 papers in top-tier conferences and journals such as IEEE S&P, ACM CCS, and USENIX Security. He has served as a TPC member of IEEE S&P 2024, ASIACCS 2024, ACSAC 2024, IEEE ICDCS 2024, and ESORICS 2022, and was the recipient of the Norton Labs Graduate Fellowship in 2022. More details are available at https://xinleihe.github.io/.

Dr. TIANSHUO CONG is a Shuimu Postdoctoral Fellow at the Institute for Advanced Study, Tsinghua University. He obtained his Ph.D. degree from Tsinghua University in 2023 and was a visiting Ph.D. student at the CISPA Helmholtz Center for Information Security from 2021 to 2023. His research focuses on security and privacy issues in machine learning systems. He has published several papers in top-tier computer security conferences such as IEEE S&P and ACM CCS. He has served as a TPC member of PETS 2025 and ACSAC 2024, as well as a reviewer for TIFS, TDSC, and other journals. More details are available at https://tianshuocong.github.io/.

Synthetic Realities: Impact, Advancements, and Ethical Considerations

We are amidst a new technological revolution where advancements transcend the mere creation of a virtual landscape; technology now enhances human ingenuity and influences its own evolution. This transformative process profoundly impacts physical realities and alters the landscape of social interactions. The various media crafted through these means, termed synthetic realities, have ignited significant debates and are poised to reshape humanity’s lifestyle and societal fabric. Synthetic realities have become pivotal tools in education, healthcare, commerce, and automation. These immersive media offer unprecedented learning opportunities, allowing students to dynamically explore historical events, scientific concepts, and cultural phenomena. Synthetic realities also facilitate training simulations for medical professionals, refining skills in realistic environments without risking patient safety. However, the proliferation of synthetic realities also introduces significant political and social challenges. These platforms amplify the dissemination of fake news, fueling political polarization and eroding trust in established information sources. Additionally, they threaten democratic processes by enabling manipulation and propaganda, undermining the integrity of elections and public discourse. Moreover, synthetic realities contribute to the spread of scientific denialism, hindering efforts to address pressing global issues such as climate change. Furthermore, concerns regarding individual privacy are heightened as these platforms facilitate invasive surveillance and data exploitation. The rise of synthetic realities also leads to phenomena like nonconsensual fake pornography and scams, jeopardizing individuals’ reputations and financial security.

In this tutorial, we will explore the burgeoning landscape of synthetic realities, their impact, technological advancements, and ethical quandaries. We will discuss the specifics of synthetic media, including deepfakes and their generation techniques, modern AI-empowered multimedia manipulations, (mis)information, and (dis)information. We will also touch upon the imperative need for robust detection and explainable methods to combat the potential misuse of such technologies. We will show the dual-edged nature of synthetic realities and advocate for interdisciplinary research, informed public discourse, and collaborative efforts to harness their benefits while mitigating risks. This tutorial contributes to the discourse on the responsible development and application of artificial intelligence and synthetic media in modern society.
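
To fix ideas on what a detection method looks like in code, here is a minimal, hypothetical sketch (not the tutorial’s hands-on material) of a learned real-versus-synthetic image classifier in PyTorch; a deployed detector would use a much larger backbone trained on labeled corpora of real and generated media:

    import torch
    import torch.nn as nn

    class SyntheticImageDetector(nn.Module):
        """A tiny CNN that scores P(input is synthetic); purely illustrative."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 1)

        def forward(self, x):
            return torch.sigmoid(self.head(self.features(x).flatten(1)))

    detector = SyntheticImageDetector().eval()
    image = torch.rand(1, 3, 224, 224)    # dummy input; the untrained score is meaningless
    print(f"P(synthetic) = {detector(image).item():.2f}")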

Schedule

The tutorial will follow this timeline:

  • Introduction to Synthetic Realities
  • Different forms of creating Synthetic Realities
  • Detection methods
  • Social, Technical and Political Challenges
  • Education and Standardization
  • Practical Session with Detection Methods (hands-on)

Biographies

GABRIEL BERTOCCO is a postdoctoral researcher on Large Language Models (LLMs) and forensics at the Artificial Intelligence Lab., Recod.ai, Institute of Computing, University of Campinas (Unicamp), Brazil. His main areas of interest include artificial intelligence, computer vision, and digital forensics.

ANDERSON ROCHA is a full professor of Artificial Intelligence and Digital Forensics at the Institute of Computing, University of Campinas (Unicamp), Brazil. He is the head of the Artificial Intelligence Lab., Recod.ai, and was the Director of the Institute of Computing for the 2019–2023 term. He is an IEEE Fellow; a Microsoft, Google, and Tan Chin Tuan Faculty Fellow; and an Asia-Pacific Artificial Intelligence Association Fellow. Finally, he is ranked among the top 2% most influential scientists worldwide, according to recent studies from Research.com and Stanford/PLOS ONE.

European Digital Identity Wallet: Opportunities and Security Challenges

The revised eIDAS regulation on electronic IDentification, Authentication and trust Services (eIDAS 2.0) introduces the European Digital Identity Wallet (EUDI Wallet) as the main building block of the future European digital identity infrastructure across Member States. The vision behind this initiative is to offer European citizens and businesses the possibility to securely store and present, through a digital wallet, a variety of credentials related to various aspects of their digital identities (e.g., driving license, qualifications). In this context, the main goal of eIDAS 2.0 is to enhance privacy, for example, by empowering citizens with the capability of selectively disclosing personal data in a controlled way. To this end, a shift from centralized to decentralized identity management paradigms is being debated. In fact, even if the traditional identity management solutions currently in use (e.g., based on protocols like SAML 2.0, OAuth 2.0, and OpenID Connect) reduce the number of credentials users need to remember, since a central identity provider manages users’ digital identities for all federated services, they pose security and privacy risks: (i) centralized providers may track user activity across services; (ii) central data storage increases breach risks; (iii) data might be shared without user knowledge; and (iv) excessive sharing of personal information can lead to tracking and data monetization by services. To address these issues, potential solutions based on decentralized identity management have been proposed that aim to give users control over information sharing, allowing them to reveal only the required attributes. These considerations form the basis of eIDAS 2.0. The purpose of this tutorial is to introduce the WIFS audience to the world of digital identity wallets and the security and privacy challenges they entail.
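
To make the selective-disclosure idea concrete, the following Python sketch works in the spirit of SD-JWT-style salted claim hashes (our illustration; issuer signatures and transport encodings are omitted): the issuer signs only per-claim digests, the holder reveals chosen claims with their salts, and the verifier checks each disclosure against the signed digests.

    import hashlib, json, secrets

    def digest(salt, name, value):
        """Salted hash of a single claim, in the spirit of SD-JWT selective disclosure."""
        return hashlib.sha256(json.dumps([salt, name, value]).encode()).hexdigest()

    # Issuer: the signed credential carries only the digests (the signature itself is
    # omitted here); the (salt, name, value) disclosures go privately to the holder.
    claims = {"family_name": "Rossi", "birth_date": "1990-01-01", "license_class": "B"}
    disclosures = {k: (secrets.token_hex(16), v) for k, v in claims.items()}
    signed_digests = {digest(salt, k, v) for k, (salt, v) in disclosures.items()}

    # Holder: reveal only the license class, keeping all other claims hidden.
    name = "license_class"
    salt, value = disclosures[name]

    # Verifier: recompute the digest and check that the issuer vouched for it.
    assert digest(salt, name, value) in signed_digests
    print(f"verified {name} = {value} without learning the remaining claims")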

Schedule

The tutorial will follow this timeline:

  • Overview of the EUDI Wallet initiative and the evolution of the eIDAS ecosystem (45 minutes)
  • Primer on selective disclosure mechanisms based on cryptographic primitives and revocation mechanisms of verifiable credentials (45 minutes; a toy revocation check follows this schedule)
  • Discussion on overall security and privacy aspects of digital identity wallets (45 minutes)
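
To preview the revocation side, here is an equally small sketch of a bitstring status-list check (our illustration of the general pattern behind status-list-style mechanisms for verifiable credentials): the issuer publishes a compact bit array, each credential carries an index into it, and the verifier treats a set bit as revoked.

    # Issuer: maintain one bit per issued credential (1 = revoked) and publish the list.
    status_list = bytearray(1024)            # covers 8192 credentials

    def revoke(index):
        status_list[index // 8] |= 1 << (index % 8)

    def is_revoked(index):
        return bool(status_list[index // 8] >> (index % 8) & 1)

    # A credential would carry its index (e.g., 42) plus the published list's location.
    revoke(42)
    print(is_revoked(42), is_revoked(43))    # True False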

Biographies

AMIR SHARIF is a researcher in the Security & Trust Research Unit of the Cybersecurity Center at Fondazione Bruno Kessler (FBK), Trento, Italy. He earned his Ph.D. in secure and reliable systems from the Università degli Studi di Genova in 2021. At FBK, he works in the context of a joint laboratory between FBK and the Italian Government Printing Office and Mint (Poligrafico e Zecca dello Stato Italiano, responsible for producing Italian eID cards), whose primary goal is to conduct research and innovation activities in digital identity solutions. In addition, he is a member of the Italian delegation in the eIDAS Expert Group, contributing to the development of the European Digital Identity Architecture and Reference Framework.

GIADA SCIARRETTA is a researcher in the Security & Trust Research Unit of Fondazione Bruno Kessler. She obtained her M.Sc. in mathematics at the University of Trento in 2012 and received her Ph.D. in computer science at the same university in 2018 while working with the Security & Trust unit. She is currently working on different projects related to identity and access management. Her research activities involve the design, security analysis (with informal and formal specifications), and risk assessment of access delegation and single sign-on protocols (e.g., OAuth 2.0 and OpenID Connect), multi-factor authentication (e.g., based on biometrics or eID cards), and fully remote enrollment procedures.

ALESSANDRO TOMASI is Head of the Applied Cryptography Research Unit at FBK. He currently works on privacy-enhancing cryptography for digital identity and verifiable e-voting. He previously worked on TLS compliance assessment and on QRNG modeling and entropy measures. He received his Ph.D. in numerical analysis applied to image segmentation from the University of Sussex in 2009.