Speakers


Index:

Invited speakers

Government Support for Cybersecurity

Research to Startup

Industry session

Invited speakers


Serge Egelman, Research Director, University of California, Berkeley

Taking Responsibility for Someone Else’s Code: Studying the Privacy Behaviors of Mobile Apps at Scale

Abstract

Modern software development has embraced the concept of “code reuse,” which is the practice of relying on third-party code to avoid “reinventing the wheel” (and rightly so). While this practice saves developers time and effort, it also creates liabilities: the resulting app may behave in ways that the app developer does not anticipate. This can cause very serious issues for privacy compliance: while an app developer did not write all of the code in their app, they are nonetheless responsible for it. In this talk, I will present research that my group has conducted to automatically examine the privacy behaviors of mobile apps vis-à-vis their compliance with privacy regulations. Using analysis tools that we developed and commercialized (as AppCensus, Inc.), we have performed dynamic analysis on hundreds of thousands of the most popular Android apps to examine what data they access, with whom they share it, and how these practices comport with various privacy regulations, app privacy policies, and platform policies. We find that while potential violations abound, many of the issues appear to be due to the (mis)use of third-party SDKs (i.e., supply chain problems). I will provide an account of the most common types of privacy and security issues that we observe and how app developers can better identify these issues prior to releasing their apps.

Bio

Serge Egelman is the Research Director of the Usable Security and Privacy group at the International Computer Science Institute (ICSI), which is an independent research institute affiliated with the University of California, Berkeley. He is also Chief Scientist and co-founder of AppCensus, Inc., which is a startup that is commercializing his research by performing on-demand privacy analysis of mobile apps for developers, regulators, and watchdog groups. He conducts research to help people make more informed online privacy and security decisions, and is generally interested in consumer protection. This has included improvements to web browser security warnings, authentication on social networking websites, and most recently, privacy on mobile devices. Seven of his research publications have received awards at the ACM CHI conference, which is the top venue for human-computer interaction (HCI) research; his research on privacy on mobile platforms has received the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, the USENIX Security Distinguished Paper Award, the Spanish Data Protection Authority’s Emilio Aced Personal Data Protection Research Award, as well as the CNIL-INRIA Privacy Research Award. His research has been cited in numerous lawsuits and regulatory actions, as well as featured in the New York Times, Washington Post, Wall Street Journal, Wired, CNET, NBC, and CBS. He received his PhD from Carnegie Mellon University and has previously performed research at Xerox PARC, Microsoft, and NIST.


Leyla Bilge, Global Head of Scam Research, Gen Digital

Demystifying Modern Scams: Breaking the Stigma and Building Resilience

Abstract

As online scams evolve in scale and sophistication, traditional defenses are no longer sufficient to protect users from financial and emotional harm. At Gen Digital’s Research Labs, we are pioneering a new approach to digital safety—one that combines deep cybersecurity expertise with domain-trained AI systems capable of detecting fraud before it reaches the user. 

In this talk, I will present the latest advancements in our AI-powered anti-scam technologies, including SMS and email protection, real-time browser and messaging app defenses, and our most recent AI assistant for scam prevention. I will share key findings from our research on scammer behavior, highlight regional trends in AI-enabled scams, and discuss the growing role of AI tools on this topic.

Bio

Leyla Bilge is Director of the Scam Research Labs at Gen. She holds a Ph.D. from Eurecom and Telecom ParisTech on network-based botnet detection. Her interests span most systems security topics, with a special focus on data analysis for cybersecurity, DNS-based malicious URL detection, predictive analytics, cyber insurance, and web privacy.


Herbert Bos, Professor, Vrije Universiteit, Amsterdam

The Art of Being Offensive

Abstract

Until (roughly) the end of the first decade of this millennium, offensive security research was frowned upon by the academic security community. It was very difficult to get attack papers accepted by the “Top 4” security conferences. This has changed. There are now complaints that the bar for offensive work is much lower than for defensive solutions. Be that as it may, offensive security is recognized not just as an essential catalyst for new defensive research, but also as a source of insights into the true nature of computer systems. To give but one example: we now realize that we can no longer assume that the hardware is secure, as the layers of abstraction that we use (and need!) to build complex systems are leaky. Subtle effects in the implementation of the hardware at the lowest level (memory and CPUs) have implications for the security of code running at the highest level of the stack. In this talk, I will present my views on “good” vs “bad” offensive security research (and share some of my own experiences, both good and bad).

Bio

Herbert Bos is a full professor at the Vrije Universiteit Amsterdam and co-leads the VUSec Systems Security research group. He obtained an ERC Starting Grant to work on reverse engineering and an NWO VICI grant to work on vulnerability detection. In 2024, he was awarded an ERC Advanced Grant for research on detecting, analysing and mitigating transient execution attacks (such as Spectre, Meltdown and MDS) and an NWO Gravitation Grant for building a secure foundation for computer systems. Other research interests include OS design, microarchitectural attacks and defenses, fuzzing, exploitation, networking, and dependable systems.

He obtained his Ph.D. from the Cambridge University Computer Laboratory and spent four years at Universiteit Leiden.


Udbhav Tiwari, Vice President Strategy and Global Affairs, Signal

Signal – Resisting the Normalization of Surveillance

Abstract

This talk, “Signal – Resisting the Normalization of Surveillance,” explores Signal’s foundational role as a non-profit dedicated to genuine private communication. Signal prioritizes privacy by design, ensuring that even Signal cannot access user data; this is a technical reality, not just a promise. The open-source, battle-tested Signal Protocol underpins end-to-end encryption for billions, setting an industry standard.

The talk will highlight Signal’s significant growth in Europe, indicative of a societal shift towards valuing privacy over pervasive surveillance. It will also address the asymmetric tussle against tech giants and state overreach, detailing our efforts to counter censorship and legislative threats like Client-Side Scanning.

Finally, it will discuss cutting-edge innovations such as usernames and post-quantum cryptography, emphasizing why Signal is critical infrastructure in an era where AI and “total recall” systems threaten to erase digital privacy.

Bio

Udbhav Tiwari is the VP for Strategy and Global Affairs at Signal. Udbhav’s experience in the technology sector spans both global and regional contexts: he was previously Director for Global Product Policy at Mozilla, with prior roles at Google and the Centre for Internet and Society in India. He has testified before the U.S. Senate Committee on Commerce, Science and Transportation and has been quoted as an expert by CNN, The Guardian, Wired, Financial Times, BBC, and Reuters. Udbhav was previously affiliated with the Carnegie Endowment for International Peace and was named to India Today’s “India Tomorrow” list in 2020.


Despina Spanou, Principal Adviser for Cybersecurity Coordination, European Commission

The European Commission Priorities for Cybersecurity

Abstract

Bio

Despina Spanou is Principal Adviser in the European Commission for Cybersecurity Coordination, including cross-cutting security issues.


She was previously the Head of the Cabinet of the Vice-President of the European Commission, Margaritis Schinas (2019-2024). In this capacity she coordinated the Vice-President’s work on security, migration and asylum, health, skills, education, culture and sports. She also coordinated the Vice-President’s EU Security Union work, ranging from counterterrorism, organised crime and cybersecurity to hybrid threats.


Prior to that, she was Director for Digital Society, Trust and Cybersecurity in the Directorate-General for Communications Networks, Content and Technology (DG CONNECT) of the European Commission. Ms. Spanou was responsible for the European Union’s cybersecurity policy and law, served as a member of the management board of ENISA, and of the Steering Board of the Computer Emergency Response Team for the EU Institutions (CERT-EU). She is a founding member of the Women4Cyber initiative and an advocate for the need for more cybersecurity experts in Europe. She also teaches EU cybersecurity policy at Harvard Kennedy School.


In her 20 years in the European Commission, Ms Spanou has held a number of senior management positions in the areas of Health and Consumer Policy and served as Deputy Head of Cabinet for Commissioners Kyprianou and Vassiliou. Before joining the European Commission, she practised EU competition and trade law with a US law firm for a number of years.


Despina Spanou is a member of the Athens Bar Association and holds a Ph.D. in European law from the University of Cambridge.


Steven J. Murdoch, Professor of Security Engineering, University College London

Memory-safety at Scale: Fifteen Years of the CHERI Project

Abstract

Memory-safety flaws have plagued the field of software engineering for decades and are estimated to be the cause of 70% of security vulnerabilities. Various solutions have been proposed, but have had limited impact due to their difficulty in integrating with legacy code, particularly the large quantities of C and C++ code that form the foundation of much of our computing infrastructure. Another option frequently discussed is to rewrite code in memory-safe languages, such as Rust; however, this would require a vast amount of time and effort, making it infeasible at scale. The CHERI project sets out to address limitations of previous protection systems to provide guaranteed memory safety that can be incrementally deployed, is suitable for securing legacy code, and doesn’t depend on secrets that could be leaked. It does so by implementing and validating minor architectural changes in processor instruction set architectures, which results in substantial improvements to safety with minimal performance loss. It has been implemented on processors ranging from 32-bit RISC-V microcontrollers to datacenter-class ARM CPUs. This talk will introduce CHERI and how it has developed over 15 years, resulting in it being awarded the IEEE Security and Privacy Test of Time award in 2025.

Bio

Steven J. Murdoch is Professor of Security Engineering and head of the Information Security Research Group of University College London, working on payment system security, privacy-enhancing technologies, online safety, and the interaction between computer science and the law. He teaches on the UCL MSc in Information Security. His research interests include authentication/passwords, banking security, anonymous communications, censorship resistance and covert channels. He has worked with the OpenNet Initiative, investigating Internet censorship, and for the Tor Project, on improving the security and usability of the Tor anonymity system.

His current research is on how computer systems can produce evidence to allow fair and efficient dispute resolution. Professor Murdoch was Chief Security Architect at Cronto and, following the acquisition of the technology he developed, took on the role of Distinguished Scientist at OneSpan. He is a member of REPHRAIN, the National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online. He is a director of the Open Rights Group, a UK-based digital campaigning organisation working to protect rights to privacy and free speech online, and is a Fellow of the IET and BCS.



Government Support for Cybersecurity

Senja Nordström, Advisor for EU calls for proposals in cybersecurity research and innovation, NCC-SE at MSB

Bio

Senja works at the Swedish Civil Contingencies Agency, MSB, at the National Cybersecurity Coordination Center (NCC-SE). Senja has extensive experience working with public funding and financial support for research and innovation in various areas such as resilience, cybersecurity, artificial intelligence, and digital solutions, most recently from her work at the Swedish Energy Agency.

Senja holds a Master’s Degree in Computer Science from Blekinge Tekniska Högskola.


Research to Startup

Mathias Ekstedt, Professor, KTH Royal Institute of Technology

Startup lessons-learned from foreseeti

Abstract

This talk will briefly describe the journey of foreseeti, from formation to acquisition. It will touch on the company’s vision and reflect upon success factors and challenges.

Bio

Mathias Ekstedt is a professor at KTH Royal Institute of Technology. His research interests include cybersecurity in combination with software and systems architecture modeling and analysis. Much of the research revolves around developing formalisms for analyzing structural vulnerabilities and simulating attacks in large-scale computer systems. Mathias is the co-founder and director of the KTH Master’s programme in Cybersecurity. He also co-founded foreseeti, a company that developed a software tool for cybersecurity assessment and analysis of IT infrastructures and that was acquired by Google in 2022.


Alejandro Russo, Professor, Chalmers University of Technology

From Whiteboard to Whitepaper to Product: Launching a Cybersecurity Startup from Academic Roots

Bio

Alejandro Russo is a professor at Chalmers University of Technology working at the intersection of functional languages, security, and systems. He is the principal investigator of the SSF-funded Octopi project (secure IoT programming), a recipient of a Google Research Award (2010), and a recipient of several young research grants from Swedish research agencies: Vetenskapsrådet (2011, 2015), STINT Initiation grants (2012, 2014, 2017), and the Barbro Osher foundation (2014). Internationally, Alejandro Russo has had the honor and pleasure of working at prestigious research institutions such as Stanford University, where he was appointed visiting associate professor (2013, 2014-2015). His research ranges from foundational aspects of security to developing tools to secure software.

Alejandro Russo is a co-founder of DPella.


Christian Gehrmann, Professor, Lund University

Bifrost Security – a journey from basic research to a full product offering

Abstract

Creating a new business can take many different roads. In this talk, we will discuss the start-up journey for the company Bifrost Security. Bifrost Security started with PhD project research at Lund University in 2019, and the company was created in 2022. In this talk, we will discuss the background to the research and how we moved from research to a full product offering.

Bio

Christian Gehrmann received the M.Sc. degree in Electronic Engineering and the PhD degree in Information Theory from Lund University, Lund, Sweden, in 1991 and 1997, respectively. He is currently a Full Professor of Computer Security at Lund University and is the head of the division for Security and Networked Systems at the Department of Electrical and Information Technology.

Christian has been active in many industry standardization bodies and made major contributions to the Bluetooth, Trusted Computing Group, and ONVIF (network video) standards. He has been active in the research and development of secure computer and communication systems for more than 30 years. He has numerous scientific publications and patents in the information security area. His research interests include secure systems design, secure execution environments, and security protocols. He is the founder of Gehrmann Trusted ICT AB, co-founder of Bifrost Security AB, and co-founder of VyPr AI AB.


Industry session

John Preuß Mattsson, Ericsson Research

Migrating Telecom to Quantum-Resistant Cryptography on a Global Scale

Abstract

The mobile industry, with its unique characteristics, has been preparing for the transition to quantum-resistant cryptography for many years. As truly global standards, 4G and 5G require algorithms that are universally trusted and secure across all regions. Mobile networks are considered critical infrastructure, heavily regulated, and expected to adhere to government recommendations for migration timelines. However, performance and costs remain high priorities, which differs from national security systems. For many IoT applications, radio is the most limiting resource, making small sizes essential. Hardware like base stations has a long lifecycle, often remaining in service for decades. Mobile networks rely heavily on IETF standards for public-key cryptography, though they have a few unique protocols. 5G and 6G standards will introduce quantum-resistant algorithms in 2027–2028, and 6G will be quantum-resistant by design. Migrating public key infrastructure (PKI) and root-of-trust for firmware and software updates is a top priority. This talk will discuss these challenges and the industry’s plans to overcome them.

Bio

John is an expert in cryptographic algorithms and security protocols at Ericsson Research in Stockholm, Sweden. His work focuses on applied cryptography, security protocols, privacy, IoT security, post-quantum cryptography, and trade compliance. During his almost 20 years at Ericsson, he has worked in many different technology areas and been active in numerous security standardization organizations, including the IETF, IRTF, 3GPP, GSMA, and NIST, where he has significantly influenced cryptography, Internet, and cellular security standards. In addition to designing new protocols, John has also found significant attacks on many algorithms and protocols. John holds an MSc in engineering physics from KTH Royal Institute of Technology, Sweden, and an MSc in business administration and economics from Stockholm University.


Fredrik Strömberg, Head of Research, Amagicom group

A transparent HSM using transparency technology

Abstract

Introduction
It’s best practice to protect sensitive signing keys using a hardware security module (HSM). An HSM allows its users to sign messages, but not extract private keys for later use.

HSMs and transparency technology such as Sigsum complement each other. A transparency-capable HSM can help add transparency to legacy systems, such as UEFI Secure Boot, which uses RSA signatures and is unlikely to support transparency technology anytime soon.

This talk will explain how Tillitis TKey and Tillitis HSM work, and how they can be combined with discoverable signatures using Sigsum.

Tillitis hardware
Tillitis TKey is a radically open-source USB hardware security device that allows running small arbitrary applications in a more secure environment, using measured boot to give each application its own secret. It is based on an iCE40 FPGA with a soft RISC-V core.
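
To make the measured-boot idea concrete, here is a minimal sketch in Go. It is illustrative only: the real TKey uses its own hash-based derivation, not necessarily HMAC-SHA256, but the principle is the same, so combining a device-unique secret with a digest of the loaded application yields a secret that changes whenever the application changes.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// deriveAppSecret illustrates measured boot: the per-application secret is a
// keyed hash over a device-unique secret and a digest of the loaded
// application, so loading a different app yields an unrelated secret.
// (Illustration only; the real TKey derivation differs in its details.)
func deriveAppSecret(deviceSecret, appBinary []byte) []byte {
	appDigest := sha256.Sum256(appBinary)
	mac := hmac.New(sha256.New, deviceSecret)
	mac.Write(appDigest[:])
	return mac.Sum(nil)
}

func main() {
	deviceSecret := []byte("unique-per-device-secret") // provisioned into hardware in practice
	secretA := deriveAppSecret(deviceSecret, []byte("signer app v1"))
	secretB := deriveAppSecret(deviceSecret, []byte("signer app v2"))
	fmt.Printf("app A secret: %x\napp B secret: %x\n", secretA, secretB)
}
```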

Tillitis HSM is a work-in-progress device with higher performance, intended to complement the Tillitis TKey. It builds on ideas from the TKey, USB Armory, and CrypTech HSM (funded by the Internet Society), and like the TKey, it can run arbitrary applications. In addition to an iCE40 FPGA, it features an i.MX6 ARM SoC running a bare-metal Go unikernel and a more powerful ECP5 FPGA for accelerating cryptographic operations.

Transparency apps

1: Bringing transparency to legacy systems.
Verified boot mechanisms such as Intel BootGuard and UEFI Secure Boot use RSA signatures. We can store a private RSA key in an HSM, tied to an app that signs a message using that key only if the data to be signed has already been transparency logged. The HSM then needs to verify a Sigsum signature including an inclusion proof and witness cosignatures.
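
As a rough sketch of that gating logic (hypothetical types and a stubbed-out proof check in Go, not the actual Tillitis or Sigsum code), the HSM-resident app would only release the legacy RSA signature after verifying a transparency proof for the exact data being signed:

```go
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"errors"
	"fmt"
)

// SigsumProof is a stand-in for a real Sigsum proof: a cosigned tree head
// plus a Merkle inclusion proof for the message being signed.
type SigsumProof struct {
	CosignedTreeHead []byte
	InclusionPath    [][]byte
}

// verifyTransparencyProof is a placeholder for the checks the HSM app must
// perform on the real proof format: the log signature, enough witness
// cosignatures, and the inclusion proof for msgDigest.
func verifyTransparencyProof(proof SigsumProof, msgDigest [32]byte) error {
	if len(proof.CosignedTreeHead) == 0 || len(proof.InclusionPath) == 0 {
		return errors.New("proof is missing tree head or inclusion path")
	}
	// ... verify signatures, cosignatures, and the Merkle path here ...
	return nil
}

// signIfLogged releases a legacy RSA signature (as consumed by UEFI Secure
// Boot) only when the data has already been transparency logged.
func signIfLogged(key *rsa.PrivateKey, msg []byte, proof SigsumProof) ([]byte, error) {
	digest := sha256.Sum256(msg)
	if err := verifyTransparencyProof(proof, digest); err != nil {
		return nil, fmt.Errorf("refusing to sign: %w", err)
	}
	return rsa.SignPKCS1v15(rand.Reader, key, crypto.SHA256, digest[:])
}

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	proof := SigsumProof{CosignedTreeHead: []byte("tree head"), InclusionPath: [][]byte{[]byte("node")}}
	sig, err := signIfLogged(key, []byte("firmware image"), proof)
	fmt.Println(len(sig), err)
}
```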

2: Preventing log server key misuse.
We want to reduce the risk of signing split views, be that due to operator mistakes or host compromise. We can store the log’s private key in an HSM tied to an app that records the most recent log state signed and requires a valid consistency proof before signing. The HSM could even help enforce replication by requiring signatures from log mirrors. The mirrors certify that all data corresponding to the new tree head is stored reliably.
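
A hedged sketch of that rule (simplified types, an illustrative key type, and a stubbed consistency check, not Sigsum’s actual log implementation): the HSM app keeps the last signed tree head next to the private key and refuses to sign anything that does not extend it, and the same hook could also demand mirror signatures before signing.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"errors"
	"fmt"
)

// TreeHead is a simplified tree head: the log's size and Merkle root hash.
type TreeHead struct {
	Size     uint64
	RootHash [32]byte
}

// verifyConsistency stands in for a Merkle consistency-proof check showing
// that newTH extends oldTH without rewriting any logged entries.
func verifyConsistency(oldTH, newTH TreeHead, proof [][]byte) error {
	if newTH.Size < oldTH.Size {
		return errors.New("new tree head is smaller than the last signed one")
	}
	// ... real Merkle consistency verification over proof goes here ...
	return nil
}

// LogSigner keeps the log's private key together with the most recently
// signed tree head, so a compromised or misconfigured host cannot obtain
// signatures over two conflicting views of the log.
type LogSigner struct {
	key  ed25519.PrivateKey
	last TreeHead
}

// SignTreeHead signs newTH only if it is consistent with the last signed
// state; signatures from log mirrors could be required here as well, to
// enforce replication before a new tree head is published.
func (s *LogSigner) SignTreeHead(newTH TreeHead, proof [][]byte) ([]byte, error) {
	if err := verifyConsistency(s.last, newTH, proof); err != nil {
		return nil, err
	}
	s.last = newTH
	msg := append([]byte(fmt.Sprintf("size=%d root=", newTH.Size)), newTH.RootHash[:]...)
	return ed25519.Sign(s.key, msg), nil
}

func main() {
	_, key, _ := ed25519.GenerateKey(rand.Reader)
	signer := &LogSigner{key: key}
	sig, err := signer.SignTreeHead(TreeHead{Size: 42}, [][]byte{[]byte("proof")})
	fmt.Println(len(sig), err)
}
```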

3: Preventing witness key misuse.
An HSM app for a witness can operate similarly, but store the most recently signed state for several logs. For each cosignature, the HSM requires a valid consistency proof to the previous state.
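
The witness case is a small variation: the same consistency check, but with one remembered tree head per log. A minimal sketch with illustrative types (not the actual witness protocol):

```go
package main

import (
	"errors"
	"fmt"
)

// TreeHead is a simplified tree head: the log's size and Merkle root hash.
type TreeHead struct {
	Size     uint64
	RootHash [32]byte
}

// Witness remembers, for every log it cosigns (keyed here by a hash of the
// log's public key), the most recent tree head it has cosigned.
type Witness struct {
	lastSigned map[[32]byte]TreeHead
}

// Cosign refuses to cosign a tree head unless it is consistent with the last
// tree head cosigned for that same log.
func (w *Witness) Cosign(logKeyHash [32]byte, newTH TreeHead, proof [][]byte) error {
	old, seen := w.lastSigned[logKeyHash]
	if seen && newTH.Size < old.Size {
		return errors.New("tree head is older than the last cosigned state")
	}
	// ... verify the Merkle consistency proof against old, then cosign ...
	w.lastSigned[logKeyHash] = newTH
	return nil
}

func main() {
	w := &Witness{lastSigned: make(map[[32]byte]TreeHead)}
	var logA [32]byte
	fmt.Println(w.Cosign(logA, TreeHead{Size: 7}, nil))
}
```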

Bio

Fredrik Strömberg is Head of Research at the Amagicom group, consisting of Mullvad VPN, Tillitis and Glasklar Teknik. He is a co-designer of System Transparency, Sigsum, Tillitis TKey, and Tillitis HSM, among other open-source software and hardware projects.


TBA

TBA

Abstract

TBA

Bio

TBA