Workshop on Systems Challenges in Reliable and Secure Federated Learning

Co-located with ACM SOSP 2021

October 25, 2021

In Virtual Land

Invited Talks  


Dinesh C. Verma, IBM Watson

Title: Enterprise Challenges in Federated AI Solutions

Federated learning provides a significant opportunity to address many real-world challenges that arise when using AI/ML-based solutions. The key challenge in enterprises stems from the difficulty of moving data around. Most businesses have their data spread across multiple locations, the data at each location can grow to be fairly substantial, and moving data between locations may be restricted by regulatory or security concerns, latency, or the expense of the transfer. Federated learning offers a clear advantage for enterprise use-cases, since model training can be sped up by avoiding data migration, and each enterprise site has sufficient and comparable computing capacity to train models. Military coalitions face a very similar situation: data resides at the headquarters or command centers of the different countries that make up the coalition, but the ability to exchange the raw data may be limited. The bulk of current academic research in federated learning, however, is not suitable for these enterprise use-case scenarios. A big stumbling block is that it relies on synchronous training of the AI model at all sites. This synchronization may be feasible in some situations, but it is usually cumbersome and very difficult to attain in practice. To be effective, federated learning must be done asynchronously. Another set of challenges arises because the data schemas, input features, and output labels used within the data space usually differ across sites. Before model training can even begin, one needs to reconcile the information about the data, agree on a common vocabulary, and transform the data so that all parties use a single model with consistent sets of inputs and outputs. In this talk, we will discuss these and other challenges and how we have overcome them when creating real-world solutions. Addressing them requires a set of intensive transformations and modifications of the data available at each site before model fusion can begin. Systems optimized for such transformations, data generation, and data encoding are needed to make federated learning successful in the enterprise. 
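
To make the asynchrony argument concrete, the sketch below shows, in plain Python with NumPy, a fusion server that folds in a site's update whenever it arrives instead of waiting at a global barrier. The staleness-weighting rule and all names here are illustrative assumptions added for this summary, not the speaker's implementation.

    import numpy as np

    class AsyncFusionServer:
        """Toy asynchronous model-fusion server: sites push updates
        whenever local training finishes; there is no global barrier."""

        def __init__(self, model, alpha=0.5):
            self.model = np.asarray(model, dtype=float)  # global parameters
            self.version = 0    # incremented on every fusion step
            self.alpha = alpha  # base mixing rate (assumed hyperparameter)

        def pull(self):
            # A site fetches the current global model and its version.
            return self.model.copy(), self.version

        def push(self, site_model, site_version):
            # Staleness-aware mixing: the older the snapshot a site
            # trained from, the less weight its model receives.
            staleness = self.version - site_version
            weight = self.alpha / (1.0 + staleness)
            self.model = (1.0 - weight) * self.model + weight * np.asarray(site_model)
            self.version += 1

Each site would repeatedly pull (model, version), train locally on its own data, and push the result back; dividing by (1 + staleness) is just one common heuristic for down-weighting stale updates.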

Bio: 

Dinesh C. Verma (IBM Fellow, Fellow of the UK Royal Academy of Engineering, IEEE Fellow) is the CTO of Edge Computing at the IBM T. J. Watson Research Center, Yorktown Heights, New York. In this role, he is responsible for defining the strategy in the area of edge computing for IBM research worldwide. He is a member of the IBM Academy of Technology, has been recognized multiple times as an IBM Master Inventor, and has won several IBM technical awards. He has more than 25 years of professional experience, has authored eleven books and 150+ technical papers, and has been granted 185+ U.S. patents. He has served on various program committees and editorial boards, and has led international research alliances across academia, industry, and government labs for 15 years. His latest book, released at the beginning of October, discusses challenges and solutions in building federated AI solutions for real-world business scenarios. More details about Dinesh can be found at http://ibm.biz/dineshverma 


Reza Shokri, NUS

Title: ML Privacy Meter

Recent inference attacks against machine learning algorithms demonstrate how an adversary with access to a model's parameters or predictions can extract sensitive information about its training data. The results of these attacks on many real-world systems and datasets (e.g., Google's and Amazon's ML-as-a-service platforms, federated learning algorithms, and models trained on sensitive text, medical, location, purchase-history, and image data) show that large models pose a significant risk to data privacy and need to be treated as a form of personal data. Thus, we need carefully designed methodologies and tools to audit the data privacy risk of machine learning in a wide range of applications. This is also highlighted by the European Commission's and the White House's calls for protection of personal data during all phases of deploying AI systems. Recent reports published by the Information Commissioner's Office (ICO) on auditing AI and by the National Institute of Standards and Technology (NIST) on securing applications of artificial intelligence likewise highlight the privacy risk that machine learning models pose to data. The ICO's auditing framework recommends that organizations identify these threats and take measures to minimize the risk. As the ICO's investigation teams will use this framework to assess compliance with data protection laws, organizations must account for and estimate the privacy risks to data through models. To this end, we have developed an open-source tool, named ML Privacy Meter, based on membership inference algorithms, to analyze the privacy risk of machine learning algorithms. For example, ML Privacy Meter can help in a data protection impact assessment (DPIA) by providing a quantitative assessment of the privacy risk of a machine learning model. The tool can generate extensive privacy reports about the aggregate-level and individual-level risk with respect to training data records. It can estimate the amount of information that is revealed through the predictions of a model (when deployed) or its parameters (when shared). Hence, whether query access to the model is provided or the entire model is revealed, the tool can be used to assess the potential threats to training data in both centralized and federated learning settings. In this talk, I will discuss what exactly privacy risk is and present the methodology for quantifying privacy risk in machine learning using the ML Privacy Meter tool. The open-source software is available through privacy-meter.com  
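
As a flavor of the kind of audit the abstract describes, the sketch below scores the simplest membership inference signal, a loss-threshold attack, by its AUC. This is a generic illustration of the underlying idea in plain Python/NumPy, not the ML Privacy Meter API.

    import numpy as np

    def membership_attack_auc(member_losses, nonmember_losses):
        """AUC of a loss-threshold membership inference attack.
        Records with lower loss are guessed to be training members.
        0.5 means no measurable leakage; values near 1.0 mean the
        model's losses strongly separate members from non-members."""
        scores = np.concatenate([-np.asarray(member_losses, float),
                                 -np.asarray(nonmember_losses, float)])
        labels = np.concatenate([np.ones(len(member_losses)),
                                 np.zeros(len(nonmember_losses))])
        # Rank-based AUC (Mann-Whitney U); ties are ignored for brevity.
        order = scores.argsort()
        ranks = np.empty(len(scores))
        ranks[order] = np.arange(1, len(scores) + 1)
        n_pos = labels.sum()
        n_neg = len(labels) - n_pos
        return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

An auditor would compute the model's per-record losses on known training records and on held-out records, then report how far the resulting AUC sits above 0.5.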

Bio: 

Reza Shokri is a NUS Presidential Young Professor of Computer Science. His research focuses on data privacy and trustworthy machine learning. He is a recipient of the IEEE Security and Privacy (S&P) Test-of-Time Award 2021 for his paper on quantifying location privacy. He received the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies in 2018 for his work on analyzing the privacy risks of machine learning models. He received the NUS Early Career Research Award 2019, the VMware Early Career Faculty Award 2021, the Intel Faculty Research Award (Private AI Collaborative Research Institute) 2021-2022, and Google's PDPO Faculty Award 2021. He obtained his PhD from EPFL.  


Quoc Do Le, Huawei

Title: Multi-stakeholder state of Machine Learning

Machine learning is fundamentally a multi-stakeholder computation. Modern machine learning applications require significant computing power and would hence benefit from running on scalable cloud infrastructure. At the same time, however, they need to protect their code, their training data (input), and the produced model (output), all of which represent key assets for their respective owners. The development and operation of such a large system involve multiple stakeholders, including the training data owner, the training code owner, the model owner, the inference data owner, the inference code owner, and the infrastructure owner, e.g., a cloud provider. To perform the machine learning computation, all these stakeholders have to trust each other. This trust is becoming increasingly unmanageable, however, since stakeholders might collude to gain advantages over one another, and system administrators employed by the cloud provider may leak or modify application code and data. How can stakeholders jointly perform machine learning, and unlock the full benefits of AI, without trusting each other? To answer this question, in this talk we will present and demonstrate our solution using SCONE, a shielded execution framework built on modern Trusted Execution Environments (TEEs), and its configuration and secret-management system, CAS.  

Bio: 

Quoc Do Le is a senior research engineer at the Huawei Munich Research Center and a co-founder of Scontain UG. He received his PhD degree from TU Dresden in January 2018; during his PhD, he had a fruitful internship and collaboration with Bell Labs. Prior to joining TU Dresden, he received his Master's degree in computer science from Pohang University of Science and Technology (POSTECH). His research interests include secure privacy-preserving data analytics, confidential computing, approximate computing, and distributed systems.  


Andrea Olgiati, AWS SageMaker

Title: Debugging Federated Learning

This talk will start by describing the overall structure of AWS SageMaker and how it allows ML developers and scientists to create, train, and deploy machine-learning (ML) models in the cloud. It will describe how complex ML pipelines are architected and orchestrated across multiple data sources owned by different teams across organizations, especially in the enterprise world. This creates a complex system that is often brittle and requires constant oversight; we will explain how SageMaker helps control the web of dependencies and allows the owners of these systems to be alerted when unexpected changes happen, whether during feature curation, during model training, or even after model deployment. Against this backdrop, we will describe the challenges faced by those who want to build Federated Learning (FL) systems for enterprise applications. In particular, we will discuss how relying on a stable, well-curated dataset and a homogeneous set of devices and data sources is often an unrealistic assumption, and how debugging these complex pipelines is a fact of life for many of our customers. Applying Federated Learning, a set of techniques designed (among other things) to overcome the lack of trust between parties, in this shaky environment brings a set of unique challenges. How can we overcome good-faith mistakes by data providers? How can we detect them and, ultimately, allow humans to correct them if we do not trust each other? We hope that this talk will spur thought on the subject among academics in the field.

Bio: 

Andrea Olgiati is the Chief Engineer for AWS SageMaker, the most comprehensive ML platform, providing the tools to build, train, and deploy ML models at scale. Andrea is a 6-year veteran of AWS, where he has also worked on data warehousing and computer vision. Prior to AWS, Andrea used to build microchips, but doesn't really like to talk about that anymore. He holds an MSc in CS/EE from Politecnico di Milano, Milan, Italy.  


Neil Gong, Duke University

Title: Secure Federated Learning

Federated learning is an emerging machine learning paradigm that enables many clients (e.g., smartphones, IoT devices, and edge devices) to collaboratively learn a model, with the help of a server, without sharing their raw local data. Due to its communication efficiency and its promise of protecting private or proprietary user data, and in light of emerging privacy regulations such as GDPR, federated learning has become a central playground for innovation. However, due to its distributed nature, federated learning is vulnerable to malicious clients. In this talk, we will discuss local model poisoning attacks on federated learning, in which malicious clients send carefully crafted local models or model updates to the server to corrupt the global model. Moreover, we will discuss our work on building federated learning methods that are secure against a bounded number of malicious clients.  
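
As a minimal illustration of why the aggregation rule matters here, the sketch below contrasts plain averaging with coordinate-wise median aggregation, one well-known Byzantine-robust rule. It is a generic Python/NumPy example, not the specific defense presented in the talk.

    import numpy as np

    def coordinate_wise_median(client_updates):
        """Aggregate client updates by the per-coordinate median.
        A single malicious client can drag the mean arbitrarily far,
        while the median tolerates a bounded fraction of bad clients."""
        return np.median(np.stack(client_updates), axis=0)

    honest = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.1, 0.9])]
    poisoned = honest + [np.array([100.0, -100.0])]  # one crafted update
    print(np.mean(np.stack(poisoned), axis=0))       # ~[25.75, -24.25]: mean corrupted
    print(coordinate_wise_median(poisoned))          # ~[ 1.05,   0.95]: median holds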

Bio: 

Neil Gong is an Assistant Professor in the Department of Electrical and Computer Engineering and the Department of Computer Science (secondary appointment) at Duke University. He is broadly interested in cybersecurity, with a recent focus on the intersections between security, privacy, and machine learning. He has received an NSF CAREER Award, an Army Research Office (ARO) Young Investigator Award, a Rising Star Award from the Association of Chinese Scholars in Computing, an IBM Faculty Award, a Facebook Research Award, and multiple best paper or best paper honorable mention awards. He received a B.E. from the University of Science and Technology of China (USTC) in 2010 and a Ph.D. in Computer Science from the University of California, Berkeley, in 2015.  


Panelists

Moderator: Saurabh Bagchi, Purdue

Bio: 

Saurabh Bagchi is a Professor in the School of Electrical and Computer Engineering and the Department of Computer Science at Purdue University in West Lafayette, Indiana. His research interests are in dependable computing and distributed systems. He is the founding Director of CRISP, a university-wide resilience center at Purdue (2017-present), and a PI of the Army's Artificial Intelligence Innovation Institute (A2I2) (2020-25), which spans nine universities. He is the recipient of the Alexander von Humboldt Research Award (2018), the Adobe Faculty Award (2017, 2020, 2021), the AT&T Labs VURI Award (2016), the Google Faculty Award (2015), and the IBM Faculty Award (2014). He was elected to serve on the IEEE Computer Society's Board of Governors (2022-25; previously 2017-20). He is an IEEE Computer Society Distinguished Visitor (2020), an IEEE Golden Core member (2018), an ACM Distinguished Scientist (2013), and a Distinguished Speaker for ACM (2012). He was selected to be a member of the International Federation for Information Processing (IFIP) in 2020.


Salman Avestimehr, USC

Bio: 

Salman Avestimehr is a Dean's Professor, the inaugural director of the USC-Amazon Center on Secure and Trusted Machine Learning (Trusted AI), and the director of the Information Theory and Machine Learning (vITAL) research lab in the Electrical & Computer Engineering and Computer Science Departments of the University of Southern California. He is also an Amazon Scholar at Alexa AI. He received his M.S. degree in 2005 and his Ph.D. in 2008, both in Electrical Engineering and Computer Science from the University of California, Berkeley. His research interests include information theory, large-scale distributed computing and learning, secure and private computing and learning, and federated learning. Dr. Avestimehr has received a number of awards for his research, including the James L. Massey Research & Teaching Award from the IEEE Information Theory Society, an Information Theory Society and Communication Society Joint Paper Award, a Presidential Early Career Award for Scientists and Engineers (PECASE) from the White House (President Obama), a Young Investigator Program (YIP) award from the U.S. Air Force Office of Scientific Research, a National Science Foundation CAREER award, the David J. Sakrison Memorial Prize, and several best paper awards at conferences. He has been an Associate Editor for the IEEE Transactions on Information Theory and a General Co-Chair of the 2020 International Symposium on Information Theory (ISIT). He is a Fellow of the IEEE.


Ameet Talwalkar, CMU

Bio: 

Ameet Talwalkar is an assistant professor in the Machine Learning Department at CMU. He also co-founded Determined AI and served as its Chief Scientist until its recent acquisition by HPE. His interests are in the field of statistical machine learning. His current work is motivated by the goal of democratizing machine learning, with a focus on topics related to automation, interpretability, and distributed learning. He led the initial development of the MLlib project in Apache Spark, co-authored the textbook 'Foundations of Machine Learning,' and created an award-winning edX MOOC on distributed machine learning. He also helped create the MLSys conference, serving as its inaugural Program Chair in 2018, its General Chair in 2019, and currently as President of the MLSys Board.


Shiva Kasiviswanathan, Amazon

Bio: 

Shiva Kasiviswanathan is a Senior Research Scientist at Amazon working on theoretical aspects of machine learning. His recent research includes algorithms for distributed optimization, differentially private data analysis, and causal inference. Shiva holds a PhD from Pennsylvania State University and, prior to joining Amazon, held research positions at various other industrial labs. Shiva has published more than 50 articles in top computer science conferences, spanning both machine learning venues such as NeurIPS, ICML, and AISTATS, and theoretical CS venues such as STOC, FOCS, and SODA.


Gérôme Bovet, Swiss DoD

Bio: 

Gérôme Bovet is the head of data science for the Swiss Department of Defense, where he leads a research team and a portfolio of about 30 projects. His work focuses on machine learning and deep learning approaches applied to cyber-defense and intelligence use cases, with an emphasis on anomaly detection and adversarial and collaborative learning. He received his Ph.D. in networks and systems from Telecom ParisTech, France, in 2015, and an Executive MBA from the University of Fribourg, Switzerland, in 2021.