Advances and Open Problems in Federated Learning


    Opening a big placeholder post here; I'll come back and fill it in when I have time. This is the big survey I've been longing to work through (it's an absolute beast)!

    Disclaimer: the original text is the paper named in the title. If there is any infringement, please contact the author and this post will be withdrawn.

    Project repository: https://github.com/open-intelligence/federated-learning-chinese

    See the project repository for the full content; everyone is welcome to raise questions on the project's issue tracker!

    Abstract

      Federated learning (FL) is a machine learning setting in which many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the coordination of a central server (e.g., a service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and data minimization, and can mitigate many of the systemic privacy risks and costs that result from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
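
      To make the setting described in the abstract concrete, here is a minimal federated averaging (FedAvg) sketch in Python/NumPy. It is only an illustration of the training loop outlined above: the linear model, the synthetic non-IID client datasets, and the names local_update and federated_averaging are assumptions made for this example, not code from the paper or from the translation project.

      ```python
      # Minimal FedAvg sketch with NumPy (illustrative assumptions only).
      import numpy as np

      def local_update(w, X, y, lr=0.1, epochs=5):
          """One client's local SGD on its private data; only the updated weights leave the client."""
          w = w.copy()
          for _ in range(epochs):
              grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error for a linear model
              w -= lr * grad
          return w

      def federated_averaging(clients, rounds=20, dim=3):
          """Server loop: broadcast the global model, collect client updates, average them."""
          w_global = np.zeros(dim)
          for _ in range(rounds):
              updates, sizes = [], []
              for X, y in clients:  # in a real deployment only a sampled subset participates each round
                  updates.append(local_update(w_global, X, y))
                  sizes.append(len(y))
              # weight each client's returned model by its local dataset size
              w_global = np.average(updates, axis=0, weights=sizes)
          return w_global

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          true_w = np.array([1.0, -2.0, 0.5])
          # three clients with differently shifted (non-IID) feature distributions
          clients = []
          for shift in (0.0, 1.0, -1.0):
              X = rng.normal(loc=shift, size=(100, 3))
              y = X @ true_w + 0.01 * rng.normal(size=100)
              clients.append((X, y))
          print(federated_averaging(clients))
      ```

      Each round, the server broadcasts the current global weights, every client runs a few local epochs on its own data, and the server averages the returned weights in proportion to local dataset size; only model parameters, never raw data, leave the clients.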

    Contents

    1 Introduction

      1.1 The Cross-Device Federated Learning Setting

        1.1.1 The Lifecycle of a Model in Federated Learning

        1.1.2 A Typical Federated Training Process

      1.2 Federated Learning Research

      1.3 Organization

    2 Relaxing the Core FL Assumptions: Applications to Emerging Settings and Scenarios

      2.1 Fully Decentralized / Peer-to-Peer Distributed Learning

        2.1.1 Algorithmic Challenges

        2.1.2 Practical Challenges

      2.2 Cross-Silo Federated Learning

      2.3 Split Learning

    3 Improving Efficiency and Effectiveness

      3.1 Non-IID Data in Federated Learning

        3.1.1 Strategies for Dealing with Non-IID Data

      3.2 Optimization Algorithms for Federated Learning

        3.2.1 Optimization Algorithms and Convergence Rates for IID Datasets

        3.2.2 Optimization Algorithms and Convergence Rates for Non-IID Datasets

      3.3 Multi-Task Learning, Personalization, and Meta-Learning

        3.3.1 Personalization via Featurization

        3.3.2 Multi-Task Learning

        3.3.3 Local Fine Tuning and Meta-Learning

        3.3.4 When is a Global FL-trained Model Better?

      3.4 Adapting ML Workflows for Federated Learning

        3.4.1 Hyperparameter Tuning

        3.4.2 Neural Architecture Design

        3.4.3 Debugging and Interpretability for FL

      3.5 Communication and Compression

      3.6 Application To More Types of Machine Learning Problems and Models

    4 Preserving the Privacy of User Data

      4.1 Actors, Threat Models, and Privacy in Depth

      4.2 Tools and Technologies

        4.2.1 Secure Computations

        4.2.2 Privacy-Preserving Disclosures

        4.2.3 Verifiability

      4.3 Protections Against External Malicious Actors

        4.3.1 Auditing the Iterates and Final Model

        4.3.2 Training with Central Differential Privacy

        4.3.3 Concealing the Iterates

        4.3.4 Repeated Analyses over Evolving Data

        4.3.5 Preventing Model Theft and Misuse

      4.4 Protections Against an Adversarial Server

        4.4.1 Challenges: Communication Channels, Sybil Attacks, and Selection

        4.4.2 Limitations of Existing Solutions

        4.4.3 Training with Distributed Differential Privacy

        4.4.4 Preserving Privacy While Training Sub-Models

      4.5 User Perception

        4.5.1 Understanding Privacy Needs for Particular Analysis Tasks

        4.5.2 Behavioral Research to Elicit Privacy Preferences

    5 Robustness to Attacks and Failures

      5.1 Adversarial Attacks on Model Performance

        5.1.1 Goals and Capabilities of an Adversary

        5.1.2 Model Update Poisoning

        5.1.3 Data Poisoning Attacks

        5.1.4 Inference-Time Evasion Attacks

        5.1.5 Defensive Capabilities from Privacy Guarantees

      5.2 Non-Malicious Failure Modes

      5.3 Exploring the Tension between Privacy and Robustness

    6 Ensuring Fairness and Addressing Sources of Bias

      6.1 Bias in Training Data

      6.2 Fairness Without Access to Sensitive Attributes

      6.3 Fairness, Privacy, and Robustness

      6.4 Leveraging Federation to Improve Model Diversity

      6.5 Federated Fairness: New Opportunities and Challenges

    7 Concluding Remarks

    A Software and Datasets for Federated Learning
