The AIDE proposal investigates how to counter increasingly sophisticated AI-based attacks by defining federated learning-based countermeasures.
We are developing a detailed threat model for federated learning that documents the knowledge, capabilities, and objectives of different adversaries across complementary use cases. The ultimate aim of this research is a comprehensive understanding of the threats and risks associated with federated learning. To achieve this, we distinguish between attacks against the federated learning stack itself (e.g. data or model poisoning, distributed backdoor attacks, property inference attacks) and attacks against the application that consumes the federated model (e.g. the impact of a limited query budget on model theft attacks). This distinction is crucial because it exposes the constraints of the application domain (e.g. the robust or non-robust characteristics specific to the application), which is of utmost importance in our work. It also clarifies the impact of honest-but-curious and malicious adversaries, and the security and confidentiality risks posed by internal and external adversaries (e.g. evasion attacks in cybersecurity use cases; sensitive information leaks or membership inference attacks in medical use cases).
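To make the distinction concrete, the taxonomy above can be sketched as a small data structure. This is an illustrative sketch only, not the project's actual threat model; the class and field names (`Adversary`, `Knowledge`, `Target`) are assumptions introduced here for exposition.

```python
from dataclasses import dataclass, field
from enum import Enum

class Knowledge(Enum):
    WHITE_BOX = "white-box"   # full access to model parameters/updates
    BLACK_BOX = "black-box"   # query access only

class Target(Enum):
    FL_STACK = "federated learning stack"  # e.g. poisoning, backdoors, property inference
    APPLICATION = "downstream application" # e.g. model theft under a query budget

@dataclass
class Adversary:
    name: str
    knowledge: Knowledge
    target: Target
    capabilities: list = field(default_factory=list)
    objective: str = ""

# Two example entries, one per attack surface (illustrative, not exhaustive):
poisoner = Adversary(
    name="malicious participating client",
    knowledge=Knowledge.WHITE_BOX,
    target=Target.FL_STACK,
    capabilities=["submit crafted model updates each round"],
    objective="plant a distributed backdoor in the global model",
)
thief = Adversary(
    name="external API user",
    knowledge=Knowledge.BLACK_BOX,
    target=Target.APPLICATION,
    capabilities=["limited query budget against the deployed model"],
    objective="model extraction (theft)",
)
```

Enumerating adversaries in this structured form makes it straightforward to check, per use case, which combinations of knowledge, target, and capability a given countermeasure actually covers.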
Our work in this area pursues two specific objectives: (1) to provide AI counter-attack mechanisms at the federated learning level, for example by adapting defences to the adversary, where the meta-learning phase of the federated learning process learns new ways of countering new AI-based attacks; and (2) to provide AI counter-attack mechanisms at the level of federated learning for cybersecurity, such as detecting the manipulation of antivirus classifiers.
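As a minimal illustration of a federated-learning-level countermeasure of the kind objective (1) targets, the sketch below contrasts plain federated averaging with coordinate-wise median aggregation, a well-known robust aggregation rule against poisoned client updates. This is a generic textbook example, not the mechanism the proposal will implement; the function names and toy update values are assumptions.

```python
from statistics import median

def aggregate_mean(updates):
    """Plain federated averaging: a single poisoned update can drag the result."""
    n = len(updates)
    return [sum(coords) / n for coords in zip(*updates)]

def aggregate_median(updates):
    """Coordinate-wise median: a classic robust aggregation rule that bounds
    the influence of a minority of malicious clients."""
    return [median(coords) for coords in zip(*updates)]

# Three honest clients near [1.0, 1.0] and one poisoner pushing a huge update.
honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
poisoned = honest + [[100.0, -100.0]]

print(aggregate_mean(poisoned))    # dragged far from the honest value, ~[25.75, -24.25]
print(aggregate_median(poisoned))  # stays close to the honest value, ~[1.05, 0.95]
```

The same comparison scales to real model-update vectors; robust aggregation rules like this are one baseline against which learned, adversary-adaptive defences can be evaluated.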
| Key Performance Indicator (KPI) | Leader | Contributor | Chronology |
|---|---|---|---|
| ● Types and number of AI-based attacks that can be handled. ● Types of adversarial machine learning strategies that can be handled. | UCLOUVAIN | KULEUVEN | ● March 2023: identify threats to the federated learning architecture. ● June 2023: define countermeasures to these threats. ● End 2023: first implementation (M18). ● End 2025 (in case of extension): continuous improvement based on the Deming wheel (PDCA). |