11/20/2020: We are developing a new framework for backdoor attacks on federated learning: Backdoors101. It supports many new attacks (clean-label, physical backdoors, etc.) and has an improved user experience. Check it out!
This code includes the experiments for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459).
All experiments are done using Python 3.7 and PyTorch 1.0.
Create a virtual environment with conda:

conda env create --file environment.yml
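The repository ships its own environment.yml; the sketch below is only an illustration of what such a file looks like, with the environment name and versions inferred from this README (the dependency list is an assumption, not copied from the actual file):

```yaml
# Illustrative sketch only -- the repository provides the real environment.yml.
# Name and versions are taken from this README; other entries are assumptions.
name: pytorch_backdoor_FL_test
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.7
  - pytorch=1.0
  - pip
  - pip:
      - visdom   # assumption: needed for the visualization step below
```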
Activate the conda environment:

conda activate pytorch_backdoor_FL_test
Create a folder for saved models:

mkdir saved_models
Start the visdom server to visualize the training accuracy:

python -m visdom.server

(Leave this server running; by default the dashboard is served at http://localhost:8097.)
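Since `python -m visdom.server` occupies the terminal it runs in, one alternative to opening a second terminal is to background the server with nohup (an assumption, not part of the repo's instructions). The same pattern is demonstrated here with a harmless placeholder command instead of the real server:

```shell
# Backgrounding the real server would look like (assumption, not from the repo):
#   nohup python -m visdom.server > visdom.log 2>&1 &
# Demonstration of the same pattern with a placeholder command:
rm -f /tmp/nohup_demo.log
nohup sleep 1 > /tmp/nohup_demo.log 2>&1 &   # run in background, log output
pid=$!
wait "$pid"                                   # block until the background job exits
echo "background job $pid done"
```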
Open another terminal, activate the conda environment again, and run the following command to train the model:
python -u training.py 2>&1 | tee -a log_20230310.log
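The `2>&1 | tee -a <file>` part of the training command merges stderr into stdout, prints everything to the terminal, and appends (rather than overwrites) the log file across runs. A minimal demonstration with a throwaway log file:

```shell
# Same pattern as the training command, with echo standing in for training.py:
rm -f /tmp/tee_demo.log
{ echo "epoch 1 done"; echo "warning: example stderr" 1>&2; } 2>&1 | tee -a /tmp/tee_demo.log
{ echo "epoch 2 done"; } 2>&1 | tee -a /tmp/tee_demo.log
wc -l < /tmp/tee_demo.log   # prints 3: both runs accumulated in the same file
```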
I encourage you to contact me ([email protected]) or to open issues on GitHub, so I can provide more details and fix bugs.
Most of the experiments are produced by tweaking the parameters in utils/params.yaml (for image tasks) and utils/words.yaml (for text tasks); feel free to experiment with them yourself.