DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses in Deep Learning
source link: https://github.com/DSE-MSU/DeepRobust/
DeepRobust
DeepRobust is a PyTorch adversarial library for attack and defense methods on images and graphs. The list of included algorithms can be found in [Image Package] and [Graph Package].
Environment & Installation
Usage
- Image Attack and Defense
- Graph Attack and Defense
For more details about attacks and defenses, you can read the following papers.
- Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study
- Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
Basic Environment
```
python >= 3.6
pytorch >= 1.2.0
```

See setup.py or requirements.txt for more information.
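A quick way to confirm the local environment meets these requirements is a two-line check (a sketch, not part of DeepRobust):

```python
import sys
import torch

# Print the interpreter and PyTorch versions; they should satisfy
# python >= 3.6 and pytorch >= 1.2.0 as listed above.
print("python:", sys.version.split()[0])
print("pytorch:", torch.__version__)
```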
Installation
```
git clone https://github.com/DSE-MSU/DeepRobust.git
cd DeepRobust
python setup.py install
```
Test Examples
```
python examples/graph/test_gcn_jaccard.py --dataset cora
python examples/image/evaluation_attack
```
Usage
Image Attack and Defense
- Train model
Example: Train a simple CNN model on the MNIST dataset for 20 epochs on GPU.
```python
import deeprobust.image.netmodels.train_model as trainmodel
trainmodel.train('CNN', 'MNIST', 'cuda', 20)
```
The trained model will be saved in deeprobust/trained_models/.
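If the saved weights are needed later, they can be reloaded with standard PyTorch calls. This is a sketch: the checkpoint file name below is an assumption inferred from the `CIFAR10_ResNet18_epoch_50.pt` naming used in the next example, so adjust it to the file actually written to deeprobust/trained_models/.

```python
import torch
from deeprobust.image.netmodels.CNN import Net

model = Net()
# Assumed file name following the "<DATASET>_<MODEL>_epoch_<N>.pt" pattern;
# check deeprobust/trained_models/ for the real one.
model.load_state_dict(torch.load("deeprobust/trained_models/MNIST_CNN_epoch_20.pt"))
model.eval()
```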
- Instantiate attack and defense methods.

Example: Generate adversarial examples with the PGD attack.
```python
import torch
from torchvision import datasets, transforms

from deeprobust.image.attack.pgd import PGD
from deeprobust.image.config import attack_params
import deeprobust.image.netmodels.resnet as resnet

model = resnet.ResNet18().to('cuda')
model.load_state_dict(torch.load("./trained_models/CIFAR10_ResNet18_epoch_50.pt"))
model.eval()

transform_val = transforms.Compose([transforms.ToTensor()])
test_loader = torch.utils.data.DataLoader(
    datasets.CIFAR10('deeprobust/image/data', train=False, download=True,
                     transform=transform_val),
    batch_size=10, shuffle=True)

x, y = next(iter(test_loader))
x = x.to('cuda').float()

adversary = PGD(model, 'cuda')
Adv_img = adversary.generate(x, y, **attack_params['PGD_CIFAR10'])
```
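To check whether the attack worked, a short follow-up (a sketch using only standard PyTorch, continuing from the variables above) can compare the model's predictions on the clean batch and on the adversarial batch:

```python
import torch

# Compare predictions on the clean inputs x and the perturbed inputs Adv_img.
with torch.no_grad():
    clean_pred = model(x).argmax(dim=1)
    adv_pred = model(Adv_img).argmax(dim=1)

y = y.to('cuda')
print("clean batch accuracy:", (clean_pred == y).float().mean().item())
print("adversarial batch accuracy:", (adv_pred == y).float().mean().item())
```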
Example: Train a defense model.
```python
import torch
from torchvision import datasets, transforms

from deeprobust.image.defense.pgdtraining import PGDtraining
from deeprobust.image.config import defense_params
from deeprobust.image.netmodels.CNN import Net

model = Net()
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('deeprobust/image/defense/data', train=True, download=True,
                   transform=transforms.Compose([transforms.ToTensor()])),
    batch_size=100, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('deeprobust/image/defense/data', train=False,
                   transform=transforms.Compose([transforms.ToTensor()])),
    batch_size=1000, shuffle=True)

defense = PGDtraining(model, 'cuda')
defense.generate(train_loader, test_loader, **defense_params["PGDtraining_MNIST"])
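Once defense.generate finishes, the adversarially trained weights can be kept with plain PyTorch. This is a sketch that assumes the model passed to PGDtraining is updated in place; the output path is an arbitrary choice for illustration, not a DeepRobust convention.

```python
import torch

# Save the PGD-trained network for later evaluation or deployment.
torch.save(model.state_dict(), "deeprobust/trained_models/MNIST_CNN_pgdtraining.pt")
```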
More example code can be found in deeprobust/examples.
- Use our evaluation program to test an attack algorithm against a defense.
Example:
```
python -m deeprobust.image.evaluation_attack
```
Graph Attack and Defense
Attacking Graph Neural Networks
- Load dataset
```python
import torch
import numpy as np
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN
from deeprobust.graph.global_attack import Metattack

data = Dataset(root='/tmp/', name='cora', setting='nettack')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
idx_unlabeled = np.union1d(idx_val, idx_test)
```
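As an optional sanity check (a sketch, not required by the workflow), the loaded objects can be inspected; in DeepRobust the adjacency and feature matrices are commonly returned as scipy sparse matrices and the index splits as numpy arrays:

```python
print("adjacency matrix:", adj.shape)       # (num_nodes, num_nodes)
print("feature matrix:", features.shape)    # (num_nodes, num_features)
print("labels:", labels.shape)              # (num_nodes,)
print("train/val/test sizes:", len(idx_train), len(idx_val), len(idx_test))
```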
- Set up surrogate model
```python
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1,
                nhid=16, with_relu=False, device=device)
surrogate = surrogate.to(device)
surrogate.fit(features, adj, labels, idx_train)
```
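Optionally, the surrogate's accuracy on the clean graph can be checked before attacking; this sketch reuses the test(idx_test) call shown in the defense example later in this README:

```python
# Evaluate the surrogate GCN on the clean (unperturbed) graph.
surrogate.eval()
surrogate.test(idx_test)
```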
- Set up attack model and generate perturbations
```python
model = Metattack(model=surrogate, nnodes=adj.shape[0],
                  feature_shape=features.shape, device=device)
model = model.to(device)

perturbations = int(0.05 * (adj.sum() // 2))
model.attack(features, adj, labels, idx_train, idx_unlabeled,
             perturbations, ll_constraint=False)
modified_adj = model.modified_adj
```
For more details please refer to mettack.py, or run:

```
python examples/graph/test_mettack.py --dataset cora --ptb_rate 0.05
```
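To measure the effect of the attack, one common recipe (a sketch that follows the same GCN API used elsewhere in this README) is to train a fresh GCN on the perturbed adjacency and compare its test accuracy with a GCN trained on the clean graph. This assumes GCN.fit accepts the modified_adj object returned by Metattack directly; if not, convert it to the same format as `adj` first.

```python
# Baseline: GCN trained on the clean graph.
gcn_clean = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1,
                nhid=16, device=device).to(device)
gcn_clean.fit(features, adj, labels, idx_train)
gcn_clean.eval()
gcn_clean.test(idx_test)

# GCN trained on the graph perturbed by Metattack.
gcn_attacked = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1,
                   nhid=16, device=device).to(device)
gcn_attacked.fit(features, modified_adj, labels, idx_train)
gcn_attacked.eval()
gcn_attacked.test(idx_test)
```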
Defending Against Graph Attacks
- Load dataset
```python
import torch
import numpy as np
from deeprobust.graph.data import Dataset, PtbDataset
from deeprobust.graph.defense import GCN, GCNJaccard

np.random.seed(15)

# load clean graph
data = Dataset(root='/tmp/', name='cora', setting='nettack')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test

# load pre-attacked graph by mettack
perturbed_data = PtbDataset(root='/tmp/', name='cora')
perturbed_adj = perturbed_data.adj
```
- Test
```python
# Set up defense model and test performance
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = GCNJaccard(nfeat=features.shape[1], nclass=labels.max()+1,
                   nhid=16, device=device)
model = model.to(device)
model.fit(features, perturbed_adj, labels, idx_train)
model.eval()
output = model.test(idx_test)

# Test on GCN
model = GCN(nfeat=features.shape[1], nclass=labels.max()+1,
            nhid=16, device=device)
model = model.to(device)
model.fit(features, perturbed_adj, labels, idx_train)
model.eval()
output = model.test(idx_test)
```
For more details please refer to test_gcn_jaccard.py, or run:

```
python examples/graph/test_gcn_jaccard.py --dataset cora
```
Acknowledgement
Some of the algorithms are based on the paper authors' implementations; references can be found at the top of each file. Thanks for their outstanding work!