ShapeShifter for YOLO #182


Draft · wants to merge 37 commits into base: shapeshifter

Commits (37)
10accf5
Use attrgetter
dxoigmn Jun 9, 2023
d8fe8a0
Restore image_visualizer config
dxoigmn Jun 9, 2023
48e3be9
Make *_step_log dicts where the key is the logging name and value is …
dxoigmn Jun 12, 2023
01a2066
Fix configs
dxoigmn Jun 13, 2023
df1d0b2
remove sync_dist
dxoigmn Jun 13, 2023
14f4d1f
backwards compatibility
dxoigmn Jun 13, 2023
2e30587
Revert "Fix configs"
dxoigmn Jun 13, 2023
ca17006
Merge branch 'main' into better_litmodular
dxoigmn Jun 13, 2023
6fef148
style
dxoigmn Jun 13, 2023
c4e0d78
Make metric logging keys configurable
dxoigmn Jun 12, 2023
508798c
cleanup
dxoigmn Jun 13, 2023
fc770e8
Remove *_step_end
dxoigmn Jun 14, 2023
5050163
Merge branch 'better_litmodular2' into better_sequentialdict
dxoigmn Jun 14, 2023
4414822
Merge branch 'better_litmodular3' into better_sequentialdict
dxoigmn Jun 14, 2023
c31f4de
Don't require output module with SequentialDict
dxoigmn Jun 12, 2023
549f705
fix configs and tests
dxoigmn Jun 14, 2023
5e73817
Generalize attack objectives
dxoigmn Jun 14, 2023
62216f2
Merge branch 'better_litmodular2' into better_sequentialdict
dxoigmn Jun 14, 2023
a113f7e
Merge branch 'main' into general_visualizer
dxoigmn Jun 14, 2023
94b949b
Merge branch 'better_sequentialdict' into general_visualizer
dxoigmn Jun 14, 2023
fe8a0f8
Add torchvision YOLO model from https://github.com/pytorch/vision/pul…
dxoigmn Jun 23, 2023
f5235e1
Fix imports
dxoigmn Jun 23, 2023
2c93f2b
Merge remote-tracking branch 'origin/shapeshifter' into shapeshifter_…
dxoigmn Jun 23, 2023
680bbb9
Merge remote-tracking branch 'origin/shapeshifter' into shapeshifter_…
dxoigmn Jun 23, 2023
0939b83
Add support for calling different functions using dot-syntax in seque…
dxoigmn Jun 23, 2023
363355e
Add trainable YOLO v3/v4 experiments
dxoigmn Jun 23, 2023
02dbdcf
Merge branch 'general_visualizer' into shapeshifter_yolo
dxoigmn Jun 23, 2023
038dce1
Merge branch 'freeze_callback' into shapeshifter_yolo
dxoigmn Jun 23, 2023
2f63ea9
Add YOLO v3/v4 ShapeShifter experiments
dxoigmn Jun 23, 2023
45a06da
style
dxoigmn Jun 23, 2023
3ac5943
Merge branch 'shapeshifter' into shapeshifter_yolo
dxoigmn Jun 23, 2023
660a7a0
Merge branch 'shapeshifter' into shapeshifter_yolo
dxoigmn Jun 23, 2023
02d30be
_train_mode_ can interfere with eval mode callback
dxoigmn Jun 26, 2023
0e4cfc3
Make YOLO model return detections and losses
dxoigmn Jun 26, 2023
eacd355
bugfix
dxoigmn Jun 26, 2023
dc2e658
Merge branch 'shapeshifter' into shapeshifter_yolo
dxoigmn Jun 27, 2023
74f290f
Merge branch 'shapeshifter' into shapeshifter_yolo
dxoigmn Jun 27, 2023
47 changes: 23 additions & 24 deletions mart/callbacks/visualizer.py
@@ -4,38 +4,37 @@
 # SPDX-License-Identifier: BSD-3-Clause
 #
 
-import os
+from operator import attrgetter
 
 from pytorch_lightning.callbacks import Callback
-from torchvision.transforms import ToPILImage
 
-__all__ = ["PerturbedImageVisualizer"]
+__all__ = ["ImageVisualizer"]
 
 
-class PerturbedImageVisualizer(Callback):
-    """Save adversarial images as files."""
+class ImageVisualizer(Callback):
+    def __init__(self, frequency: int = 100, **tag_paths):
+        self.frequency = frequency
+        self.tag_paths = tag_paths
 
-    def __init__(self, folder):
-        super().__init__()
+    def log_image(self, trainer, tag, image):
+        # Add image to each logger
+        for logger in trainer.loggers:
+            # FIXME: Should we just use isinstance(logger.experiment, SummaryWriter)?
+            if not hasattr(logger.experiment, "add_image"):
+                continue
 
-        # FIXME: This should use the Trainer's logging directory.
-        self.folder = folder
-        self.convert = ToPILImage()
+            logger.experiment.add_image(tag, image, global_step=trainer.global_step)
 
-        if not os.path.isdir(self.folder):
-            os.makedirs(self.folder)
+    def log_images(self, trainer, pl_module):
+        for tag, path in self.tag_paths.items():
+            image = attrgetter(path)(pl_module)
+            self.log_image(trainer, tag, image)
 
-    def on_train_batch_end(self, trainer, model, outputs, batch, batch_idx):
-        # Save input and target for on_train_end
-        self.input = batch["input"]
-        self.target = batch["target"]
+    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
+        if batch_idx % self.frequency != 0:
+            return
 
-    def on_train_end(self, trainer, model):
-        # FIXME: We should really just save this to outputs instead of recomputing adv_input
-        adv_input = model(input=self.input, target=self.target)
+        self.log_images(trainer, pl_module)
 
-        for img, tgt in zip(adv_input, self.target):
-            fname = tgt["file_name"]
-            fpath = os.path.join(self.folder, fname)
-            im = self.convert(img / 255)
-            im.save(fpath)
+    def on_train_end(self, trainer, pl_module):
+        self.log_images(trainer, pl_module)
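The rewritten callback resolves each configured tag path against the LightningModule with `operator.attrgetter`, which accepts dotted paths. A minimal sketch of that lookup, using `SimpleNamespace` as a stand-in for the real MART module:

```python
from operator import attrgetter
from types import SimpleNamespace

# Stand-in for a LightningModule carrying a nested perturbation attribute;
# the real object in this PR would be the MART LitModular model.
pl_module = SimpleNamespace(
    model=SimpleNamespace(
        perturbation=SimpleNamespace(perturbation="<3x416x234 tensor>")
    )
)

# attrgetter("a.b.c") walks nested attributes, unlike plain getattr.
image = attrgetter("model.perturbation.perturbation")(pl_module)
print(image)  # <3x416x234 tensor>
```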
4 changes: 4 additions & 0 deletions mart/configs/callbacks/perturbation_visualizer.yaml
@@ -0,0 +1,4 @@
perturbation_visualizer:
  _target_: mart.callbacks.ImageVisualizer
  frequency: 100
  perturbation: ???
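For context, Hydra resolves the `_target_` key to an importable class and calls it with the remaining keys as keyword arguments (`???` marks a mandatory value that the experiment config must supply). A simplified re-implementation of that mechanism — using `collections.Counter` instead of `mart.callbacks.ImageVisualizer` so the sketch runs standalone:

```python
import importlib

def instantiate(cfg: dict):
    """Simplified Hydra-style instantiation: import `_target_`, call with the rest."""
    cfg = dict(cfg)  # don't mutate the caller's config
    module_path, _, name = cfg.pop("_target_").rpartition(".")
    cls = getattr(importlib.import_module(module_path), name)
    return cls(**cfg)

# collections.Counter stands in for mart.callbacks.ImageVisualizer here.
counter = instantiate({"_target_": "collections.Counter", "yolo": 3})
print(counter["yolo"])  # 3
```

With the real config, the remaining keys become `ImageVisualizer(frequency=100, perturbation=...)`, i.e. `perturbation` lands in the `**tag_paths` kwargs.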
24 changes: 24 additions & 0 deletions mart/configs/datamodule/coco_yolo.yaml
@@ -0,0 +1,24 @@
defaults:
  - coco

train_dataset:
  transforms:
    transforms:
      - _target_: torchvision.transforms.ToTensor
      - _target_: mart.transforms.ConvertCocoPolysToMask
      - _target_: mart.transforms.PadToSquare
        fill: 0.5
      - _target_: mart.transforms.Resize
        size: [416, 416]
      - _target_: mart.transforms.RemapLabels
      - _target_: mart.transforms.ConvertInstanceSegmentationToPerturbable

val_dataset:
  transforms: ${..train_dataset.transforms}

test_dataset:
  transforms: ${..val_dataset.transforms}

collate_fn:
  _target_: hydra.utils.get_method
  path: mart.datamodules.coco.yolo_collate_fn
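The config references `mart.datamodules.coco.yolo_collate_fn`, but its body is not part of this changeset. A typical detection collate keeps the variable-length targets as a list rather than stacking them; a hypothetical pure-Python sketch of that shape (the real function presumably stacks the fixed-size 416x416 images into a tensor):

```python
def yolo_collate_sketch(batch):
    """batch: list of (image, target) pairs produced by the dataset.

    Images share a fixed 416x416 shape after PadToSquare/Resize, so a real
    implementation could torch.stack() them; targets (boxes, labels, masks)
    differ per image and must stay a Python list.
    """
    images = [image for image, _ in batch]
    targets = [target for _, target in batch]
    return images, targets

images, targets = yolo_collate_sketch(
    [("img0", {"labels": [1]}), ("img1", {"labels": [1, 2]})]
)
print(len(images), len(targets))  # 2 2
```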
35 changes: 35 additions & 0 deletions mart/configs/experiment/COCO_YOLOv3.yaml
@@ -0,0 +1,35 @@
# @package _global_

defaults:
  - override /datamodule: coco_yolo
  - override /model: yolo
  - override /optimization: super_convergence
  - override /metric: average_precision

task_name: "COCO_YOLOv3"
tags: ["evaluation"]

optimized_metric: "test_metrics/map"

trainer:
  # 117,266 training images, 6 epochs, batch_size=16: 117266 * 6 / 16 = 43,974.75
  max_steps: 43975
  # FIXME: "nms_kernel" not implemented for 'BFloat16', torch.ops.torchvision.nms().
  precision: 32

datamodule:
  num_workers: 4
  ims_per_batch: 8

model:
  modules:
    yolo:
      config_path: ${paths.data_dir}/yolov3.cfg
      weights_path: ${paths.data_dir}/yolov3.weights

  optimizer:
    lr: 0.001
    momentum: 0.9
    weight_decay: 0.0005

  training_metrics: null
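The `max_steps` comment in the trainer block follows from the dataset size; the arithmetic, as a quick check:

```python
import math

# 117,266 COCO train2017 images, 6 epochs, batch size 16 (per the comment).
steps = 117_266 * 6 / 16
print(steps)             # 43974.75
print(math.ceil(steps))  # 43975 -> trainer.max_steps
```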
149 changes: 149 additions & 0 deletions mart/configs/experiment/COCO_YOLOv3_ShapeShifter.yaml
@@ -0,0 +1,149 @@
# @package _global_

defaults:
  - /attack/perturber@model.modules.perturbation: default
  - /attack/perturber/initializer@model.modules.perturbation.initializer: uniform
  - /attack/perturber/projector@model.modules.perturbation.projector: range
  - /attack/composer@model.modules.input_adv: warp_composite
  - /attack/gradient_modifier@model.gradient_modifier: lp_normalizer
  - override /datamodule: coco_yolo
  - override /model: yolo
  - override /optimization: super_convergence
  - override /metric: average_precision
  - override /callbacks:
      [
        model_checkpoint,
        lr_monitor,
        perturbation_visualizer,
        gradient_monitor,
        attack_in_eval_mode,
        no_grad_mode,
      ]

task_name: "COCO_YOLOv3_ShapeShifter"
tags: ["adv"]

optimized_metric: "test_metrics/map"

trainer:
  # 64,115 training images, batch_size=8, FLOOR(64115/8) = 8014 steps per epoch
  max_steps: 80140 # 10 epochs
  # mAP can be slow to compute so limit number of images
  limit_val_batches: 100
  precision: 32

callbacks:
  model_checkpoint:
    monitor: "validation_metrics/map"
    mode: "min"

  attack_in_eval_mode:
    module_classes:
      - _target_: hydra.utils.get_class
        path: torch.nn.BatchNorm2d

  no_grad_mode:
    module_names: "model.yolo"

  perturbation_visualizer:
    perturbation: "model.perturbation.perturbation"
    frequency: 500

datamodule:
  num_workers: 4
  ims_per_batch: 8

  train_dataset:
    annFile: ${paths.data_dir}/coco/annotations/person_instances_train2017.json
  val_dataset:
    annFile: ${paths.data_dir}/coco/annotations/person_instances_val2017.json
  test_dataset:
    annFile: ${paths.data_dir}/coco/annotations/person_instances_val2017.json

model:
  modules:
    empty_targets:
      _target_: mart.nn.EmptyTargets

    yolo:
      config_path: ${paths.data_dir}/yolov3.cfg
      weights_path: ${paths.data_dir}/yolov3.weights

    perturbation:
      size: [3, 416, 234]

      initializer:
        min: 0.49
        max: 0.51

      projector:
        min: 0.0
        max: 1.0

    total_variation:
      _target_: mart.nn.TotalVariation

    input_adv:
      warp:
        _target_: torchvision.transforms.Compose
        transforms:
          - _target_: mart.transforms.ColorJitter
            brightness: [0.5, 1.5]
            contrast: [0.5, 1.5]
            saturation: [0.5, 1.0]
            hue: [-0.05, 0.05]
          - _target_: torchvision.transforms.RandomAffine
            degrees: [-5, 5]
            translate: [0.1, 0.25]
            scale: [0.4, 0.6]
            shear: [-3, 3, -3, 3]
            interpolation: 2 # BILINEAR
      clamp: [0, 1]

  optimizer:
    lr: 0.05
    momentum: 0.9

  lr_scheduler:
    scheduler:
      three_phase: true

  gradient_modifier: null

  training_sequence:
    seq004:
      empty_targets:
        targets: "target.list_of_targets"
    seq005: "perturbation"
    seq006: "input_adv"
    seq010:
      yolo:
        images: "input_adv"
        targets: "empty_targets"
    seq050:
      total_variation:
        _call_with_args_:
          - "perturbation"
    seq100:
      loss:
        _call_with_args_:
          - "yolo.confidence"
          - "total_variation"
        weights:
          - 1
          - 0.0001

  training_metrics: null
  training_step_log:
    total_variation: "total_variation"

  validation_sequence:
    seq004: ${..training_sequence.seq004}
    seq005: ${..training_sequence.seq005}
    seq006: ${..training_sequence.seq006}
    seq050: ${..training_sequence.seq050}

  test_sequence:
    seq004: ${..training_sequence.seq004}
    seq005: ${..training_sequence.seq005}
    seq006: ${..training_sequence.seq006}
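As with the clean-training configs, `max_steps` here is derived from the dataset size: the person-only subset has 64,115 training images, and the attack runs 10 epochs at batch size 8:

```python
import math

steps_per_epoch = math.floor(64_115 / 8)  # 8014 optimizer steps per epoch
max_steps = steps_per_epoch * 10          # 10 epochs
print(steps_per_epoch, max_steps)         # 8014 80140

# With the visualizer firing every 500 steps, roughly this many
# perturbation snapshots get logged over the full run:
print(max_steps // 500)  # 160
```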
35 changes: 35 additions & 0 deletions mart/configs/experiment/COCO_YOLOv4.yaml
@@ -0,0 +1,35 @@
# @package _global_

defaults:
  - override /datamodule: coco_yolo
  - override /model: yolo
  - override /optimization: super_convergence
  - override /metric: average_precision

task_name: "COCO_YOLOv4"
tags: ["evaluation"]

optimized_metric: "test_metrics/map"

trainer:
  # 117,266 training images, 6 epochs, batch_size=16: 117266 * 6 / 16 = 43,974.75
  max_steps: 43975
  # FIXME: "nms_kernel" not implemented for 'BFloat16', torch.ops.torchvision.nms().
  precision: 32

datamodule:
  num_workers: 4
  ims_per_batch: 8

model:
  modules:
    yolo:
      config_path: ${paths.data_dir}/yolov4.cfg
      weights_path: ${paths.data_dir}/yolov4.weights

  optimizer:
    lr: 0.001
    momentum: 0.9
    weight_decay: 0.0005

  training_metrics: null