ShapeShifter for YOLO #182


Draft · wants to merge 37 commits into base: shapeshifter
Conversation


@dxoigmn dxoigmn commented Jun 23, 2023

What does this PR do?

This obsoletes #135 and #177. However, we should wait to merge this until we can remove the detection code and directly use torchvision.

Type of change

Please check all relevant options.

  • Improvement (non-breaking)
  • Bug fix (non-breaking)
  • New feature (non-breaking)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Testing

Please describe the tests that you ran to verify your changes. Consider listing any relevant details of your test configuration.

  • pytest
  • CUDA_VISIBLE_DEVICES=0 python -m mart experiment=CIFAR10_CNN_Adv trainer=gpu trainer.precision=16 reports 70% (21 sec/epoch).
  • CUDA_VISIBLE_DEVICES=0,1 python -m mart experiment=CIFAR10_CNN_Adv trainer=ddp trainer.precision=16 trainer.devices=2 model.optimizer.lr=0.2 trainer.max_steps=2925 datamodule.ims_per_batch=256 datamodule.world_size=2 reports 70% (14 sec/epoch).
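The second command doubles the devices and the global batch size and bumps the learning rate to 0.2, consistent with linear LR scaling. A minimal sketch of that rule (`scale_lr` is a hypothetical helper, not part of MART; the baseline lr=0.1 and batch 128 are assumptions for illustration):

```python
def scale_lr(base_lr: float, base_batch: int, global_batch: int) -> float:
    """Scale the learning rate linearly with the global batch size."""
    return base_lr * global_batch / base_batch

# Assuming the single-GPU baseline used lr=0.1 with a batch of 128,
# two GPUs with a global batch of 256 would use:
print(scale_lr(0.1, 128, 256))  # → 0.2
```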

Before submitting

  • The title is self-explanatory and the description concisely explains the PR
  • My PR does only one thing, instead of bundling different changes together
  • I list all the breaking changes introduced by this pull request
  • I have commented my code
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • I have run pre-commit hooks with pre-commit run -a command without errors

Did you have fun?

Make sure you had fun coding 🙃

dxoigmn added 29 commits June 9, 2023 13:58
This reverts commit 01a2066.
dxoigmn (author):

Merge this into #132.

dxoigmn (author):

Merge this into #132.

mart/nn/nn.py Outdated
@@ -80,6 +80,12 @@ def parse_sequence(self, sequence):
# We can omit the key of _call_with_args_ if it is the only config.
module_cfg = {"_call_with_args_": module_cfg}

# Add support for calling different functions using dot-syntax
dxoigmn (author):

Move _call_ functionality to new PR.

Comment on lines +106 to +108
# Turn tuple of dicts into dict of tuples
new_targets = {k: tuple(t[k] for t in targets) for k in targets[0].keys()}
new_targets["list_of_targets"] = targets
dxoigmn (author):

Would be nice to get rid of this...
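The re-batching in the quoted snippet can be exercised with toy values (plain lists stand in for tensors): a tuple of per-image target dicts becomes one dict of tuples, keyed by field name, with the original tuple kept under `list_of_targets`.

```python
# Toy targets for two images; lists stand in for torch.Tensor values.
targets = (
    {"boxes": [0, 1], "labels": [7]},
    {"boxes": [2, 3], "labels": [9]},
)

# Turn tuple of dicts into dict of tuples (same expression as the diff).
new_targets = {k: tuple(t[k] for t in targets) for k in targets[0].keys()}
new_targets["list_of_targets"] = targets

print(new_targets["boxes"])   # → ([0, 1], [2, 3])
print(new_targets["labels"])  # → ([7], [9])
```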

dxoigmn (author):

Basically, we either need to decide to accept a List[Tensor] for input and List[Dict[str, Tensor]] for target or Tensor for input and Dict[str, Tensor] for target. The former interface at least allows images with different shapes and sizes even if the underlying network does not.
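The trade-off can be made concrete with a shape-only sketch (no tensors needed; `can_stack` is a hypothetical helper): a single batched `Tensor` requires every image in the batch to share one shape, while a `List[Tensor]` carries each image separately and so tolerates mixed sizes.

```python
def can_stack(shapes):
    """A single batched Tensor requires all images to share one shape."""
    return len(set(shapes)) == 1

batch_a = [(3, 480, 640), (3, 480, 640)]  # uniform: Tensor interface works
batch_b = [(3, 480, 640), (3, 600, 800)]  # mixed: needs List[Tensor]

print(can_stack(batch_a))  # → True
print(can_stack(batch_b))  # → False
```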

return super().forward(*values, weights=weights)

def _total_variation(self, image):
return torch.mean(
dxoigmn (author) commented Jun 28, 2023:

Mean doesn't do anything and we should probably normalize this by the size of the patch (e.g., C*H*W).
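A hedged sketch of the suggested normalization, not MART's actual implementation: sum the total variation over the patch and divide by its element count C*H*W, so the penalty is comparable across patch sizes.

```python
import torch

def total_variation(image: torch.Tensor) -> torch.Tensor:
    """image: C x H x W patch; returns TV normalized by C*H*W."""
    dh = (image[:, 1:, :] - image[:, :-1, :]).abs().sum()  # vertical diffs
    dw = (image[:, :, 1:] - image[:, :, :-1]).abs().sum()  # horizontal diffs
    return (dh + dw) / image.numel()

patch = torch.tensor([[[0.0, 1.0], [2.0, 3.0]]])  # C=1, H=2, W=2
print(total_variation(patch).item())  # → 1.5
```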
