Adversarial Attacks
Projects with this topic
This repo contains the code for the paper "Provable Robustness against Wasserstein Adversarial Attacks". Created by Tobias Wegel (t.wegel@stud.uni-goettingen.de).
Anonymizing facial footprints - anonyME allows users to inoculate their personal images against unauthorized machine learning models, with minimal distortion of the input image and no need for prior machine learning knowledge.
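The anonyME entry describes image "inoculation" only in general terms. As a rough illustration of the underlying idea, a minimal FGSM-style adversarial perturbation (Goodfellow et al.) is sketched below; this is not anonyME's actual algorithm, and the `model`, `image`, and `label` inputs are hypothetical stand-ins for a trained classifier, an image tensor, and its label.

```python
# Minimal sketch of an adversarial image perturbation (FGSM-style).
# Illustrates the general idea behind image "cloaking"; it is NOT
# anonyME's actual method. `model`, `image` (a batched tensor in
# [0, 1]), and `label` are assumed inputs.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=8 / 255):
    """Return a minimally distorted copy of `image` that degrades
    the model's prediction on it."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, with per-pixel
    # distortion bounded by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

Tools in this space typically iterate a step like this (e.g. PGD) rather than applying it once, trading more computation for a smaller, less visible perturbation.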