A toolkit of tools and techniques related to the privacy and compliance of AI models. https://aip360.res.ibm.com

# ai-privacy-toolkit



The first release of this toolkit contains a single module, called anonymization. This module contains methods for anonymizing ML model training data, so that when a model is retrained on the anonymized data, the model itself will also be considered anonymous. This may help exempt the model from obligations and restrictions set out in data protection regulations such as GDPR and CCPA.
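To illustrate the idea behind anonymizing training data (this is a generic sketch of k-anonymity via generalization, not the toolkit's own API; the function and field names here are hypothetical), quasi-identifier values can be coarsened until each generalized value is shared by at least k records:

```python
from collections import Counter

def generalize_ages(records, k=2, bin_width=10):
    """Coarsen the 'age' quasi-identifier into ranges so that every
    generalized value is shared by at least k records (k-anonymity)."""
    out = []
    for r in records:
        lo = (r["age"] // bin_width) * bin_width
        out.append({**r, "age": f"{lo}-{lo + bin_width - 1}"})
    # verify k-anonymity on the generalized quasi-identifier
    counts = Counter(r["age"] for r in out)
    assert all(c >= k for c in counts.values()), "k not reached; widen bins"
    return out

records = [
    {"age": 31, "label": 0},
    {"age": 34, "label": 1},
    {"age": 52, "label": 1},
    {"age": 57, "label": 0},
]
anon = generalize_ages(records, k=2)
# each age is now a decade range shared by at least 2 records,
# and a model retrained on `anon` never sees the raw ages
```

The toolkit's anonymization module applies this kind of transformation in a model-guided way, so that accuracy on the anonymized training data is preserved as much as possible.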

Official ai-privacy-toolkit documentation: https://ai-privacy-toolkit.readthedocs.io/en/latest/

Installation: `pip install ai-privacy-toolkit`

Related toolkits:

- **ai-minimization-toolkit**: A toolkit for reducing the amount of personal data needed to perform predictions with a machine learning model.

- **differential-privacy-library**: A general-purpose library for experimenting with, investigating, and developing applications in differential privacy.

- **adversarial-robustness-toolbox**: A Python library for machine learning security. Includes an attack module called inference that contains privacy attacks on ML models (membership inference, attribute inference, model inversion, and database reconstruction), as well as a privacy metrics module that contains membership leakage metrics for ML models.