# ai-privacy-toolkit
<p align="center">
<img src="docs/images/logo with text.jpg?raw=true" width="467" title="ai-privacy-toolkit logo">
</p>
<br />
A toolkit of tools and techniques related to the privacy and compliance of AI models.
The first release of this toolkit contains a single module called [**anonymization**](apt/anonymization/README.md).
This module contains methods for anonymizing ML model training data, so that when
a model is retrained on the anonymized data, the model itself will also be considered
anonymous. This may help exempt the model from different obligations and restrictions
set out in data protection regulations such as GDPR, CCPA, etc.
Official ai-privacy-toolkit documentation: https://ai-privacy-toolkit.readthedocs.io/en/latest/
Installation: `pip install ai-privacy-toolkit`
**Related toolkits:**
[ai-minimization-toolkit](https://github.com/IBM/ai-minimization-toolkit): A toolkit for
reducing the amount of personal data needed to perform predictions with a machine learning model.
[differential-privacy-library](https://github.com/IBM/differential-privacy-library): A
general-purpose library for experimenting with, investigating and developing applications in
differential privacy.
[adversarial-robustness-toolbox](https://github.com/Trusted-AI/adversarial-robustness-toolbox):
A Python library for machine learning security. Includes an attack module called *inference*
that contains privacy attacks on ML models (membership inference, attribute inference, model
inversion and database reconstruction), as well as a *privacy* metrics module that contains
membership leakage metrics for ML models.