# ai-privacy-toolkit
<p align="center">
<img src="docs/images/logo with text.jpg?raw=true" width="467" title="ai-privacy-toolkit logo">
</p>
<br />
A toolkit of tools and techniques related to the privacy and compliance of AI models.
The [**anonymization**](apt/anonymization/README.md) module contains methods for anonymizing ML model
training data, so that when a model is retrained on the anonymized data, the model itself will also be
considered anonymous. This may help exempt the model from different obligations and restrictions
set out in data protection regulations such as GDPR, CCPA, etc.
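
The core idea can be sketched with a minimal k-anonymity example: quasi-identifier values are generalized (here, ages binned into ranges and zip codes truncated) until every combination of quasi-identifiers is shared by at least *k* records. This is only a conceptual illustration; the function and field names below are hypothetical and not the toolkit's API.

```python
# Minimal k-anonymity sketch: generalize quasi-identifiers until every
# group of records sharing the same generalized values has size >= k.
from collections import Counter

def generalize_age(age, width):
    """Replace an exact age with the range [lo, lo + width - 1]."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def anonymize(records, k, width=10):
    """Generalize the 'age' and 'zip' quasi-identifiers, widening the
    age bins until every (age_range, zip3) group contains >= k records."""
    while True:
        anon = [{"age": generalize_age(r["age"], width),
                 "zip3": r["zip"][:3],          # truncate zip code
                 "label": r["label"]} for r in records]
        groups = Counter((r["age"], r["zip3"]) for r in anon)
        if min(groups.values()) >= k:
            return anon
        width *= 2                              # coarsen and retry

records = [
    {"age": 31, "zip": "10001", "label": 1},
    {"age": 34, "zip": "10002", "label": 0},
    {"age": 38, "zip": "10003", "label": 1},
    {"age": 45, "zip": "10001", "label": 0},
    {"age": 47, "zip": "10002", "label": 1},
    {"age": 42, "zip": "10003", "label": 0},
]
anon = anonymize(records, k=3)  # every group now has >= 3 records
```

Retraining a model on the generalized records (instead of the raw ones) is what allows the resulting model to be considered anonymous.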
The [**minimization**](apt/minimization/README.md) module contains methods to help adhere to the data
minimization principle in GDPR for ML models. It enables reducing the amount of
personal data needed to perform predictions with a machine learning model, while still enabling the model
to make accurate predictions. This is done by removing or generalizing some of the input features.
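
A toy sketch of that principle: replace an input feature with a single coarse representative value and keep the generalization only if the model's predictions stay (almost) unchanged. The model, feature names, and threshold below are illustrative stand-ins, not the toolkit's API.

```python
# Data-minimization sketch: a feature can be generalized away if doing
# so barely changes the model's predictions on a set of samples.

def model(income, age):
    # stand-in "trained model": predicts 1 for high earners
    return 1 if income > 50_000 else 0

def can_generalize_age(samples, representative, agreement_threshold=0.95):
    """Try replacing the 'age' feature with one representative value;
    accept if predictions agree on >= agreement_threshold of samples."""
    original = [model(inc, age) for inc, age in samples]
    reduced = [model(inc, representative) for inc, _ in samples]
    agreement = sum(o == r for o, r in zip(original, reduced)) / len(samples)
    return agreement >= agreement_threshold

samples = [(30_000, 25), (80_000, 40), (55_000, 33), (20_000, 60)]
# 'age' does not influence this particular model, so predictions are
# unchanged and the exact ages need not be collected at prediction time
can_drop_age = can_generalize_age(samples, representative=40)
```

When a feature passes this check, callers only need to supply the generalized value, reducing the personal data collected for each prediction.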
Official ai-privacy-toolkit documentation: https://ai-privacy-toolkit.readthedocs.io/en/latest/
Installation: `pip install ai-privacy-toolkit`
**Related toolkits:**
ai-minimization-toolkit - has been migrated into this toolkit.
[differential-privacy-library](https://github.com/IBM/differential-privacy-library): A
general-purpose library for experimenting with, investigating, and developing applications in
differential privacy.
[adversarial-robustness-toolbox](https://github.com/Trusted-AI/adversarial-robustness-toolbox):
A Python library for Machine Learning Security. Includes an attack module called *inference* that contains privacy attacks on ML models
(membership inference, attribute inference, model inversion and database reconstruction) as well as a *privacy* metrics module that contains
membership leakage metrics for ML models.
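
As a rough intuition for the simplest of those attacks, membership inference often exploits the fact that models tend to be more confident on their training data. A minimal confidence-thresholding heuristic (all numbers and names below are made up for the demo, and real attacks in such libraries are considerably more sophisticated):

```python
# Membership-inference intuition: guess "member of the training set"
# whenever the model's predicted confidence exceeds a threshold.

def infer_membership(confidences, threshold=0.9):
    """Guess which samples were in the training set."""
    return [conf > threshold for conf in confidences]

# hypothetical model confidences: first three on training samples,
# last three on held-out samples
train_conf = [0.99, 0.97, 0.95]
test_conf = [0.70, 0.85, 0.60]
guesses = infer_membership(train_conf + test_conf)
```

The anonymization and minimization techniques in this toolkit reduce how much such attacks can reveal about individuals in the training data.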
|
2021-04-28 14:00:19 +03:00
|
|
|
|