# ai-privacy-toolkit
<p align="center">
<img src="docs/images/logo with text.jpg?raw=true" width="467" title="ai-privacy-toolkit logo">
</p>
<br />
A toolkit of tools and techniques related to the privacy and compliance of AI models.
The first release of this toolkit contains a single module called [**anonymization**](apt/anonymization/README.md).
This module contains methods for anonymizing ML model training data, so that when
a model is retrained on the anonymized data, the model itself will also be considered
anonymous. This may help exempt the model from certain obligations and restrictions
set out in data protection regulations such as the GDPR or CCPA.
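To illustrate the idea behind training-data anonymization (this is a minimal, self-contained sketch, not the toolkit's actual API), quasi-identifier values can be coarsened until every record shares its quasi-identifier tuple with at least *k* − 1 other records; a model retrained on the generalized data then never sees the raw values:

```python
from collections import Counter

# Toy training records: (age, zip_code, label). Age and zip code are
# quasi-identifiers; the label is what a model would be trained on.
records = [
    (23, "10001", 0), (27, "10002", 1), (29, "10001", 0),
    (41, "20310", 1), (45, "20311", 1), (48, "20310", 0),
]

def generalize(rec):
    """Coarsen quasi-identifiers: 10-year age bins, 3-digit zip prefix."""
    age, zip_code, label = rec
    return ((age // 10) * 10, zip_code[:3], label)

def is_k_anonymous(rows, k):
    """True if every quasi-identifier tuple occurs at least k times."""
    groups = Counter((age, z) for age, z, _ in rows)
    return all(count >= k for count in groups.values())

anonymized = [generalize(r) for r in records]
print(is_k_anonymous(records, k=3))     # raw data: False
print(is_k_anonymous(anonymized, k=3))  # generalized data: True
```

The toolkit's anonymization module automates this kind of generalization while aiming to preserve the retrained model's accuracy; see the module's README for its actual interface.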
Official ai-privacy-toolkit documentation: https://ai-privacy-toolkit.readthedocs.io/en/latest/
Installation: `pip install ai-privacy-toolkit`
**Related toolkits:**

- [ai-minimization-toolkit](https://github.com/IBM/ai-minimization-toolkit): A toolkit for
  reducing the amount of personal data needed to perform predictions with a machine learning model.
- [differential-privacy-library](https://github.com/IBM/differential-privacy-library): A
  general-purpose library for experimenting with, investigating, and developing applications in
  differential privacy.
- [adversarial-robustness-toolbox](https://github.com/Trusted-AI/adversarial-robustness-toolbox):
  A Python library for machine learning security.