mirror of
https://github.com/IBM/ai-privacy-toolkit.git
synced 2026-05-07 19:12:39 +02:00
Update version and documentation
This commit is contained in:
parent
b6ea416bcb
commit
5223ad1f5a
6 changed files with 49 additions and 25 deletions
@@ -8,12 +8,16 @@ Welcome to ai-privacy-toolkit's documentation!
 This project provides tools for assessing and improving the privacy and compliance of AI models.
 
-The first release of this toolkit contains a single module called anonymization. This
-module contains methods for anonymizing ML model training data, so that when
-a model is retrained on the anonymized data, the model itself will also be considered
-anonymous. This may help exempt the model from different obligations and restrictions
+The anonymization module contains methods for anonymizing ML model
+training data, so that when a model is retrained on the anonymized data, the model itself will also be
+considered anonymous. This may help exempt the model from different obligations and restrictions
 set out in data protection regulations such as GDPR, CCPA, etc.
 
+The minimization module contains methods to help adhere to the data
+minimization principle in GDPR for ML models. It makes it possible to reduce the amount of
+personal data needed to perform predictions with a machine learning model, while still enabling the model
+to make accurate predictions. This is done by removing or generalizing some of the input features.
+
 .. toctree::
    :maxdepth: 2
    :caption: Getting Started:
 
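The anonymization idea described in the diff above (transform the training data so that a model retrained on it is itself considered anonymous) is commonly grounded in k-anonymity. The following is a minimal, self-contained sketch of a basic k-anonymity check; the function name, record layout, and quasi-identifier choice are illustrative assumptions, not the toolkit's actual API.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records (a basic k-anonymity check)."""
    counts = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return all(count >= k for count in counts.values())

# Toy generalized training records: each quasi-identifier group has 2 members.
records = [
    {"age": "30-39", "zip": "123**", "label": 0},
    {"age": "30-39", "zip": "123**", "label": 1},
    {"age": "20-29", "zip": "456**", "label": 0},
    {"age": "20-29", "zip": "456**", "label": 1},
]

print(is_k_anonymous(records, ["age", "zip"], k=2))  # True
print(is_k_anonymous(records, ["age", "zip"], k=3))  # False
```

A dataset passing this check for the chosen quasi-identifiers is what an anonymization step typically aims to produce before the model is retrained.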
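The minimization paragraph added in this commit mentions "removing or generalizing some of the input features". A hedged sketch of the generalization half, assuming a simple fixed-width binning scheme (the function and bin width are hypothetical, not the toolkit's implementation):

```python
def generalize_age(age, bin_width=10):
    """Map an exact age to a coarse range such as '30-39', so a model
    can consume less precise personal data while keeping the feature."""
    lo = (age // bin_width) * bin_width
    return f"{lo}-{lo + bin_width - 1}"

exact_ages = [23, 37, 38, 61]
print([generalize_age(a) for a in exact_ages])
# ['20-29', '30-39', '30-39', '60-69']
```

In practice the bin width (or the decision to drop a feature entirely) would be chosen so that the model's accuracy on the generalized data stays close to its accuracy on the raw data, which is the trade-off the minimization module is described as managing.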