Mirror of https://github.com/IBM/ai-privacy-toolkit.git, synced 2026-04-25 04:46:21 +02:00.
Initial commit 5665c2e79d (parent d2de0726f4): 22 changed files with 2369 additions and 0 deletions.
apt/anonymization/README.md (new file, +22 lines):
# anonymization module

This module contains methods for anonymizing ML model training data, so that when
a model is retrained on the anonymized data, the model itself will also be considered
anonymous. This may help exempt the model from obligations and restrictions
set out in data protection regulations such as the GDPR and CCPA.

The module's methods anonymize training datasets in a manner that is tailored to,
and guided by, an existing, trained ML model. The existing model's predictions on
the training data are used to train a second, anonymizer model, which determines
the generalizations that will be applied to the training data. For more information
about the method, see: https://arxiv.org/abs/2007.13086

Once the anonymized training data is returned, it can be used to retrain the model.
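A minimal sketch of this idea, written with scikit-learn rather than the toolkit's own API (the choice of a decision tree as the anonymizer model follows the description above, but the median-based generalization of each leaf is an illustrative assumption, not necessarily the toolkit's exact method):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# 1. The existing, already-trained model whose behavior we want to preserve.
model = RandomForestClassifier(random_state=0).fit(X, y)
y_pred = model.predict(X)

# 2. A second, "anonymizer" decision tree trained on the model's predictions.
#    min_samples_leaf acts as a k-anonymity-style parameter: every leaf
#    (i.e., every generalization cell) contains at least k records.
k = 10
anonymizer = DecisionTreeClassifier(min_samples_leaf=k, random_state=0).fit(X, y_pred)

# 3. Generalize: replace each record's features with a representative value
#    (here, the per-feature median) of the leaf it falls into, so all records
#    in a leaf become indistinguishable.
leaves = anonymizer.apply(X)
X_anon = X.copy()
for leaf in np.unique(leaves):
    mask = leaves == leaf
    X_anon[mask] = np.median(X[mask], axis=0)

# 4. The anonymized data can then be used to retrain the model.
retrained = RandomForestClassifier(random_state=0).fit(X_anon, y)
```

Because the anonymizer tree is trained on the original model's predictions, the generalization cells tend to follow that model's decision boundaries, which is what lets the retrained model stay close to the original.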
The following figure depicts the overall process:

<p align="center">
  <img src="../../docs/images/AI_Privacy_project2.jpg?raw=true" width="667" title="anonymization process">
</p>
<br />