ai-privacy-toolkit


A toolkit for tools and techniques related to the privacy and compliance of AI models. Project homepage: https://aip360.res.ibm.com

The anonymization module contains methods for anonymizing ML model training data, so that when a model is retrained on the anonymized data, the model itself will also be considered anonymous. This may help exempt the model from different obligations and restrictions set out in data protection regulations such as GDPR, CCPA, etc.
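The guarantee targeted here is related to the classic notion of k-anonymity. As a minimal conceptual sketch (this is not the toolkit's own API; the function and data below are hypothetical), the core property is that every combination of quasi-identifier values in the training data appears at least k times:

```python
# Illustrative check of the k-anonymity idea behind training-data
# anonymization (hypothetical helper, NOT the toolkit's API):
# every combination of quasi-identifier values must occur >= k times.
import pandas as pd

def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list, k: int) -> bool:
    """Return True if each quasi-identifier group has at least k records."""
    counts = df.groupby(quasi_identifiers).size()
    return bool((counts >= k).all())

data = pd.DataFrame({
    "age_range": ["30-40", "30-40", "30-40", "40-50", "40-50", "40-50"],
    "zip_prefix": ["123**", "123**", "123**", "456**", "456**", "456**"],
    "diagnosis": ["A", "B", "A", "C", "A", "B"],  # sensitive attribute
})

print(is_k_anonymous(data, ["age_range", "zip_prefix"], k=3))  # True
print(is_k_anonymous(data, ["age_range", "zip_prefix"], k=4))  # False
```

Retraining a model only on data generalized to satisfy such a property is what allows the resulting model itself to be treated as anonymized.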

The minimization module contains methods to help adhere to the data minimization principle in GDPR for ML models. It makes it possible to reduce the amount of personal data needed to perform predictions with a machine learning model, while still enabling the model to make accurate predictions. This is done by removing or generalizing some of the input features.
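To illustrate the generalization idea (a conceptual sketch only, not the toolkit's minimization API; `generalize_feature` is a hypothetical helper), one can replace a precise feature value with a coarse representative of its range and check how much model accuracy changes:

```python
# Conceptual sketch of data minimization by generalization: replace a
# precise feature with a per-bin representative value, then compare the
# model's accuracy on the generalized data against the baseline.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

def generalize_feature(X: np.ndarray, col: int, n_bins: int = 3) -> np.ndarray:
    """Replace column `col` with the mean of its quantile bin."""
    Xg = X.copy()
    edges = np.quantile(X[:, col], np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(X[:, col], edges[1:-1]), 0, n_bins - 1)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            Xg[mask, col] = X[mask, col].mean()
    return Xg

X_test_gen = generalize_feature(X_test, col=0)
generalized = model.score(X_test_gen, y_test)
print(f"baseline={baseline:.3f}, generalized={generalized:.3f}")
```

If accuracy stays acceptable, the coarse ranges can be collected at prediction time instead of the exact personal values.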

The dataset assessment module implements a tool for privacy assessment of synthetic datasets that are to be used in AI model training.
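One common style of check in this space (shown here as a rough sketch; this is not necessarily the method the module implements) compares how close synthetic records lie to the real training data versus a real holdout set:

```python
# Sketch of a nearest-neighbor distance test for synthetic-data privacy:
# synthetic points much closer to the training set than to unseen real
# data suggest the generator has memorized training records.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train = rng.normal(size=(200, 5))      # real data used to fit the generator
holdout = rng.normal(size=(200, 5))    # real data the generator never saw
synthetic = rng.normal(size=(200, 5))  # stand-in for generator output

def min_distances(queries: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Distance from each query point to its nearest reference point."""
    nn = NearestNeighbors(n_neighbors=1).fit(reference)
    dist, _ = nn.kneighbors(queries)
    return dist.ravel()

d_train = min_distances(synthetic, train)
d_holdout = min_distances(synthetic, holdout)

# A ratio near 1.0 means the synthetic data is no closer to the training
# set than to unseen data; ratios well below 1.0 hint at leakage.
ratio = np.median(d_train) / np.median(d_holdout)
print(f"distance ratio: {ratio:.2f}")
```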

Official ai-privacy-toolkit documentation: https://ai-privacy-toolkit.readthedocs.io/en/latest/

Installation: pip install ai-privacy-toolkit

For more information or help using or improving the toolkit, please contact Abigail Goldsteen at abigailt@il.ibm.com, or join our Slack channel: https://aip360.mybluemix.net/community.

We welcome new contributors! If you're interested, take a look at our contribution guidelines.

Related toolkits:

ai-minimization-toolkit: has been migrated into this toolkit.

differential-privacy-library: A general-purpose library for experimenting with, investigating and developing applications in, differential privacy.

adversarial-robustness-toolbox: A Python library for Machine Learning Security. Includes an attack module called inference that contains privacy attacks on ML models (membership inference, attribute inference, model inversion and database reconstruction) as well as a privacy metrics module that contains membership leakage metrics for ML models.

Citation

Abigail Goldsteen, Ola Saadi, Ron Shmelkin, Shlomit Shachor, Natalia Razinkov, "AI privacy toolkit", SoftwareX, Volume 22, 2023, 101352, ISSN 2352-7110, https://doi.org/10.1016/j.softx.2023.101352.