mirror of
https://github.com/IBM/ai-privacy-toolkit.git
synced 2026-04-26 13:26:21 +02:00
Merge pull request #71 from IBM/dataset_assessment
Add AI privacy Dataset assessment module with two attack implementations. Signed-off-by: Maya Anderson <mayaa@il.ibm.com>
parent c153635e4d, commit dbb958f791
13 changed files with 986 additions and 1 deletions
apt/risk/data_assessment/__init__.py (new file, 12 additions)
@@ -0,0 +1,12 @@
+"""
+Module providing privacy risk assessment for synthetic data.
+
+The main interface, ``DatasetAttack``, with the ``assess_privacy()`` main method assumes the availability of the
+training data, holdout data and synthetic data at the time of the privacy evaluation.
+It is to be implemented by concrete assessment methods, which can run the assessment on a per-record level,
+or on the whole dataset.
+The abstract class ``DatasetAttackMembership`` implements the ``DatasetAttack`` interface, but adds the result
+of the membership inference attack, so that the final score contains both the membership inference attack result
+for further analysis and the calculated score.
+"""
+from apt.risk.data_assessment import dataset_attack
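The docstring above describes the module's shape: an abstract ``DatasetAttack`` interface whose ``assess_privacy()`` method consumes training, holdout and synthetic data, implemented by concrete attacks. The sketch below illustrates that design only; the interface and result-container names come from the docstring, but their signatures, the ``DatasetAttackScore`` fields, and the ``NaiveDistanceAttack`` example are assumptions, not the toolkit's actual code.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class DatasetAttackScore:
    # Hypothetical result container; the toolkit's actual class may differ.
    risk_score: float
    result: Optional[object] = None  # raw per-record attack output, if any


class DatasetAttack(ABC):
    """Interface for dataset-level privacy assessments (sketch)."""

    @abstractmethod
    def assess_privacy(self) -> DatasetAttackScore:
        """Run the assessment; training, holdout and synthetic data
        are assumed to be supplied at construction time."""


class NaiveDistanceAttack(DatasetAttack):
    # Illustrative membership-style attack, NOT part of the toolkit:
    # a synthetic record "leaks" training membership if it lies closer
    # to some training record than to any holdout record.
    def __init__(self, train, holdout, synthetic):
        self.train = np.asarray(train, dtype=float)
        self.holdout = np.asarray(holdout, dtype=float)
        self.synthetic = np.asarray(synthetic, dtype=float)

    def _min_dist(self, reference: np.ndarray) -> np.ndarray:
        # For each synthetic record, distance to its nearest neighbour
        # in the reference set.
        diffs = self.synthetic[:, None, :] - reference[None, :, :]
        return np.linalg.norm(diffs, axis=2).min(axis=1)

    def assess_privacy(self) -> DatasetAttackScore:
        member = self._min_dist(self.train) < self._min_dist(self.holdout)
        # Fraction of synthetic records nearer to training than holdout
        # data; values far above 0.5 suggest the generator memorised
        # training records.
        return DatasetAttackScore(risk_score=float(member.mean()),
                                  result=member)
```

As in the docstring, the concrete attack keeps both the per-record result (for further analysis) and the aggregate score in the returned object.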