With Libelle DataMasking, you can anonymize sensitive and personal data and continue to work with it in a meaningful way - whether in SAP or any other environment. During anonymization, realistic-looking, GDPR-compliant data replaces the original data in your non-production systems. Developers can then access development and test systems without restriction, training participants can work with lifelike data, and expert analyses remain meaningful despite anonymization.
And if you need to provide a non-production system in advance, Libelle SystemCopy is the right tool.
- Protects your non-production systems against unwanted data access and even data theft through anonymization.
- Delivers realistic-looking and logically correct data so you can test optimally.
- Runs on all common databases and can be carried out overnight using definable configurations.
Libelle DataMasking works at database level, which allows it to anonymize up to 200,000 data records per second. If that is not fast enough, tables can also be anonymized in parallel.
Libelle DataMasking ships with over 40 anonymization algorithms and a reference database with predefined target values as standard. If you wish, everything can also be adapted to your requirements.
We transform original production data into realistic-looking and logically correct data. You can thus test, train, and evaluate realistically on your non-production systems - without any reference to real persons.
Libelle DataMasking can access any common database and can be used for individual systems or complete system landscapes. Databases in different environments are recognized and anonymized consistently across systems.
How it works
We are constantly developing Libelle DataMasking in response to the needs of our customers. The latest release introduces a test run before the actual work steps: it simulates a complete anonymization right up to the actual changes in the data. This allows you to test all critical processes in advance and correct errors before the anonymization runs unattended overnight or over the weekend.
Step 1: Inspection
In the check phase, Libelle DataMasking inspects your IT infrastructure and verifies that the target system is available. Especially in SAP environments, it ensures that the system is not a production system. This runs fully automatically and is an important step in taking full advantage of Libelle DataMasking: errors or unrecognized fields can then be easily corrected.
Step 2: Preparation
After the successful check, the next step is the pre phase. Here, the supplied reference files are provided and the keys for anonymization are generated. If required, you can also create backup tables at this point - a further safeguard before your first anonymization. If the anonymization does not turn out as intended, you can fall back on these backups.
Step 3: Anonymization
In the anon phase, the supplied and individually configured anonymization algorithms are applied. They read the data from the non-production target system and anonymize it using the provided reference values. The result is realistic-looking, GDPR-compliant data.
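The replacement of original values with reference values can be pictured as a deterministic lookup: a keyed hash of the original value selects an entry from the reference data, so the same input always yields the same realistic-looking output. The following is a minimal illustrative sketch of this general technique, not Libelle's actual implementation; the function name and reference values are invented for the example.

```python
import hashlib

# Illustrative reference values, standing in for a supplied reference database.
REFERENCE_NAMES = ["Anna Berger", "Jonas Keller", "Mia Hofmann", "Lukas Brandt"]

def mask_value(original: str, references: list, key: str = "secret") -> str:
    """Deterministically map an original value to a reference value.

    Hashing the original together with a key means the same input always
    produces the same masked output, which keeps joins between tables intact.
    """
    digest = hashlib.sha256((key + original).encode("utf-8")).hexdigest()
    return references[int(digest, 16) % len(references)]

# The mapping is stable: repeated calls return the same replacement.
assert mask_value("Max Mustermann", REFERENCE_NAMES) == mask_value("Max Mustermann", REFERENCE_NAMES)
```

Because the mapping is derived from the data itself rather than stored per row, it works across tables and even across systems without a shared translation table.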
Step 4: Review
In the last step, the post phase, Libelle DataMasking checks whether the consistency of the data has been maintained - a final measure to ensure an optimal result. To give you a complete overview of the anonymization, it also provides a comprehensive final report with all relevant information.
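One part of such a consistency check can be verifying that every original value was replaced by the same masked value everywhere it occurred, so that foreign-key relationships still resolve. A hypothetical sketch of that idea (the function and data are illustrative, not the product's code):

```python
def check_consistency(mapping_log: list) -> bool:
    """Verify each original value was always replaced by the same masked value.

    mapping_log holds (original, masked) pairs collected from all tables.
    """
    seen = {}
    for original, masked in mapping_log:
        if seen.setdefault(original, masked) != masked:
            return False  # the same original got two different replacements
    return True

log = [("Max", "Anna"), ("Eva", "Jonas"), ("Max", "Anna")]
assert check_consistency(log)
assert not check_consistency(log + [("Max", "Mia")])
```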
Excerpt of the anonymization algorithms
- E-mail addresses
- Dates and times
Provided SAP templates
- SAP ERP
- SAP FI/CO
- SAP CRM
- SAP HCM
- SAP SD
- SAP SRM
Supported databases and database connectors
- SAP HANA
- Microsoft SQL Server*
- IBM DB2
- SAP ASE
We also support CSV, JSON and XML file formats.
Supported operating systems
* Support on Windows and PowerPC only
Frequently Asked Questions
How long does an anonymization take with Libelle DataMasking?
Which databases does Libelle DataMasking support?
Can I also repeat an anonymization?
How easy is it to customize the software's standard workflow?
We are in constant exchange with our customers and follow with interest how data is handled and how this will develop. As soon as an innovation or adaptation has proven itself, we incorporate it into our algorithms.