textnoisr: Adding random noise to a dataset

textnoisr is a Python package for adding random noise to a text dataset, with very accurate control over the quality of the result.

Here is an example where the dataset consists of the first few lines of the Zen of Python:

| Raw text                            | Noisy text                          |
| ----------------------------------- | ----------------------------------- |
| The Zen of Python, by Tim Peters    | TheO Zen of Python, by Tim Pfter    |
| Beautiful is better than ugly.      | BzeautiUful is ebtter than ugly.    |
| Explicit is better than implicit.   | Eqxplicin is better than imlicit.   |
| Simple is better than complex.      | Simple is beateUr than comdplex.    |
| Complex is better than complicated. | Complex is better than comwlicated. |
| Flat is better than nested.         | Flat is bejAter than neseed.        |
| ...                                 | ...                                 |

Four types of "actions" are implemented:

  • insert a random character, e.g. STEAM → STREAM,
  • delete a random character, e.g. STEAM → TEAM,
  • substitute a random character, e.g. STEAM → STEAL,
  • swap two consecutive characters, e.g. STEAM → STEMA.

The general philosophy of the package is that a single parameter is enough to control the noise level. This "noise level" is applied character-wise, and corresponds roughly to the probability for a character to be impacted.

More precisely, this noise level is calibrated so that the Character Error Rate of a noised dataset converges to this value as the amount of text increases.
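The Character Error Rate mentioned above is the edit distance between the noisy text and the original, divided by the length of the original. A minimal way to compute it, using a standard Levenshtein distance (this helper is written for the example and is not part of textnoisr's API):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance: insertions,
    deletions and substitutions all cost 1."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: edit distance normalized by reference length."""
    return levenshtein(reference, hypothesis) / len(reference)
```

For example, `cer("STEAM", "STREAM")` is 0.2: one inserted character over a five-character reference.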

Why a whole package for such a simple task?

In the case of inserting, deleting and substituting characters at random with probability \(p\), the Character Error Rate is simply the proportion of characters affected by those operations, so it converges to the input value \(p\) by the Law of Large Numbers.

However, the case of swapping consecutive characters is not trivial at all for two reasons:

  • First, swapping two characters is not an "atomic operation" with respect to the Character Error Rate metric.

  • Second, we do not want to swap repeatedly the same character over and over again if the probability to apply the swap action is high:
    STEAM → TSEAM
    TSEAM → TESAM
    TESAM → TEASM
    TEASM → TEAMS
    This would be equivalent to STEAM → TEAMS, so this cannot be considered "swapping consecutive characters". To avoid this behavior, we must avoid swapping a character that has just been swapped. This breaks the independence between one character and the next, so the Law of Large Numbers no longer applies.

We use Markov Chains to model the swapping of characters. This allows us to compute and correct the corresponding bias, making it straightforward for the user to get the desired Character Error Rate, as if the Law of Large Numbers applied!
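The bias itself is easy to observe in a simulation. The sketch below is an illustration, not textnoisr's implementation: it applies the "never swap a character that has just been swapped" rule and shows that the fraction of swapped positions settles at \(p/(1+p)\) rather than \(p\), which is why a correction is needed.

```python
import random

def swap_rate(p: float, n: int, seed: int = 0) -> float:
    """Walk over n character positions; at each position not blocked by
    a previous swap, swap with probability p. A swap consumes two
    positions, so the long-run swaps-per-character rate is p / (1 + p)."""
    rng = random.Random(seed)
    swaps = 0
    i = 0
    while i < n:
        if rng.random() < p:
            swaps += 1
            i += 2  # the swapped neighbor cannot be swapped again
        else:
            i += 1
    return swaps / n
```

For `p = 0.9` and a long enough text, the rate settles near `0.9 / 1.9 ≈ 0.47`, far below the requested 0.9; textnoisr compensates for this gap so that the user-facing parameter still matches the resulting Character Error Rate.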

All the details of this unbiasing are here. The goal of this package is to let the user be confident in the result without worrying about the implementation details.


The documentation follows this plan:

  • You may want to follow a quick tutorial to learn the basics of the package,
  • The Results page illustrates how no calibration is needed in order to add noise to a corpus with a target Character Error Rate.
  • The How this works section explains the mechanisms and some design choices of this package. We have been extra careful to explain how some statistical biases have been avoided, so that the package is both user-friendly and correct. A dedicated page deep-dives into the case of the swap action.
  • The API Reference details all the technical descriptions needed.

There is also a Medium article about this project.