Data available from the project

 What will you find in this page?

  1. General description of the ATCO2 corpora and links of interest.
  2. Per-corpus description of the data released by the ATCO2 project, including per-airport splits and licenses.
  3. Brief information about how we collected, pre-processed and annotated the ATCO2 corpora.
  4. Additional metadata (radar, ADS-B) and data characteristics (such as English language score and voice activity detection) available in ATCO2.
  5. An automatic speech recognition system trained with only the ATCO2 corpora as supervised data.
  6. Named entity recognition and sequence classification systems built with the ATCO2 corpora.

 I. General description of the ATCO2 corpora

This page presents more information about the datasets collected and open-sourced by the ATCO2 project. The corpora released by ATCO2 can be used for many speech and text-based machine learning (ML) tasks, including: 

  • Automatic speech recognition (ASR), 
  • Contextualized ASR,
  • English language detection on ATC speech,
  • Speaker diarization,
  • Named entity recognition (NER),
  • Speaker role detection, i.e., whether the speech comes from a pilot or an air traffic controller (ATCo).


The figure below depicts the type of annotations offered by our corpus.

Annotations offered by our corpus.png

Find below some links of interest: 

 II. Data released by ATCO2 project

The ATCO2 corpora are split into three main parts: 

  1. The training set, 
  2. A 4-hour test set, and
  3. A 1-hour test subset (freely available and downloadable below).


The training data

It consists of audio and raw metadata:

License: Available for Commercial and Non-Commercial Use (see ELRA)


The test data

The official test data consists of:

License: Available for Commercial and Non-Commercial Use (see ELRA)


Test subset

A sample of the test data, released for research purposes, that consists of:

License: Available for research purposes

 III. How we collected and transcribed the ATCO2 corpora

An overview of the data processing pipeline developed by the ATCO2 project and used to collect the ATCO2 corpus is depicted in the figure below. The pipeline consists of several steps:

  1. Speech pre-processing tools (segmentation, volume adjustment and discarding noisy recordings),
  2. Speaker diarization (split audio per-speaker), 
  3. Automatic speech recognition,
  4. English language detection (ELD),
  5. Speaker role detection (SRD) e.g., ATCo or pilot, and 
  6. Labeling of callsigns, commands and values with named entity recognition (NER). 

ATCO2 used this pipeline to pre-process both the ATCO2-PL-set corpus (the training corpus) and the ATCO2-test-set corpus.
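The last pipeline step, labeling callsigns, commands and values with NER, can be illustrated with a small BIO-style sketch. Both the utterance and the span labels below are hypothetical and do not come from the released annotations:

```python
tokens = "swiss one one five zulu descend flight level eight zero".split()

# Hypothetical BIO labels for ATCO2's three entity types
# (callsign / command / value); the real annotations may segment differently.
labels = ["B-callsign", "I-callsign", "I-callsign", "I-callsign", "I-callsign",
          "B-command", "B-value", "I-value", "I-value", "I-value"]

def extract_entities(tokens, labels):
    """Group BIO-labelled tokens into (entity_type, text) pairs."""
    entities, current = [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            if current:
                entities.append(current)
            current = (lab[2:], [tok])
        elif lab.startswith("I-") and current and current[0] == lab[2:]:
            current[1].append(tok)
        else:  # an "O" tag or inconsistent I- tag closes the running entity
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(etype, " ".join(words)) for etype, words in entities]

print(extract_entities(tokens, labels))
# [('callsign', 'swiss one one five zulu'), ('command', 'descend'),
#  ('value', 'flight level eight zero')]
```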

The ATCO2 corpus is publicly available in the ELDA catalogue at the following URL:

ATCO2 pipeline.png

Figure. Data collection and data-processing pipeline developed in ATCO2 project

 IV. Additional metadata and data characteristics available as part of the ATCO2 corpora

During the ATCO2 project, audio data was collected from radio receivers (feeders) placed near different airports worldwide. Simultaneously, we captured ADS-B (radar) data that we matched with the audio recordings. This step is of special importance because it allows the ATCO2 corpora to be used for contextual ASR. In contextual ASR, we boost certain entities at decoding time, which can lead to two benefits: i) reduced WER, and ii) increased accuracy on entity detection, such as callsigns. 


ADS-B data: Alongside the audio and transcript pairs of the training data, we also offer radar data (ADS-B) aligned to each sample. For instance, the listing below shows the files available for the recording `LKPR_Tower_134_560MHz_20220119_185902`. 



├── LKPR_Tower_134_560MHz_20220119_185902.boosting
├── LKPR_Tower_134_560MHz_20220119_185902.callsigns
├── LKPR_Tower_134_560MHz_20220119_185902.cnet_10_b15-13-400
├── LKPR_Tower_134_560MHz_20220119_185902.segm
└── LKPR_Tower_134_560MHz_20220119_185902.wav


The files ending in “.callsigns” and “.boosting” contain the ADS-B data in ICAO format, e.g., ECC502 or SWR115Z. The “.boosting” file contains different verbalizations of each callsign: we take the ICAO callsign and verbalize it as, e.g., “eclair five zero two; eclair zero two; swiss one one five zulu; swiss one five zulu”. 
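A minimal sketch of how the full spoken form could be produced. The `TELEPHONY` and `SPOKEN` tables below are hypothetical and truncated to the two examples above; the actual verbalization rules (including the shortened variants such as “swiss one five zulu”) are described in the project papers:

```python
import re

# Hypothetical tables: only the two airline designators from the example
# above, plus a digit/NATO-letter table truncated to what we need here.
TELEPHONY = {"SWR": "swiss", "ECC": "eclair"}
SPOKEN = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
          "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
          "Z": "zulu"}

def verbalize(icao: str) -> str:
    """Turn an ICAO callsign such as 'SWR115Z' into its full spoken form."""
    m = re.match(r"([A-Z]{3})([0-9A-Z]+)$", icao)
    if not m:
        raise ValueError(f"unexpected callsign format: {icao}")
    airline, suffix = m.groups()
    words = [TELEPHONY.get(airline, airline.lower())]  # fall back to raw designator
    words += [SPOKEN.get(ch, ch.lower()) for ch in suffix]
    return " ".join(words)

print(verbalize("ECC502"))   # eclair five zero two
print(verbalize("SWR115Z"))  # swiss one one five zulu
```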


Further information about the verbalization rules is given in our papers: 

Additional characteristics available for the ATCO2 corpora, per airport: In the table below, you can find some statistics about the collected data per airport:

  • Duration, SNR, language scores and contextual data columns report the mean and standard deviation (mean/std) per sample,
  • The available data at ELDA provides timing information in RTTM format, which can be used for speaker diarization or VAD, 
  • The languages are abbreviated in IETF format for simplicity.  

Data collected by ATCO2.png

Table. Metadata about data collected by ATCO2
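The RTTM timing information mentioned above can be read with a few lines of standard Python. A SPEAKER record carries the file id, channel, onset, duration and speaker name (unused fields are `<NA>`); the two example lines below are hypothetical:

```python
# Minimal RTTM reader for speaker diarization / VAD timing information.
def read_rttm(lines):
    """Return (file_id, onset, offset, speaker) tuples from SPEAKER records."""
    segments = []
    for line in lines:
        fields = line.split()
        if not fields or fields[0] != "SPEAKER":
            continue  # skip other record types and blank lines
        file_id, onset, dur = fields[1], float(fields[3]), float(fields[4])
        speaker = fields[7]
        segments.append((file_id, onset, onset + dur, speaker))
    return segments

# Hypothetical two-segment example for one ATCO2 recording.
example = [
    "SPEAKER LKPR_Tower_134_560MHz_20220119_185902 1 0.50 3.20 <NA> <NA> ATCO <NA> <NA>",
    "SPEAKER LKPR_Tower_134_560MHz_20220119_185902 1 4.10 2.75 <NA> <NA> PILOT <NA> <NA>",
]
print(read_rttm(example))
```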


If you are interested in acquiring the ATCO2 dataset, you can check the table above to find out if the data you are seeking matches one of the Airports packages. Note that in most cases, you can select the data with language scores higher than 0.5, which partly ensures that the audio is in English.
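As a quick illustration of that filtering step, a list of per-sample English language scores can be thresholded at 0.5 (the recording ids below are hypothetical):

```python
# Hypothetical per-sample metadata: (recording id, English language score).
samples = [
    ("LKPR_Tower_001", 0.92),
    ("LKPR_Tower_002", 0.31),
    ("LSGS_Approach_003", 0.77),
]

# Keep recordings whose score exceeds 0.5, which partly ensures English speech.
english = [rec for rec, score in samples if score > 0.5]
print(english)  # ['LKPR_Tower_001', 'LSGS_Approach_003']
```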

The characteristics per airport can be easily exported to text files by running the preparation script from our GitHub repository:

 V. Word Error Rates of ASR models trained with ATCO2 corpora

In the table below, you can find:

  • Performance of hybrid ASR models (built with the Kaldi toolkit) trained on the ATCO2-PL corpus,
  • Performance in terms of word error rates (WERs) for ‘out-of-domain’ datasets (see column “Malorca Vienna”),
  • Performance on “in-domain” data, see the columns with “ATCO2-test-set-1h/4h”. 

Different ASR.png

Table. Different ASR models trained with different amounts of ATCO2 data.


You can find more information, including WERs, in the following papers: 

We also release a set of GitHub repositories: 

 VI. Named entity recognition and sequence classification systems with ATCO2 corpora

The ATCO2 corpora can be employed for several natural (or spoken) language understanding (NLU) tasks. For example, they can be used to:

  • Automatically analyze and interpret the meaning of spoken messages between pilots and ATCos, 
  • Extract important information, such as flight numbers, callsigns, or airport codes, 
  • Detect the end of an utterance (“end-pointing detection”),
  • Classify whether a message comes from an ATCo or a pilot,
  • Perform text-based diarization.

Further information is described in our papers. The figure below shows examples of the named entity recognition and text-based speaker role detection tasks. 

Named entity recognition.png

Figure. Named entity recognition and text-based speaker role detection tasks


Furthermore, the table below shows the performance in terms of Precision (@P), Recall (@R) and F1-score (@F1) when fine-tuning a BERT model on the named entity recognition task with the ATCO2-test-set-4h in a 5-fold cross-validation scheme.

Performances of NER.png

Table. Performances of NER with ATCO2-test-set-1h corpus.
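The 5-fold cross-validation scheme itself can be sketched with standard-library Python; each fold holds out one fifth of the utterances for evaluation and trains on the rest (the BERT fine-tuning inside each fold is omitted here):

```python
# Stdlib-only k-fold splitter over utterance indices.
def k_fold(n_items, k=5):
    """Yield (train_indices, test_indices) for k contiguous folds."""
    indices = list(range(n_items))
    # Distribute the remainder over the first n_items % k folds.
    fold_sizes = [n_items // k + (1 if i < n_items % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

for fold, (train, test) in enumerate(k_fold(10, k=5)):
    print(f"fold {fold}: train={train} test={test}")
```

In practice one would shuffle the utterance ids first and run the fine-tuning and evaluation once per fold, averaging the per-fold scores.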


LID dataset (V1).

"The ATC LID/ASR evaluation dataset is going to be published at Interspeech 2021. Stay tuned!"

Abstract: Detecting English Speech in the Air Traffic Control Voice Communication


Name: ATCO2-LIDdataset-v1_beta

Description: This dataset was built for the development and evaluation of techniques for English vs. non-English speech classification of ATC data. Note: the dataset is considered a beta version and will be updated in the future (more language pairs will be added, and some cleaning/debugging may happen). The dataset consists of the following language pairs:

  • CZEN - devel (6.11 hours),
  • CZEN - eval (6.21 hours),
  • FREN - devel (2.68 hours),
  • FREN - eval (3.27 hours),
  • GEEN - devel, English only (5.61 hours),
  • GEEN - eval (2.41 hours),
  • EN-AU (Australian English) - eval, English only (0.17 hours).

Where possible, we split each pair into development and evaluation subsets. We provide the audio (WAV format), English automatic transcripts generated by an ASR system, and an info file with estimated SNR, language and length.
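As a quick sanity check, the subset durations listed above add up as follows:

```python
# Durations (hours) as listed above for the ATCO2-LIDdataset-v1_beta subsets.
subsets = {
    ("CZEN", "devel"): 6.11, ("CZEN", "eval"): 6.21,
    ("FREN", "devel"): 2.68, ("FREN", "eval"): 3.27,
    ("GEEN", "devel"): 5.61, ("GEEN", "eval"): 2.41,
    ("EN-AU", "eval"): 0.17,
}

total = sum(subsets.values())
print(f"total: {total:.2f} hours")  # total: 26.46 hours
```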

Link to file to download: