Models, tokenizers, and preprocessing layers for XLM-RoBERTa, as described in "Unsupervised Cross-lingual Representation Learning at Scale" (https://arxiv.org/abs/1911.02116).
For a full list of available presets, see the models page.
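Below is a minimal usage sketch assuming the `keras_nlp` package; the preset name `xlm_roberta_base_multi` and the two-class setup are illustrative, so check the presets list for what is actually available.

```python
import keras_nlp

# Load an end-to-end classifier from a preset. The preset name
# "xlm_roberta_base_multi" is assumed here; see the models page
# for the full list of available presets.
classifier = keras_nlp.models.XLMRobertaClassifier.from_preset(
    "xlm_roberta_base_multi",
    num_classes=2,
)

# The attached preprocessor tokenizes raw strings with the
# XLM-RoBERTa SentencePiece vocabulary, so multilingual input
# can be passed in directly.
predictions = classifier.predict([
    "The quick brown fox jumped.",
    "El zorro marrón saltó rápidamente.",
])

# The tokenizer can also be used on its own to map raw text
# to token IDs.
tokenizer = keras_nlp.models.XLMRobertaTokenizer.from_preset(
    "xlm_roberta_base_multi"
)
token_ids = tokenizer("The quick brown fox jumped.")
```

Because the classifier bundles its preprocessor, no separate tokenization step is needed for prediction; the standalone tokenizer is useful when building custom input pipelines.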