How to use the Remove Accents & Diacritics tool
Remove accents from text in two steps:
1. Paste your text
Paste text containing accented characters (é, ñ, ü, ç, ā, etc.) into the input area.
2. Convert and copy
Click 'Remove Accents'. The tool applies Unicode NFD normalisation and strips all combining diacritical marks (U+0300–U+036F). Copy or download the accent-free output.
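The conversion step described above can be sketched in a few lines. This is a minimal illustration of the NFD-plus-strip approach, not the tool's actual source:

```typescript
// Sketch of the conversion step: decompose with NFD, then strip
// all combining diacritical marks (U+0300-U+036F).
function removeAccents(text: string): string {
  return text.normalize("NFD").replace(/[\u0300-\u036f]/g, "");
}

console.log(removeAccents("résumé naïve São Paulo")); // "resume naive Sao Paulo"
```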
When to use this tool
Use this tool to normalise international text to plain ASCII for compatibility:
- Generating clean URL slugs from article titles or product names that contain accented characters
- Normalising international names and addresses for systems or databases that only support ASCII input
- Creating search-friendly versions of content with accented characters so users can find it without typing accents
- Preparing multilingual text datasets for ASCII-only NLP models or embedding pipelines
- Normalising names for alphabetical sorting in systems where accented characters sort incorrectly
- Removing diacritics from filenames before uploading to cloud storage or file systems with limited character support
Frequently asked questions
Q: How does diacritic removal work technically?
The tool uses the Unicode NFD algorithm (Normalisation Form D, canonical decomposition), which decomposes composite characters into their constituent parts. For example, 'é' (U+00E9) is decomposed into 'e' (U+0065) + combining acute accent (U+0301). The tool then filters out all characters in the Unicode Combining Diacritical Marks range (U+0300–U+036F), leaving only the base letter 'e'. This is the standard and most reliable method for diacritic removal.
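The decomposition described above can be observed directly in JavaScript/TypeScript, where `String.prototype.normalize` implements the Unicode normalisation forms:

```typescript
// NFD decomposes 'é' (U+00E9) into 'e' (U+0065) followed by the
// combining acute accent (U+0301).
const decomposed = "\u00E9".normalize("NFD");
console.log([...decomposed].map(c => c.codePointAt(0)!.toString(16))); // ["65", "301"]

// Filtering out the combining-mark range leaves only the base letter.
const stripped = decomposed.replace(/[\u0300-\u036f]/g, "");
console.log(stripped); // "e"
```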
Q: Which languages and scripts are supported?
All Latin-script languages with diacritical marks are supported, including French (é, à, ê, ç), Spanish (ñ, á, é, ú), German (ü, ö, ä), Portuguese (ã, â, ó, ê), Italian (è, é, ì, ò), Polish (ą, ę, ó, ś), Czech and Slovak (č, š, ž, ř), Scandinavian languages (å, ø, æ), and many others. Characters from scripts without a direct ASCII equivalent (like Chinese, Arabic, or Hebrew) are not affected.
Q: Are characters like Ø, Þ, Ł, and Ð handled correctly?
These characters do not decompose under NFD normalisation because they are base characters rather than composites — Ø is not 'O + combining stroke', it is its own Unicode character. After NFD + diacritic stripping, Ø remains Ø, Þ remains Þ, Ł remains Ł, and Ð remains Ð. If you need these converted to their nearest ASCII equivalents (O, Th, L, D), a custom transliteration map would be required beyond what NFD provides.
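Such a transliteration map might look like the following. The map entries here are illustrative and deliberately incomplete, not a comprehensive transliteration table:

```typescript
// Hypothetical transliteration map for base characters that NFD
// cannot decompose. Entries are examples only, not exhaustive.
const TRANSLIT: Record<string, string> = {
  "Ø": "O", "ø": "o",
  "Þ": "Th", "þ": "th",
  "Ł": "L", "ł": "l",
  "Ð": "D", "ð": "d",
  "Æ": "AE", "æ": "ae",
  "ß": "ss",
};

function removeAccentsWithMap(text: string): string {
  // First apply the standard NFD + strip step, then map any
  // remaining non-decomposable characters through the table.
  const base = text.normalize("NFD").replace(/[\u0300-\u036f]/g, "");
  return [...base].map(c => TRANSLIT[c] ?? c).join("");
}

console.log(removeAccentsWithMap("Łódź")); // "Lodz"
```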
Q: Does removing accents change the meaning of words?
Yes — in many languages, accents carry phonetic or semantic meaning. In Spanish, 'sí' (yes) and 'si' (if) are different words. In French, 'ou' (or) and 'où' (where) differ. In German, 'schon' (already) and 'schön' (beautiful) are different words. Diacritic removal is a lossy transformation appropriate for technical use cases like slug generation and search normalisation — not for preserving the linguistic integrity of text.
Q: Can I use this to generate URL slugs from international titles?
Yes — this is one of the most common use cases. Paste the title, remove accents, then apply the Lowercase Converter and Kebab Case Converter tools (available on this site) to produce a clean URL slug. For example: 'Café au Lait — René's Recipe' → remove accents → 'Cafe au Lait — Rene's Recipe' → lowercase → 'cafe au lait — rene's recipe' → kebab-case → 'cafe-au-lait-renes-recipe'.
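The full pipeline can be approximated in one function. This is a sketch of the chained steps, not the implementation of the site's converter tools; the apostrophe handling (dropping rather than hyphenating) is an assumption made to match the worked example:

```typescript
// Sketch of an accents-to-slug pipeline: strip diacritics,
// lowercase, then kebab-case the result.
function slugify(title: string): string {
  return title
    .normalize("NFD")
    .replace(/[\u0300-\u036f]/g, "") // strip combining marks
    .toLowerCase()
    .replace(/['\u2019]/g, "")       // drop apostrophes rather than hyphenate them
    .replace(/[^a-z0-9]+/g, "-")     // collapse everything else to hyphens
    .replace(/^-+|-+$/g, "");        // trim leading/trailing hyphens
}

console.log(slugify("Caf\u00E9 au Lait \u2014 Ren\u00E9's Recipe")); // "cafe-au-lait-renes-recipe"
```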
Q: Will the output always be valid ASCII?
Not necessarily — only characters that decompose under NFD into a base letter + combining mark are normalised to ASCII. Characters like Ø, Þ, Ł, and non-Latin Unicode characters remain unchanged. If you need guaranteed ASCII-only output, combine this tool with the 'Remove non-ASCII' option in the Remove Special Characters tool to strip any remaining non-ASCII characters after accent removal.
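As an illustration of that combination, a final ASCII-only pass can simply delete anything still outside the ASCII range after accent removal. This sketch approximates the 'Remove non-ASCII' behaviour; note that non-decomposable characters like Þ are dropped entirely rather than transliterated:

```typescript
// Sketch: accent removal followed by stripping any remaining
// non-ASCII characters, for guaranteed ASCII-only output.
function toAscii(text: string): string {
  return text
    .normalize("NFD")
    .replace(/[\u0300-\u036f]/g, "") // strip combining marks
    .replace(/[^\x00-\x7F]/g, "");   // drop anything still outside ASCII
}

// 'Þ' does not decompose, so it is removed outright here.
console.log(toAscii("Þórshöfn")); // "orshofn"
```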