Conquering Character Encoding Chaos With GNU Recode
Conversion and Identification

In the beginning were C and C++, and hosts of other computer programming languages. These are all based on ASCII (American Standard Code for Information Interchange), which, as the name implies, is based on the English alphabet. That wouldn't be an issue, except there are a lot of other humans in the world, and they don't use the English alphabet.
So along came Unicode to the rescue. Unicode provides a framework for all the alphabets of the world to be represented on computers. UTF-8 is the most popular Unicode implementation because it preserves backwards compatibility with ASCII. Which is all fun to know, but what good is it when you're looking at piles of computer files that need to be converted from ISO-8859-1 (Latin-1, Western European) into whatever encoding you prefer? Naturally, there are a number of utilities just for this task.
This example converts recode-test.txt from ISO-8859-1 to UTF-8 in place:

$ recode ISO-8859-1..UTF-8 recode-test.txt

Check out the GNU Recode Manual for instructions.
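If you'd rather script the conversion, the same transformation recode performs — decoding Latin-1 bytes and writing them back out as UTF-8 — takes only a few lines of Python. This is a minimal sketch, and the function name and file path are hypothetical:

```python
# Re-encode a text file from ISO-8859-1 (Latin-1) to UTF-8,
# the same job recode does in the example above.
from pathlib import Path

def latin1_to_utf8(path):
    """Rewrite the file at `path` from ISO-8859-1 to UTF-8 in place."""
    p = Path(path)
    text = p.read_text(encoding="iso-8859-1")  # decode the Latin-1 bytes
    p.write_text(text, encoding="utf-8")       # write them back as UTF-8
```

For example, the single Latin-1 byte 0xE9 ("é") comes out as the two-byte UTF-8 sequence 0xC3 0xA9.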
That's fast and easy enough, but there's one more job: converting the filename. The convmv command is just the tool for it. This example converts all the ISO-8859-1 filenames in the files/ directory to UTF-8:
$ convmv -f iso-8859-1 -t utf8 --notest files/
Run without the --notest option, convmv does a dry run without changing anything, which is a wise first step.
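The same rename that convmv performs can be sketched in Python: on Linux, filenames are raw byte strings, so the script decodes each name as Latin-1 and re-encodes it as UTF-8. The function name and the dry-run default are my own choices, mirroring convmv's behavior rather than reproducing it exactly:

```python
# Rename files whose names are ISO-8859-1 byte sequences to UTF-8,
# roughly what convmv -f iso-8859-1 -t utf8 does.
import os

def convert_filenames(directory, dry_run=True):
    """Re-encode Latin-1 filenames in `directory` as UTF-8.

    With dry_run=True (like convmv without --notest), only report what
    would change; pass dry_run=False to actually rename the files.
    """
    dir_bytes = os.fsencode(directory)
    for name in os.listdir(dir_bytes):           # raw filename bytes
        new_name = name.decode("iso-8859-1").encode("utf-8")
        if new_name != name:                      # pure ASCII names are unchanged
            old = os.path.join(dir_bytes, name)
            new = os.path.join(dir_bytes, new_name)
            print(f"{old!r} -> {new!r}")
            if not dry_run:
                os.rename(old, new)
```

ASCII-only names encode identically in both charsets, so the script leaves them alone, just as convmv does.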
Maybe you have a file whose encoding you don't know. Upload it to this online tool and it will tell you. You can even do file conversions there.
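You can also make a rough guess locally. One simple heuristic, sketched below, exploits the fact that random 8-bit text rarely forms valid UTF-8, while Latin-1 accepts any byte sequence: try a strict UTF-8 decode and fall back if it fails. This is my own simplification, not what the online tool does; dedicated detectors make far better guesses:

```python
# Crude encoding guesser: valid UTF-8 decodes cleanly, so try it
# first and fall back to assuming an 8-bit legacy encoding.
def guess_encoding(data: bytes) -> str:
    try:
        data.decode("utf-8")                 # strict decode; raises on bad bytes
        return "utf-8 (or plain ASCII)"
    except UnicodeDecodeError:
        return "not UTF-8; possibly ISO-8859-1 or another 8-bit encoding"
```

The heuristic can't distinguish between the various 8-bit charsets (ISO-8859-1 vs. ISO-8859-15, say), which is exactly why statistical detectors and tools like the one above exist.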