chardet

Chardet is a character encoding detection module written in pure JavaScript (TypeScript). It uses occurrence analysis to determine the most probable encoding.

  • Packed size is only 22 KB
  • Works in all environments: Node / Browser / Native
  • Works on all platforms: Linux / Mac / Windows
  • No dependencies
  • No native code / bindings
  • 100% written in TypeScript
  • Extensive code coverage

Installation

npm i chardet

Usage

To return the encoding with the highest confidence:

import chardet from 'chardet';

const encoding = chardet.detect(Buffer.from('hello there!'));
// or
const encoding = await chardet.detectFile('/path/to/file');
// or
const encoding = chardet.detectFileSync('/path/to/file');
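
Note that when no candidate matches, the detected encoding is null rather than a string, so it can be worth guarding the result. A minimal sketch with an arbitrary fallback:

import chardet from 'chardet';

// detect() returns the best-guess encoding name, or null when no
// candidate encoding matches the input.
const encoding = chardet.detect(Buffer.from('hello there!'));

if (encoding === null) {
  // Fallback chosen for illustration only.
  console.log('Could not detect the encoding, assuming UTF-8');
} else {
  console.log(`Detected encoding: ${encoding}`);
}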

To return the full list of possible encodings, use the analyse method:

import chardet from 'chardet';
chardet.analyse(Buffer.from('hello there!'));

The returned value is an array of objects sorted by confidence in descending order:

[
  { confidence: 90, name: 'UTF-8' },
  { confidence: 20, name: 'windows-1252', lang: 'fr' }
];
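
Because the list is already sorted, the first entry corresponds to what detect returns, and you can filter the rest however suits your use case. A small sketch with an arbitrary confidence threshold:

import chardet from 'chardet';

const matches = chardet.analyse(Buffer.from('hello there!'));

// The list is sorted by confidence, so the first entry is the best guess.
const best = matches[0]?.name ?? null;

// Keep only reasonably likely candidates (threshold chosen for illustration).
const likely = matches.filter((match) => match.confidence >= 50);

console.log(best, likely);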

Working with large data sets

When the data set is large and you want to optimize performance (at the cost of some accuracy), you can sample only the first N bytes of the buffer:

chardet
  .detectFile('/path/to/file', { sampleSize: 32 })
  .then(encoding => console.log(encoding));

You can also specify where in the buffer to begin reading:

chardet
  .detectFile('/path/to/file', { sampleSize: 32, offset: 128 })
  .then(encoding => console.log(encoding));
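
The examples above apply sampleSize and offset to detectFile. For a buffer that is already in memory, a similar effect can be achieved by slicing it before calling detect; a sketch mirroring the numbers above:

import chardet from 'chardet';
import { readFileSync } from 'node:fs';

const buffer = readFileSync('/path/to/file');

// Inspect only 32 bytes starting at offset 128, analogous to
// { sampleSize: 32, offset: 128 } in the detectFile example above.
const encoding = chardet.detect(buffer.subarray(128, 128 + 32));
console.log(encoding);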

Supported Encodings:

  • UTF-8
  • UTF-16 LE
  • UTF-16 BE
  • UTF-32 LE
  • UTF-32 BE
  • ISO-2022-JP
  • ISO-2022-KR
  • ISO-2022-CN
  • Shift_JIS
  • Big5
  • EUC-JP
  • EUC-KR
  • GB18030
  • ISO-8859-1
  • ISO-8859-2
  • ISO-8859-5
  • ISO-8859-6
  • ISO-8859-7
  • ISO-8859-8
  • ISO-8859-9
  • windows-1250
  • windows-1251
  • windows-1252
  • windows-1253
  • windows-1254
  • windows-1255
  • windows-1256
  • KOI8-R

Currently only these encodings are supported.
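
The detected name can then be handed to a decoder. Below is a minimal sketch using Node's built-in TextDecoder, which understands many (but not all) of the labels above; labels it does not recognize throw a RangeError, and full coverage may require a dedicated conversion library such as iconv-lite:

import chardet from 'chardet';
import { readFileSync } from 'node:fs';

const bytes = readFileSync('/path/to/file');
const encoding = chardet.detect(bytes) ?? 'UTF-8'; // arbitrary fallback

// TextDecoder accepts most WHATWG encoding labels; unsupported labels
// (for example the UTF-32 variants) throw a RangeError here.
const text = new TextDecoder(encoding).decode(bytes);
console.log(text.slice(0, 100));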

TypeScript?

Yes. Type definitions are included.
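
For reference, a small sketch of how the results can be annotated in a TypeScript project (the types are written out structurally here to keep the example self-contained, rather than naming the module's exported type aliases):

import chardet from 'chardet';

// detect() yields the encoding name or null.
const encoding: string | null = chardet.detect(Buffer.from('hello there!'));

// analyse() yields the candidate list sorted by confidence.
const candidates: { confidence: number; name: string; lang?: string }[] =
  chardet.analyse(Buffer.from('hello there!'));

// detectFile() is asynchronous and resolves to the same string-or-null shape.
const fromFile: Promise<string | null> = chardet.detectFile('/path/to/file');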

References