Improved tokenization of hashtags in Asian languages

Quanteda can tokenize Asian texts thanks to the ICU library’s boundary detection mechanism, but this mechanism causes problems when we analyze social media posts that contain hashtags in Chinese or Japanese. For example, the hashtag “#英国首相仍在ICU但未使用呼吸机#” (“the British prime minister is still in the ICU but not on a ventilator”) in a post about the British prime minister is completely destroyed by quanteda’s current tokenizer. Although we can correct tokenization […]
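The problem is easy to reproduce. Below is a minimal sketch, assuming a recent quanteda release; the exact token boundaries may vary with the ICU version bundled in your build of stringi:

```r
library(quanteda)

# Hypothetical post text reproducing the hashtag from the example above
post <- "#英国首相仍在ICU但未使用呼吸机#"

# ICU boundary detection segments the Chinese text inside the hashtag,
# so it comes back as many separate tokens ("#", "英国", "首相", ...)
# rather than as one intact hashtag token
toks <- tokens(post)
print(toks)
```

One ad-hoc repair (a hypothetical workaround, not necessarily the approach this post goes on to describe) is to extract the `#…#` spans with a regular expression before tokenization, e.g. `stringi::stri_extract_all_regex(post, "#[^#\\s]+#")`, and reattach them as single tokens afterwards.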
