I always recommend that users of the LSX package visualize polarity words with textplot_terms(), because it lets them explain intuitively to others what their fitted LSS models are measuring. I am using this function myself in my project on the construction of a geopolitical threat index (GTI) for multiple countries, including China and Japan, but […]
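For readers who have not tried it, here is a minimal sketch of the workflow. It uses quanteda's built-in inaugural-address corpus and the generic sentiment seed dictionary shipped with LSX, so the corpus, the seed words, and the highlighted terms are illustrative assumptions rather than the actual setup of the GTI project.

```r
library(quanteda)
library(LSX)

# sentence-level dfm, as recommended for LSS
corp <- corpus_reshape(data_corpus_inaugural, to = "sentences")
toks <- tokens(corp, remove_punct = TRUE)
dfmat <- dfm_trim(dfm(toks), min_termfreq = 5)

# fit an LSS model with the generic sentiment seed words shipped with LSX
lss <- textmodel_lss(dfmat, seeds = as.seedwords(data_dictionary_sentiment),
                     k = 300)

# plot word polarity; the highlighted terms are arbitrary examples,
# not the GTI project's actual terms
textplot_terms(lss, highlighted = c("war", "peace", "freedom"))
```

The resulting plot places each word by its polarity score and frequency, which is what makes it easy to show others which words are driving the scale.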
Improved tokenization of hashtags in Asian languages
Quanteda can tokenize Asian-language texts thanks to the ICU library’s boundary detection mechanism, but this causes problems when we analyze social media posts that contain hashtags in Chinese or Japanese. For example, the hashtag “#英国首相仍在ICU但未使用呼吸机#” (“the British prime minister is still in the ICU but not on a ventilator”) in a post about the British prime minister is completely destroyed by quanteda’s current tokenizer. Although we can correct the tokenization […]
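Below is a short sketch of the issue and one possible workaround. The example text and the regular expression for Chinese-style #…# hashtags are assumptions for illustration, and the exact fragments you get will depend on your versions of quanteda and stringi.

```r
library(quanteda)
library(stringi)

txt <- "#英国首相仍在ICU但未使用呼吸机# 的消息正在传播"

toks <- tokens(txt)
as.list(toks)
# ICU word-boundary rules break the hashtag into fragments such as
# "#" "英国" "首相" "仍" "在" "ICU" "但" "未" "使用" "呼吸机" "#"
# (illustrative; the exact output depends on the stringi/ICU version)

# one possible correction: extract the hashtags with a regex, tokenize
# them the same way, and rejoin their fragments with tokens_compound()
tags <- unlist(stri_extract_all_regex(txt, "#[^#\\s]+#"))
toks_fixed <- tokens_compound(toks, pattern = as.list(tokens(tags)),
                              concatenator = "")
as.list(toks_fixed)
```

Because tokens_compound() matches the fragment sequences rather than the raw string, the rejoined hashtag comes back as a single feature, which is usually what we want when counting topics in social media posts.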