LIWC is a popular text analysis package developed and maintained by Pennebaker and colleagues. The latest version of the LIWC dictionary was released in 2015. This dictionary seems more appropriate than classic dictionaries such as the General Inquirer dictionaries for analyzing contemporary materials, because our vocabulary changes over the years. However, LIWC did not work […]
Presentation on my PhD thesis at departmental event
I presented my PhD thesis, titled “Measuring News Bias in Complex Media Systems: A New Approach to Big Media Analysis”, at a departmental event on 9 June.
Workshops on Japanese text analysis using quanteda
I presented how to analyze Japanese texts using quanteda in half-day workshops at Waseda University (22 May) and Kobe University (2 June), organized by Mikihito Tanaka (Waseda) and Atsushi Tago (Kobe). Materials for these workshops are available on GitHub as Introduction to Japanese Text Analysis (IJTA).
Introduction to Japanese text analysis with quanteda
I held a workshop on quanteda at Waseda University. The materials have been published under the title Introduction to Japanese Text Analysis in R (Rによる日本語テキスト分析入門), and I will gradually expand their content. I plan to actively organize more workshops on Japanese text analysis, so please contact me if you are interested.
Presentation on multilingual text analysis methods at Waseda University
At a Graduate School of Political Science seminar at Waseda University, I gave a presentation titled “A Data-Driven Approach to Bilingual Analysis: Representations of US Foreign Policy in Japanese and British Newspapers over 30 Years”. The presentation, part of a research project on American politics and foreign policy, concerned how to apply the same quantitative text analysis methods to documents in different languages (English and Japanese). Some of the methods presented in this seminar will be explained in more concrete terms at a workshop on quantitative analysis of Japanese texts, starting at 3 pm on 22 May.
Upcoming presentation at Waseda University
I have been invited to present a new approach to comparative text analysis at a research seminar at Waseda University (Tokyo) on the 17th. My talk is titled “Data-Driven Approach to Bilingual Text Analysis: Representation of US Foreign Policy in Japanese and British Newspapers in 1985–2016”. Kohei Watanabe will present a new approach to text analysis of […]
Redefining word boundaries by collocation analysis
Quanteda’s tokenizer can segment Japanese and Chinese texts thanks to the stringi package, but its results are not always good, because its underlying library, ICU, recognizes only a limited number of words. For example, the Japanese phrase “ニューヨークのケネディ国際空港” translates as “Kennedy International Airport (ケネディ国際空港) in (の) New York (ニューヨーク)”. Quanteda’s tokenizer (the tokens function) segments this into […]
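The collocation-based merging of word boundaries that this post describes can be sketched in standalone Python: count adjacent token pairs, score them by pointwise mutual information (PMI), and join pairs that score above a threshold. This is an illustrative sketch only, not quanteda’s actual algorithm (quanteda has its own collocation tools in R), and the example token sequences below are hypothetical over-segmented output of the kind ICU might produce.

```python
import math
from collections import Counter

def merge_collocations(token_seqs, min_count=2, pmi_threshold=1.0):
    """Join adjacent token pairs whose PMI exceeds a threshold,
    treating frequent collocations as single words (a sketch of
    collocation-based word-boundary redefinition)."""
    # Count unigrams and adjacent bigrams across all sequences.
    unigrams = Counter(t for seq in token_seqs for t in seq)
    bigrams = Counter((a, b) for seq in token_seqs for a, b in zip(seq, seq[1:]))
    n = sum(unigrams.values())
    # Keep sufficiently frequent pairs whose PMI clears the threshold.
    to_merge = {
        (a, b)
        for (a, b), c in bigrams.items()
        if c >= min_count
        and math.log((c / n) / ((unigrams[a] / n) * (unigrams[b] / n))) >= pmi_threshold
    }
    # Greedily join selected pairs, scanning each sequence left to right.
    merged = []
    for seq in token_seqs:
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) in to_merge:
                out.append(seq[i] + seq[i + 1])
                i += 2
            else:
                out.append(seq[i])
                i += 1
        merged.append(out)
    return merged

# Hypothetical over-segmented tokens (ICU splits 国際空港 into 国際 + 空港).
seqs = [
    ["ニューヨーク", "の", "ケネディ", "国際", "空港"],
    ["東京", "の", "成田", "国際", "空港"],
    ["パリ", "の", "シャルル・ド・ゴール", "国際", "空港"],
]
print(merge_collocations(seqs))
# 国際 + 空港 co-occur in every sequence, so they are joined into 国際空港;
# pairs like (の, ケネディ) occur only once each and are left alone.
```

Applying the function repeatedly would merge longer compounds (e.g. ケネディ + 国際空港) one layer at a time; thresholds here are arbitrary and would need tuning on a real corpus.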
Analyzing Asian texts in R on English Windows machines
R generally handles Unicode well, and we do not see garbled text as long as we use the stringi package. But there are some known bugs. The worst is probably one that has been discussed in the online community: on Windows, R prints character vectors properly, but not character vectors inside a data.frame: > sessionInfo() […]
R and Python text analysis packages performance comparison
Like many other people, I started text analysis in Python, because R was notoriously slow. Python looked like a perfect language for text analysis, and I did a lot of work during my PhD using gensim with home-grown tools. I loved gensim’s LSA, which quickly and consistently decomposes very large document-feature matrices. However, I faced […]
Paper on how to measure news bias by quantitative text analysis
My paper, titled “Measuring news bias: Russia’s official news agency ITAR-TASS’s coverage of the Ukraine crisis”, has been published in the European Journal of Communication. In this piece, I used quantitative text analysis techniques to estimate how much ITAR-TASS’s coverage of the Ukraine crisis was biased by the influence of the Russian government: Objectivity in […]