I am organizing the POLTEXT symposium in Tokyo on 14-15 September 2019. I participated in the conference in 2016 (Croatia) as a presenter and in 2018 (Hungary) as a tutorial instructor, and learnt a lot from other participants. This is the time for me to offer such an opportunity to people from inside and outside […]
Computing document similarity in a large corpus
Since early this year, I have been asked by many people how to compute document (or feature) similarity in a large corpus. They said their function stops because of a lack of space in RAM: Error in .local(x, y, …) : Cholmod error 'problem too large' at file ../Core/cholmod_sparse.c, line 92. This happened in our textstat_simil(margin = "documents") […]
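One way to avoid the error is to compute the similarities in blocks on the sparse document-feature matrix, so that only a slice of the full document-by-document matrix is held in RAM at any time. The sketch below is a minimal illustration of that idea, not necessarily the solution described in the full post; the built-in inaugural corpus, the block size, and the 0.9 threshold are all illustrative assumptions.

```r
library(quanteda)
library(Matrix)

# sparse document-feature matrix (the built-in corpus stands in for a large one)
m <- as(dfm(tokens(data_corpus_inaugural)), "dgCMatrix")

# L2-normalize the rows so that a cross-product gives cosine similarity
m <- Diagonal(x = 1 / sqrt(rowSums(m ^ 2))) %*% m

block_size <- 1000  # number of documents scored per iteration
blocks <- lapply(seq(1, nrow(m), by = block_size), function(s) {
  e <- min(s + block_size - 1, nrow(m))
  sim <- tcrossprod(m[s:e, , drop = FALSE], m)  # (block x n) cosine similarities
  sim@x[sim@x < 0.9] <- 0                       # keep only highly similar pairs
  drop0(sim)                                    # discard the zeroed entries
})
sim_sparse <- do.call(rbind, blocks)  # sparse document-by-document similarity matrix
```

Because each block produces a sparse result after thresholding, the memory footprint stays roughly proportional to the number of retained pairs rather than to the square of the number of documents.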
Quantitative Text Analysis of Japanese Texts
To encourage more Japanese researchers to take an interest in quantitative text analysis, I wrote a paper titled 『日本語の量的分析』 (Quantitative Analysis of Japanese) with Amy Catalinac of New York University. So far, we have received many positive reactions on Twitter.

This paper discusses the application to Japanese of a methodology called quantitative text analysis, which has gained popularity among political scientists in Europe and the United States in recent years. We first describe the background against which quantitative text analysis emerged and explain how it is used in Western political science. Next, so that readers can use quantitative text analysis in their own research, we describe the workflow concretely, noting points that require special attention when analyzing Japanese. Finally, we introduce the statistical models used in Europe and the United States and show, with examples from existing research, that they can also be applied to Japanese documents. We argue that recent technical and methodological developments have made quantitative text analysis of Japanese fully feasible, but also note that institutional issues, such as the availability of well-organized data, need to be addressed for the method to become widely adopted in Japanese political science.
Newsmap is available on CRAN
I am happy to announce that our R package for semi-supervised document classification, newsmap, is available on CRAN. The package is simple in terms of its algorithm but comes with well-maintained geographical seed dictionaries in English, German, Spanish, Russian and Japanese. It was originally created for geographical classification of news articles, but it can also […]
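For context, this is roughly how the package is used: matches against a seed dictionary provide weak labels, and the model then learns which other words are associated with each country. The snippet below is a sketch based on the package documentation; `corp` is a placeholder for a corpus of news articles that you supply.

```r
library(quanteda)
library(newsmap)

# 'corp' is a placeholder for your corpus of news articles
toks <- tokens(corp, remove_punct = TRUE)

# weak labels from the English geographical seed dictionary (level 3 = country)
dfmat_label <- dfm(tokens_lookup(toks, data_dictionary_newsmap_en, levels = 3))

# features from which the model learns country-specific vocabulary
dfmat_feat <- dfm(toks)

model <- textmodel_newsmap(dfmat_feat, y = dfmat_label)
predict(model)  # predicted country code for each document
```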
Presentation at ECPR Hamburg
I presented my latest study on Sputnik News at ECPR Hamburg. The study shows that Russia uses conspiracy theories in Sputnik News articles to promote anti-establishment sentiment in the United States and Britain. The paper and slides are available.
Presentation at R user meeting in Tokyo
I presented Quantitative Analysis of Textual Data with R at a TokyoR event on 15 July hosted by Yahoo Japan. This was a great opportunity for me to reach out to a broad audience of Japanese R users and show them how easy it is to analyze Asian texts using quanteda. It was also really nice to meet […]
Obstruction to Asian-language text analysis
In a presentation titled Internationalizing Text Analysis at a workshop at Waseda University on 27 June, Oul Han and I discussed what is obstructing the adoption of quantitative text analysis techniques in Japan and Korea. Our question is why so few people do quantitative analysis of Japanese and Korean texts, even though it is […]
New page on how to analyze Japanese texts
We have added a new page on the special handling of Japanese texts to the Quanteda Tutorials website. The page will be used in Quantitative Political Methodology at Kobe University next week. It summarizes my posts about Japanese text analysis on this blog. We are planning to add pages about other languages.
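As a small taste of what such special handling involves (the exact steps on the tutorial page may differ, and the hiragana filter below is just one commonly used heuristic), quanteda can segment Japanese out of the box because tokens() relies on the ICU word-boundary rules shipped with stringi:

```r
library(quanteda)

txt <- "日本語の量的テキスト分析は、近年の技術的発展によって十分に可能になった。"

# tokens() segments Japanese via ICU boundary detection, so no external
# morphological analyzer such as MeCab is strictly required
toks <- tokens(txt, remove_punct = TRUE)

# a common extra step: drop tokens written only in hiragana, which are
# mostly grammatical and carry little content
toks <- tokens_remove(toks, pattern = "^[ぁ-ん]+$", valuetype = "regex")

dfm(toks)
```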
Presentation at BEAMS workshop
I presented a technique for longitudinal analysis of media content at the BEAMS (Behavioral and Experimental Analyses in Macro-finance) workshop at Waseda University.
Quantitative text analysis workshop at PolText 2018
I was invited to deliver a workshop on quantitative text analysis at the PolText Incubator Workshop at the Hungarian Academy of Sciences on 9 May 2018. The workshop materials are available in my GitHub repo.