Since early this year, I have been asked by many people how to compute document (or feature) similarity in a large corpus. They said their function stops because of a lack of space in RAM: "Error in .local(x, y, …) : Cholmod error 'problem too large' at file ../Core/cholmod_sparse.c, line 92". This happened in our textstat_simil(margin = "documents") […]
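The error above occurs when a dense document-by-document similarity matrix is too large to allocate at once. The post's code is in R/quanteda; as a hedged illustration only, here is a Python sketch (names like chunked_cosine are mine, not from the post) of the general chunking idea: compute cosine similarities a block of rows at a time so the full dense matrix is never materialized.

```python
# Illustrative sketch, not the author's code: block-wise cosine similarity
# over a sparse document-feature matrix, avoiding one giant dense allocation.
import numpy as np
from scipy import sparse

def chunked_cosine(dfm, chunk=1000):
    """Yield (start_row, dense_block) of row-wise cosine similarities."""
    norms = np.sqrt(dfm.multiply(dfm).sum(axis=1)).A.ravel()
    norms[norms == 0] = 1.0                      # guard empty documents
    normalized = sparse.diags(1.0 / norms) @ dfm  # L2-normalize each row
    for start in range(0, dfm.shape[0], chunk):
        block = normalized[start:start + chunk]
        yield start, (block @ normalized.T).toarray()

# toy 10-document, 5-feature sparse matrix standing in for a real corpus
dfm = sparse.random(10, 5, density=0.5, format="csr", random_state=1)
shapes = [sim.shape for _, sim in chunked_cosine(dfm, chunk=4)]
print(shapes)  # [(4, 10), (4, 10), (2, 10)]
```

Each yielded block can be thresholded or written to disk before the next one is computed, which keeps peak memory proportional to the chunk size rather than to the number of documents squared.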
Analyze big data with small RAM
A lot of people use quanteda to analyze social media posts because it is fast and flexible, but they sometimes face dramatic slowdowns due to memory swapping caused by insufficient RAM. quanteda requires roughly five times as much RAM as the size of the data being analyzed, but it can […]
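Taking the post's "five times the data size" rule of thumb at face value, a quick back-of-envelope check (the helper name and the factor's generality are my assumptions, not quanteda documentation) looks like this:

```python
# Rough sketch based on the post's stated rule of thumb: quanteda needs
# about 5x the data size in RAM, so a 2 GB corpus implies ~10 GB of RAM.
def ram_needed_gb(data_gb, factor=5):
    """Estimate RAM (GB) needed to analyze `data_gb` of text data."""
    return data_gb * factor

print(ram_needed_gb(2))  # 10
```

If the estimate exceeds the machine's physical memory, swapping (and the slowdown described above) is likely.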
Relaxing R version requirement
Until quanteda v1.1, our users needed to have R 3.4.0 or later installed, but we have relaxed the requirement to R 3.1.0, because people working in companies and other large organizations often do not have the latest version of R on their computers and therefore cannot use our package. To quickly investigate why quanteda required R 3.4.0, I wrote […]
Factor analysis in R and Python
Python has a number of statistical modules that allow us to perform analysis without R, but it is always a good idea to compare the outputs of different implementations. I performed factor analysis using Python's scikit-learn module for my dictionary creation system, but the outputs were completely different from those of R's factanal function just […]
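One known source of such discrepancies (offered here as a hedged illustration, not necessarily the cause the post identifies) is rotation: R's factanal() applies varimax rotation to the maximum-likelihood loadings by default, while scikit-learn's FactorAnalysis applies no rotation unless asked. A minimal sketch of the comparison:

```python
# Sketch: scikit-learn FactorAnalysis with and without varimax rotation
# (rotation="varimax" requires scikit-learn >= 0.24). The rotated fit is
# closer in spirit to R's factanal() default; the toy data is random.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))  # stand-in for real survey/term data

fa_raw = FactorAnalysis(n_components=2, random_state=0).fit(X)
fa_rot = FactorAnalysis(n_components=2, rotation="varimax",
                        random_state=0).fit(X)

print(fa_raw.components_.shape)  # (2, 6): factors x variables
print(fa_rot.components_.shape)  # (2, 6)
```

Since loadings are only identified up to rotation (and sign), comparing unrotated scikit-learn output directly against factanal's rotated loadings will look "completely different" even when both fits are correct.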