I am trying to learn how to do text analysis with Twitter data, and I have run into a problem when creating a term frequency matrix.
I can create a corpus from the Spanish-language text (which contains special characters) without any problem.
However, when I create the term frequency matrix (with either the quanteda or the tm library), the Spanish characters are not displayed as expected (instead of canción, I see canciÃ³n).
Any suggestions on how to get the term frequency matrix to store the text with the correct characters?
Thanks for your help.
Please note: I would prefer to use the quanteda library, since I will eventually be building a wordcloud and I think I understand that library's methods better. I am also working on a Windows machine.
I have tried Encoding(tw2) <- "UTF-8" with no luck.
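For reference, a minimal sketch of that attempt (assuming tw2 already holds the raw tweet string defined in the code below):

# Declare the raw string as UTF-8; Encoding() reports how R has the string marked.
Encoding(tw2) <- "UTF-8"
Encoding(tw2)
# The string itself still prints correctly, but the dfm/TDM terms come out garbled as before.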
library(dplyr)
library(tm)
library(quanteda)
#' Create a character string with special Spanish characters:
tw2 <- "RT @None: Enmascarados, si masduro chingán a tarek. Si quieres ahora, la aguantas canción . https://t."
# Cleaning the tweet: removing special punctuation, numbers, http links, and extra spaces:
clean_tw2 <- tolower(tw2)
clean_tw2 = gsub("&", "", clean_tw2)
clean_tw2 = gsub("(rt|via)((?:\\b\\W*@\\w+)+)", "", clean_tw2)
clean_tw2 = gsub("@\\w+", "", clean_tw2)
clean_tw2 = gsub("[[:punct:]]", "", clean_tw2)
clean_tw2 = gsub("http\\w+", "", clean_tw2)
clean_tw2 = gsub("[ \t]{2,}", "", clean_tw2)
clean_tw2 = gsub("^\\s+|\\s+$", "", clean_tw2)
# Create a vector of common Spanish stopwords plus other words I want removed.
myStopwords <- c(stopwords("spanish"),"tarek","vez","ser","ahora")
clean_tw2 <- (removeWords(clean_tw2,myStopwords))
# If we print clean_tw2 we see that all the characters are displayed as expected.
clean_tw2
#' Create corpus using the quanteda library
corp_quan<-corpus(clean_tw2)
# The corpus created via quanteda displays the characters as expected.
corp_quan$documents$texts
#' Create corpus using the tm library
corp_td<-Corpus(VectorSource(clean_tw2))
#' Remove common Spanish stopwords from the corpus.
#' If we inspect corp_td, we see that the characters and words are displayed correctly.
inspect(corp_td)
# Create the DFM with quanteda library.
tdm_quan<-dfm(corp_quan)
# Here we see that the Spanish characters are displayed incorrectly, for example: canción = canciÃ³n
tdm_quan
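As an extra check (my addition, not part of the original workflow), listing the feature names directly shows the same garbling; featnames() is quanteda's accessor for a dfm's features and Encoding() reports how each term is marked:

# Inspect the dfm's feature names and their declared encodings.
featnames(tdm_quan)
Encoding(featnames(tdm_quan))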
# Create the TDM with the tm library
tdm_td<-TermDocumentMatrix(corp_td)
# Here we see that the Spanish characters are displayed incorrectly (e.g. canción = canciÃ³n), and "si" is missing.
tdm_td$dimnames$Terms
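The same kind of check on the tm side (again just a diagnostic sketch; Terms() is tm's accessor for the term vector of a TermDocumentMatrix):

# Inspect how the TDM's terms are marked.
Encoding(Terms(tdm_td))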