Text-mining experiments with the R tm package
The tm package provides comprehensive text-mining support for R. Load it before doing anything else; the vignette command brings up the relevant documentation. This article walks through using the tm package in five areas: data import, corpus handling, preprocessing, metadata management, and creating term-document matrices.
> library(tm) // the default R installation does not ship with tm; download the package from http://www.r-project.org/. Note that many tm functions depend on other packages, so from the same site also download the rJava, Snowball, zoo, XML, slam, Rz, RWeka and matlab win32 packages and unpack them into the default library directory.
> vignette("tm") // opens tm.pdf, an English document describing the use of the tm package and its functions
1. Data import:
> txt <- system.file("texts", "txt", package = "tm") // stores the directory C:\Program Files\R\R-2.15.1\library\tm\texts\txt in the variable txt
> (ovid <- Corpus(DirSource(txt), readerControl = list(language = "lat"))) // reads the 5 files under the txt directory into the corpus ovid; language = "lat" indicates the directory txt contains Latin (lat) texts
In addition, VectorSource is quite useful, since it can create a corpus from character vectors, e.g.:
> docs <- c("This is a text.", "This another one.")
> Corpus(VectorSource(docs)) // A corpus with 2 text documents
Finally, create a corpus from some Reuters documents as an example for later use:
> reut21578 <- system.file("texts", "crude", package = "tm")
> reuters <- Corpus(DirSource(reut21578), readerControl = list(reader = readReut21578XML)) // reads the 20 XML files under C:\Program Files\R\R-2.15.1\library\tm\texts\crude into the corpus reuters; this requires the XML package (downloaded above).
> inspect(ovid[1:2]) // prints the output below; since identical(ovid[[2]], ovid[["ovid_2.txt"]]) is TRUE, inspect(ovid[c("ovid_1.txt", "ovid_2.txt")]) gives the same result:
[screenshot: inspect() output for the first two documents]
2. Transformations:
> reuters <- tm_map(reuters, as.PlainTextDocument) // converts the documents to plain text documents, i.e. strips the XML markup
> reuters <- tm_map(reuters, stripWhitespace) // strips extra whitespace
> reuters <- tm_map(reuters, tolower) // converts the text to lower case
> reuters <- tm_map(reuters, removeWords, stopwords("english")) // removes English stopwords
Note: when segmenting Chinese text, there are no spaces between words as in English. Fortunately the Java-based mmseg4j segmenter solves this problem; just load the rJava and rmmseg4j packages in the R console. For example:
> mmseg4j("中国人民从此站起来了")
[1] 中国 人民 从此 站 起来
3. Filters:
> query <- "id == '237' & heading == 'INDONESIA SEEN AT CROSSROADS OVER ECONOMIC CHANGE'" // query is a character string specifying conditions on documents: here id == 237 and the given heading
> tm_filter(reuters, FUN = sFilter, query) // A corpus with 1 text document, as can be seen from the data
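Conceptually, the filter step just keeps the corpus elements for which a predicate returns TRUE. A minimal base-R sketch of that idea, with plain lists standing in for tm documents (the field names and values here are invented for illustration, not tm's actual internals):

```r
# Toy sketch of what tm_filter does: keep only the elements of a
# collection that satisfy a predicate. Plain R lists stand in for
# the corpus; ids and headings are made up for illustration.
docs <- list(doc1 = list(id = "237", heading = "INDONESIA SEEN AT CROSSROADS"),
             doc2 = list(id = "127", heading = "DIAMOND SHAMROCK CUTS PRICES"))
keep <- vapply(docs, function(d) d$id == "237", logical(1))
matched <- docs[keep]
names(matched)   # only "doc1" satisfies the predicate
```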
4. Metadata management
> DublinCore(crude[[1]], "Creator") <- "Ano Nymous" // the first XML file originally carries no author; this statement sets that attribute, and other attributes can be changed analogously.
> meta(crude[[1]]) // displays the metadata of the first document, shown below
[screenshot: meta() output for the first document]
> meta(crude, tag = "test", type = "corpus") <- "test meta"
> meta(crude, type = "corpus") // after the change, the metadata displays as follows:
[screenshot: updated corpus metadata]
5. Creating term-document matrices
> dtm <- DocumentTermMatrix(reuters)
> inspect(dtm[1:5, 100:105]) // prints:
A document-term matrix (5 documents, 6 terms)
Non-/sparse entries: 1/29
Sparsity : 97%
Maximal term length: 10
Weighting : term frequency (tf)
Terms
Docs abdul-aziz ability able abroad, abu accept
127 0 0 0 0 0 0
144 0 2 0 0 0 0
191 0 0 0 0 0 0
194 0 0 0 0 0 0
211 0 0 0 0 0 0
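What DocumentTermMatrix produces can be illustrated in a few lines of base R: rows are documents, columns are terms, and each cell counts how often a term occurs in a document. This toy sketch uses a naive whitespace tokenizer and invented sentences; it is the underlying idea, not tm's implementation:

```r
# Toy document-term matrix in base R: rows = documents,
# columns = vocabulary terms, cells = term frequencies.
docs   <- c("crude oil prices rise", "oil exports fall", "opec cuts crude output")
tokens <- strsplit(tolower(docs), "\\s+")      # naive whitespace tokenizer
terms  <- sort(unique(unlist(tokens)))         # the vocabulary
dtm_toy <- t(sapply(tokens, function(tok) table(factor(tok, levels = terms))))
rownames(dtm_toy) <- paste0("doc", seq_along(docs))
dtm_toy["doc1", "oil"]   # 1: "oil" occurs once in the first document
```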
6. Further operations on the term-document matrix
> findFreqTerms(dtm, 5) // finds the terms that occur at least 5 times across these 20 documents; prints:
[1] “15.8” “accord” “agency” “ali”
[5] “analysts” “arab” “arabia” “barrel.”
[9] “barrels” “bpd” “commitment” “crude”
[13] “daily” “dlrs” “economic” “emergency”
[17] “energy” “exchange” “exports” “feb”
[21] “futures” “government” “gulf” “help”
[25] “hold” “international” “january” “kuwait”
[29] “march” “market”
> findAssocs(dtm, "opec", 0.8) // finds associations (i.e. terms which correlate) with at least 0.8 correlation with the term opec
opec prices. 15.8
1.00 0.81 0.80
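The association score is essentially the Pearson correlation between term-frequency columns of the matrix. A base-R sketch of that computation on an invented toy matrix (not tm's code, just the idea behind findAssocs):

```r
# What findAssocs computes, sketched in base R: the correlation between
# one term's count column and every other term's column.
# Toy matrix: rows = documents, columns = term frequencies (invented data).
m <- cbind(opec  = c(3, 0, 2, 0, 1),
           crude = c(2, 0, 2, 0, 1),
           wheat = c(0, 4, 0, 3, 0))
assoc <- cor(m[, "opec"], m[, colnames(m) != "opec"])
round(assoc, 2)   # "crude" correlates strongly with "opec"; "wheat" negatively
```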
To examine the frequencies of a specific set of terms across the documents, build a dictionary by hand and pass it as a parameter when constructing the matrix:
> d <- Dictionary(c("prices", "crude", "oil"))
> inspect(DocumentTermMatrix(reuters, list(dictionary = d)))
Because the generated term-document matrix dtm is sparse, reduce its dimensionality and then convert it to a standard data frame:
> dtm2 <- removeSparseTerms(dtm, sparse = 0.95) // the smaller the sparse value, the fewer terms are retained
> data <- as.data.frame(inspect(dtm2)) // converting the term-document matrix to a data frame makes it ready for clustering and other operations; see the next section
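The idea behind removeSparseTerms can be sketched in base R: drop every term (column) whose fraction of zero entries exceeds the sparsity threshold. This is an illustration on invented data; tm's exact boundary handling may differ slightly:

```r
# Sketch of the removeSparseTerms idea: discard terms that are zero
# in too large a fraction of documents. Toy matrix, invented counts.
m <- cbind(oil = c(1, 2, 0, 1), opec = c(0, 0, 0, 3), the = c(5, 4, 6, 2))
sparse    <- 0.5                        # keep terms with at most 50% zeros
zero_frac <- colMeans(m == 0)           # per-term fraction of zero entries
m2 <- m[, zero_frac <= sparse, drop = FALSE]
colnames(m2)   # "oil" and "the" survive; "opec" (75% zeros) is dropped
```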
7. From here on, any tool in R can be brought to bear. Below is a try at hierarchical clustering:
> data.scale <- scale(data)
> d <- dist(data.scale, method = "euclidean")
> fit <- hclust(d, method = "ward")
> plot(fit) // plots the dendrogram
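The same pipeline (scale, then dist, then hclust) runs on any numeric matrix. A self-contained sketch on toy data, using cutree to extract flat clusters from the dendrogram; note that in later R versions the "ward" method name was renamed "ward.D", which this example uses:

```r
# Self-contained hierarchical clustering on toy 2-D data, mirroring the
# steps above: scale -> dist -> hclust -> cut the tree into groups.
set.seed(42)
toy <- rbind(matrix(rnorm(20, mean = 0), ncol = 2),   # 10 points near 0
             matrix(rnorm(20, mean = 5), ncol = 2))   # 10 points near 5
d   <- dist(scale(toy), method = "euclidean")
fit <- hclust(d, method = "ward.D")    # "ward" was renamed "ward.D"
groups <- cutree(fit, k = 2)           # cut the dendrogram into 2 clusters
table(groups)
```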