
Using NLTK's collocation features to extract Chinese phrases and domain terms to improve word-segmentation accuracy

Created: 2017-03-31
# Discover bigram and trigram collocations with nltk + jieba.
import jieba
import nltk
from nltk.collocations import BigramCollocationFinder, TrigramCollocationFinder

train_corpus = "测试数据库,用户支付表,支付金额,支付用户,测试数据库,用户支付表,支付金额,支付用户"
bigram_measures = nltk.collocations.BigramAssocMeasures()
trigram_measures = nltk.collocations.TrigramAssocMeasures()

# Rank bigrams by pointwise mutual information (PMI), filtering out punctuation tokens.
finder = BigramCollocationFinder.from_words(jieba.cut(train_corpus))
finder.apply_word_filter(lambda w: w in [",", ".", ",", "。"])
print(finder.nbest(bigram_measures.pmi, 10))

# Same procedure for trigrams.
finder = TrigramCollocationFinder.from_words(jieba.cut(train_corpus))
finder.apply_word_filter(lambda w: w in [",", ".", ",", "。"])
print(finder.nbest(trigram_measures.pmi, 10))
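
The collocations found this way can be fed back into jieba as user-dictionary entries so that segmentation keeps them as single tokens, which is the accuracy gain the title refers to. A minimal sketch of that step (registering terms at runtime with jieba.add_word is one option; jieba.load_userdict with a dictionary file is the persistent alternative; this loop is an illustration, not part of the original post):

# Register each top-ranked bigram as one jieba token so later cuts keep it intact.
bigram_finder = BigramCollocationFinder.from_words(jieba.cut(train_corpus))
bigram_finder.apply_word_filter(lambda w: w in [",", ".", ",", "。"])
for w1, w2 in bigram_finder.nbest(bigram_measures.pmi, 10):
    jieba.add_word(w1 + w2)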


# Discover bigrams with gensim + jieba.
import jieba
import gensim

mddesc = ["测试数据库", "用户支付表", "支付金额", "支付用户"]
train_corpus = []
for desc in mddesc:
    # Append each tokenized description twice: with a single occurrence the
    # bigram count equals min_count and the default score stays at zero.
    train_corpus.append(list(jieba.cut(desc)))
    train_corpus.append(list(jieba.cut(desc)))

# Set the params (min_count, threshold) carefully when using a small corpus.
phrases = gensim.models.phrases.Phrases(train_corpus, min_count=1, threshold=0.1)
bigram = gensim.models.phrases.Phraser(phrases)

sentence = "从用户支付表中选择支付金额大于5的用户。"
tokens = list(jieba.cut(sentence))
# The phraser joins detected bigrams with "_"; strip the separator to
# recover the merged Chinese terms.
merged = [s.replace("_", "") for s in bigram[tokens]]
print(merged)
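
Why append every description twice? gensim's default bigram score (the original_scorer described in the Phrases documentation) is (count_ab - min_count) / (count_a * count_b) * vocab_size, so a pair seen exactly min_count times scores zero and can never pass a positive threshold. A back-of-the-envelope sketch of that formula, with toy counts assumed purely for illustration:

# Sketch of gensim's default bigram score (see the Phrases docs):
#   score = (count_ab - min_count) / (count_a * count_b) * vocab_size
def original_score(count_a, count_b, count_ab, vocab_size, min_count):
    return (count_ab - min_count) / (count_a * count_b) * vocab_size

# Seen once with min_count=1: score is 0.0, below any positive threshold.
print(original_score(1, 1, 1, vocab_size=8, min_count=1))
# Duplicating the corpus doubles the counts: 2.0 clears threshold=0.1.
print(original_score(2, 2, 2, vocab_size=8, min_count=1))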


References:
https://radimrehurek.com/gensim/models/phrases.html
http://www.nltk.org/howto/collocations.html
http://blog.sina.com.cn/s/blog_630c58cb0100vkix.html
http://nullege.com/codes/search/nltk.metrics.TrigramAssocMeasures

Conclusion: NLTK makes it easy to discover bigram and trigram collocations, which is useful for finding common word pairings, domain-specific terms, and new words. gensim offers similar functionality, but it is somewhat more cumbersome and not gensim's strong suit.