I compute TF-IDF values for the terms in my documents using scikit-learn with the following commands:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer

count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(documents)
tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)
X_train_tf is a scipy sparse matrix of shape (2257, 35788). How can I get the TF-IDF values for a specific document? More specifically, how can I find the words with the highest TF-IDF values in a given document?
You can use scikit-learn's TfidfVectorizer:
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np
from scipy.sparse import csr_matrix  # needed only if you want to save tfidf_matrix

tf = TfidfVectorizer(input='filename', analyzer='word', ngram_range=(1, 6),
                     min_df=0, stop_words='english', sublinear_tf=True)
tfidf_matrix = tf.fit_transform(corpus)
The tfidf_matrix above holds the TF-IDF values for every document in the corpus. It is a large sparse matrix. Now,
feature_names = tf.get_feature_names_out()  # tf.get_feature_names() in scikit-learn < 1.0
This gives you the list of all the tokens, n-grams, or words. For the first document in the corpus:
doc = 0
feature_index = tfidf_matrix[doc, :].nonzero()[1]
tfidf_scores = zip(feature_index, [tfidf_matrix[doc, x] for x in feature_index])
Let's print them:
for w, s in [(feature_names[i], s) for (i, s) in tfidf_scores]:
    print(w, s)
Here is another, simpler solution in Python 3 using the pandas library:
from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd

vect = TfidfVectorizer()
tfidf_matrix = vect.fit_transform(documents)
df = pd.DataFrame(tfidf_matrix.toarray(), columns=vect.get_feature_names_out())
print(df)