Tf–idf
In information retrieval, tf–idf (also written TF*IDF, TFIDF, or TF–IDF), short for term frequency–inverse document frequency, is a numerical statistic intended to reflect how important a word is to a document in a collection or corpus. It is often used as a weighting factor in information retrieval searches, text mining, and user modeling.
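As an illustration, the following is a minimal sketch of one common tf–idf weighting scheme: raw term frequency multiplied by the inverse document frequency idf = log(N / df), where N is the number of documents and df is the number of documents containing the term. The function name, tokenization, and toy documents are assumptions made for the example; practical systems typically use smoothed or normalized variants of both factors.

```python
import math
from collections import Counter

def tf_idf(documents):
    """Return per-document tf-idf weights for tokenized documents.

    Uses raw term counts for tf and idf = log(N / df), one common
    variant among several weighting schemes.
    """
    n_docs = len(documents)
    # Document frequency: number of documents containing each term.
    df = Counter()
    for doc in documents:
        df.update(set(doc))

    weights = []
    for doc in documents:
        tf = Counter(doc)
        weights.append({
            term: count * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "cats and dogs are pets".split(),
]
for w in tf_idf(docs):
    print(w)
```

Note that a term occurring in every document (such as "the" above) receives a weight of zero, since log(N / N) = 0, which is exactly the discounting of ubiquitous words that idf is meant to provide.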