# text_analysis 0.11.0
Text analyzer that extracts tokens from text for use in full-text search queries and indexes.
Tokenizes text, computes document readability and compares terms.
THIS PACKAGE IS PRE-RELEASE, IN ACTIVE DEVELOPMENT AND SUBJECT TO DAILY BREAKING CHANGES.
## Overview
The `text_analysis` library provides methods to tokenize text, compute readability scores for a document and compare the similarity of words. It is intended to be used as part of an information retrieval system.
Refer to the references to learn more about information retrieval systems and the theory behind this library.
### Tokenization

Tokenization comprises the following steps (a simplified sketch follows the list):

- a *term splitter* splits text into a list of terms at appropriate places like white-space and mid-sentence punctuation;
- a *character filter* manipulates terms prior to stemming and tokenization (e.g. changing case and/or removing non-word characters);
- a *term filter* manipulates the terms by splitting compound or hyphenated terms or applying stemming and lemmatization. The `termFilter` can also filter out `stopwords`; and
- the *tokenizer* converts the resulting terms to a collection of `tokens` that contain the term and a pointer to the position of the term in the source text.
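The sketch below illustrates these steps in plain Dart, independently of the library's classes. `SimpleToken`, `simpleTokenize` and the stopword set are illustrative names only and are not part of the package API.

```dart
/// A minimal, illustrative token: a term and its position in the source.
class SimpleToken {
  SimpleToken(this.term, this.position);
  final String term;
  final int position;
}

/// Hypothetical stopwords used by the term filter below.
const simpleStopwords = {'a', 'an', 'and', 'of', 'the'};

/// A naive tokenization pipeline: split, filter, emit (term, position).
List<SimpleToken> simpleTokenize(String text) {
  // Term splitter: split the text at white-space.
  final terms = text.split(RegExp(r'\s+'));
  final tokens = <SimpleToken>[];
  for (var i = 0; i < terms.length; i++) {
    // Character filter: change case and remove non-word characters.
    final term = terms[i].toLowerCase().replaceAll(RegExp(r'[^a-z0-9]'), '');
    // Term filter: exclude empty terms and stopwords.
    if (term.isEmpty || simpleStopwords.contains(term)) continue;
    // Tokenizer: emit the term with a pointer to its position.
    tokens.add(SimpleToken(term, i));
  }
  return tokens;
}

void main() {
  for (final token in simpleTokenize('The Australian platypus is a mammal.')) {
    print('${token.term} @ ${token.position}');
  }
}
```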
The String extension method `Set<KGram> kGrams([int k = 3])` parses a set of k-grams of length `k` from a term. The default k-gram length is 3 (a tri-gram).
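Assuming the extension is in scope via the package import shown under Usage, parsing k-grams from a term looks like this (the expected output follows the `k-gram` definition in the Definitions section):

```dart
import 'package:text_analysis/text_analysis.dart';

void main() {
  // Tri-grams (the default, k = 3) for "castle":
  // {$ca, cas, ast, stl, tle, le$}
  print('castle'.kGrams());
  // Bi-grams (k = 2).
  print('castle'.kGrams(2));
}
```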
### Readability

The `TextDocument` enumerates a text document's paragraphs, sentences, terms and tokens and computes readability measures:

- the average number of words in each sentence;
- the average number of syllables per word;
- the *Flesch reading ease score*, a readability measure calculated from sentence length and word length on a 100-point scale; and
- the *Flesch-Kincaid grade level*, a readability measure relative to U.S. school grade level.
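For reference, the published Flesch formulas in terms of the two averages above (these are the standard definitions per Wikipedia (6); the package's own implementation may differ in rounding or bounds):

```dart
/// Flesch reading ease on a 100-point scale.
double fleschReadingEase(double wordsPerSentence, double syllablesPerWord) =>
    206.835 - 1.015 * wordsPerSentence - 84.6 * syllablesPerWord;

/// Flesch-Kincaid grade level (a U.S. school grade).
double fleschKincaidGradeLevel(
        double wordsPerSentence, double syllablesPerWord) =>
    0.39 * wordsPerSentence + 11.8 * syllablesPerWord - 15.59;

void main() {
  print(fleschReadingEase(15, 1.5)); // ≈ 64.7: plain English
  print(fleschKincaidGradeLevel(15, 1.5)); // ≈ 8.0: about 8th grade
}
```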
### String Comparison

The following String extension methods can be used for comparing terms (a usage sketch follows the list):

- `lengthDistance` returns a normalized measure of the difference in length between two terms on a log (base 2) scale;
- `lengthSimilarity` returns the similarity in length between two terms on a scale of 0.0 to 1.0 (equivalent to `1 - lengthDistance`, with a lower bound of 0.0);
- `lengthSimilarityMap` returns a hashmap of terms to their `lengthSimilarity` with a term;
- `jaccardSimilarity` returns the Jaccard similarity index of two terms; and
- `jaccardSimilarityMap` returns a hashmap of terms to their Jaccard similarity index with a term.
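A usage sketch, assuming each method takes the term to compare against as its argument and the map variants take a collection of candidate terms; the argument shapes are assumptions, so check the API documentation:

```dart
import 'package:text_analysis/text_analysis.dart';

void main() {
  // Compare a (possibly misspelled) term with a candidate term.
  print('bord'.jaccardSimilarity('board'));
  print('bord'.lengthSimilarity('board'));
  // Map a collection of candidate terms to their similarity with the term.
  print('bord'.jaccardSimilarityMap(['board', 'broad', 'bored']));
}
```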
## Usage

In the `pubspec.yaml` of your Dart or Flutter project, add the following dependency:

```yaml
dependencies:
  text_analysis: <latest version>
```

In your code file add the following import:

```dart
import 'package:text_analysis/text_analysis.dart';
```
Basic English tokenization can be performed by using a `TextTokenizer` instance with the default text analyzer and no token filter:

```dart
/// Use a TextTokenizer instance to tokenize the [text] using the default
/// [English] analyzer.
final document = await TextTokenizer().tokenize(text);
```
To analyze text or a document, hydrate a `TextDocument` to obtain the text statistics and readability scores:

```dart
// get some sample text
final sample =
    'The Australian platypus is seemingly a hybrid of a mammal and '
    'reptilian creature.';

// hydrate the TextDocument
final textDoc = await TextDocument.analyze(sourceText: sample);

// print the Flesch reading ease score
print('Flesch Reading Ease: '
    '${textDoc.fleschReadingEaseScore().toStringAsFixed(1)}');
// prints "Flesch Reading Ease: 37.5"
```
For more complex text analysis:

- implement a `TextAnalyzer` for a different language or for non-language documents;
- implement a custom `TextTokenizer` or extend `TextTokenizerBase`; and/or
- pass in a `TokenFilter` function to a `TextTokenizer` to manipulate the tokens after tokenization, as shown in the examples and sketched below; and/or
- extend `TextDocumentBase`.
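As a sketch of the third option, a token filter could drop very short tokens after tokenization. The `TokenFilter` signature and the `tokenFilter` constructor parameter below are assumptions based on the description above, not the confirmed API; the package examples show the actual usage.

```dart
import 'package:text_analysis/text_analysis.dart';

/// Hypothetical token filter: keep only tokens longer than two characters.
Future<List<Token>> shortTokenFilter(List<Token> tokens) async =>
    tokens.where((token) => token.term.length > 2).toList();

void main() async {
  final tokens = await TextTokenizer(tokenFilter: shortTokenFilter)
      .tokenize('The Australian platypus is seemingly a hybrid.');
  print(tokens.map((token) => token.term).toList());
}
```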
Please see the examples for more details.
## API

The key interfaces of the `text_analysis` library are briefly described in this section. Please refer to the documentation for details.
### TextAnalyzer
The TextAnalyzer interface exposes language-specific properties and methods used in text analysis:
- characterFilter is a function that manipulates text prior to stemming and tokenization;
- termFilter is a filter function that returns a collection of terms from a term: an empty collection if the term is to be excluded from analysis, multiple terms if the term is split (e.g. at hyphens), and/or modified term(s), for example after applying a stemmer algorithm;
- termSplitter returns a list of terms from text;
- sentenceSplitter splits text into a list of sentences at sentence and line endings;
- paragraphSplitter splits text into a list of paragraphs at line endings; and
- syllableCounter returns the number of syllables in a word or text.
The English implementation of TextAnalyzer is included in this library.
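To get a feel for what an implementation covers, the standalone class below mirrors the members described above with naive English rules. It is illustrative only: the member signatures are assumptions and do not implement the library's actual TextAnalyzer typedefs.

```dart
/// Illustrative only: naive analyzer members mirroring TextAnalyzer.
class NaiveEnglishAnalyzer {
  /// Manipulates text prior to stemming and tokenization.
  String characterFilter(String text) =>
      text.toLowerCase().replaceAll(RegExp(r"[^a-z0-9\s\-']"), '');

  /// Returns a collection of terms from a term (splits hyphenated terms).
  List<String> termFilter(String term) =>
      term.contains('-') ? term.split('-') : [term];

  /// Returns a list of terms from text.
  List<String> termSplitter(String text) =>
      text.split(RegExp(r'\s+')).where((term) => term.isNotEmpty).toList();

  /// Splits text into sentences at sentence endings.
  List<String> sentenceSplitter(String text) =>
      text.split(RegExp(r'(?<=[.!?])\s+'));

  /// Splits text into paragraphs at line endings.
  List<String> paragraphSplitter(String text) => text
      .split(RegExp(r'\n+'))
      .where((paragraph) => paragraph.trim().isNotEmpty)
      .toList();

  /// Naive syllable count: the number of vowel groups in the word.
  int syllableCounter(String word) =>
      RegExp(r'[aeiouy]+').allMatches(word.toLowerCase()).length;
}
```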
### TextTokenizer

The TextTokenizer extracts tokens from text for use in full-text search queries and indexes. It uses a TextAnalyzer and a token filter in the tokenize and tokenizeJson methods, which return a list of tokens from text or from a JSON document (a sketch of both follows). An unnamed factory constructor hydrates an implementation class.
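A hedged sketch of both methods; the JSON document shape and the `tokenizeJson` parameters shown are assumptions based on the description above:

```dart
import 'package:text_analysis/text_analysis.dart';

void main() async {
  final tokenizer = TextTokenizer();
  // Tokenize plain text.
  final textTokens = await tokenizer.tokenize('Hello, full-text search!');
  // Tokenize the text fields of a JSON document (field names illustrative).
  final jsonTokens = await tokenizer.tokenizeJson(
      {'title': 'Platypus', 'body': 'The Australian platypus is a mammal.'});
  print('${textTokens.length} + ${jsonTokens.length} tokens');
}
```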
### TextDocument

The TextDocument object model enumerates a text document's paragraphs, sentences, terms and tokens and provides functions that return text analysis measures:

- averageSentenceLength is the average number of words per sentence;
- averageSyllableCount is the average number of syllables per word;
- wordCount is the total number of words in the sourceText;
- fleschReadingEaseScore is a readability measure calculated from sentence length and word length on a 100-point scale. The higher the score, the easier it is to understand the document; and
- fleschKincaidGradeLevel is a readability measure relative to U.S. school grade level, also calculated from sentence length and word length.
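Building on the usage example above, the remaining measures can be read from the same hydrated document. Whether each member is a getter or a method is not confirmed here; the call syntax below mirrors the `fleschReadingEaseScore()` call in the usage example and is an assumption:

```dart
import 'package:text_analysis/text_analysis.dart';

void main() async {
  final textDoc = await TextDocument.analyze(
      sourceText: 'The Australian platypus is seemingly a hybrid of a '
          'mammal and reptilian creature.');
  print('Word count: ${textDoc.wordCount()}');
  print('Average sentence length: ${textDoc.averageSentenceLength()}');
  print('Average syllable count: ${textDoc.averageSyllableCount()}');
  print('Flesch-Kincaid grade: ${textDoc.fleschKincaidGradeLevel()}');
}
```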
## Definitions

The following definitions are used throughout the documentation (a short worked example follows the list):
- `corpus` - the collection of `documents` for which an `index` is maintained.
- `character filter` - filters characters from text in preparation of tokenization.
- `dictionary` - a hash of `terms` (`vocabulary`) to the frequency of occurrence in the `corpus` documents.
- `document` - a record in the `corpus` that has a unique identifier (`docId`) in the `corpus`'s primary key and that contains one or more text fields that are indexed.
- `document frequency (dFt)` - the number of documents in the `corpus` that contain a term.
- `Flesch reading ease score` - a readability measure calculated from sentence length and word length on a 100-point scale. The higher the score, the easier it is to understand the document (Wikipedia (6)).
- `Flesch-Kincaid grade level` - a readability measure relative to U.S. school grade level, also calculated from sentence length and word length (Wikipedia (6)).
- `index` - an inverted index used to look up `document` references from the `corpus` against a `vocabulary` of `terms`.
- `index-elimination` - selecting a subset of the entries in an index where the `term` is in the collection of `terms` in a search phrase.
- `inverse document frequency (iDft)` - equal to log (N / `dFt`), where N is the total number of documents in the `corpus`. The `iDft` of a rare term is high, whereas the `iDft` of a frequent term is likely to be low.
- `Jaccard index` - measures similarity between finite sample sets, defined as the size of the intersection divided by the size of the union of the sample sets (from Wikipedia).
- `JSON` - an acronym for "JavaScript Object Notation", a common format for persisting data.
- `k-gram` - a sequence of (any) k consecutive characters from a `term`. A `k-gram` can start with "$", denoting the start of the term, and end with "$", denoting the end of the term. The 3-grams for "castle" are { $ca, cas, ast, stl, tle, le$ }.
- `lemmatizer` - lemmatisation (or lemmatization) in linguistics is the process of grouping together the inflected forms of a word so they can be analysed as a single item, identified by the word's lemma, or dictionary form (from Wikipedia).
- `postings` - a separate index that records which `documents` the `vocabulary` occurs in. In a positional `index`, the postings also record the positions of each `term` in the `text` to create a positional inverted `index`.
- `postings list` - a record of the positions of a `term` in a `document`. A position of a `term` refers to the index of the `term` in an array that contains all the `terms` in the `text`. In a zoned `index`, the `postings lists` record the positions of each `term` in the `text` of a `zone`.
- `term` - a word or phrase that is indexed from the `corpus`. The `term` may differ from the actual word used in the corpus depending on the `tokenizer` used.
- `term filter` - filters unwanted terms from a collection of terms (e.g. stopwords), breaks compound terms into separate terms and/or manipulates terms by invoking a `stemmer` and/or `lemmatizer`.
- `stemmer` - stemming is the process of reducing inflected (or sometimes derived) words to their word stem, base or root form, generally a written word form (from Wikipedia).
- `stopwords` - common words in a language that are excluded from indexing.
- `term frequency (Ft)` - the frequency of a `term` in an index or indexed object.
- `term position` - the zero-based index of a `term` in an ordered array of `terms` tokenized from the `corpus`.
- `text` - the indexable content of a `document`.
- `token` - a representation of a `term` in a text source returned by a `tokenizer`. The token may include information about the `term` such as its position(s) (`term position`) in the text or frequency of occurrence (`term frequency`).
- `token filter` - returns a subset of `tokens` from the tokenizer output.
- `tokenizer` - a function that returns a collection of `tokens` from `text`, after applying a character filter, `term` filter, stemmer and/or lemmatizer.
- `vocabulary` - the collection of `terms` indexed from the `corpus`.
- `zone` - the field or zone of a document that a term occurs in, used for parametric indexes or where scoring and ranking of search results attribute a higher score to documents that contain a term in a specific zone (e.g. the title rather than the body of a document).
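As a worked example of two of the definitions above, the snippet below computes inverse document frequency and the Jaccard index using the standard formulas (this is not the package's code; the base-10 logarithm is a common convention, not mandated by the definition):

```dart
import 'dart:math';

/// iDft = log(N / dFt), where N is the number of documents in the corpus.
double inverseDocumentFrequency(int n, int dFt) => log(n / dFt) / ln10;

/// Jaccard index: size of the intersection over size of the union.
double jaccardIndex<T>(Set<T> a, Set<T> b) =>
    a.intersection(b).length / a.union(b).length;

void main() {
  // A rare term (in 2 of 1,000 documents) scores much higher than a
  // frequent term (in 800 of 1,000 documents).
  print(inverseDocumentFrequency(1000, 2)); // ≈ 2.70
  print(inverseDocumentFrequency(1000, 800)); // ≈ 0.097
  // Jaccard index of two small sample sets: |{2, 3}| / |{1, 2, 3, 4}|.
  print(jaccardIndex({1, 2, 3}, {2, 3, 4})); // 0.5
}
```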
## References
- Manning, Raghavan and Schütze, "Introduction to Information Retrieval", Cambridge University Press, 2008
- University of Cambridge, "Information Retrieval", course notes, Dr Ronan Cummins, 2016
- Wikipedia (1), "Inverted Index", from Wikipedia, the free encyclopedia
- Wikipedia (2), "Lemmatisation", from Wikipedia, the free encyclopedia
- Wikipedia (3), "Stemming", from Wikipedia, the free encyclopedia
- Wikipedia (4), "Synonym", from Wikipedia, the free encyclopedia
- Wikipedia (5), "Jaccard Index", from Wikipedia, the free encyclopedia
- Wikipedia (6), "Flesch–Kincaid readability tests", from Wikipedia, the free encyclopedia
## Issues
If you find a bug please file an issue.
This project is a supporting package for a revenue project that has priority call on resources, so please be patient if we don't respond immediately to issues or pull requests.