Elasticsearch is a powerful open source search engine built on top of Apache
Lucene. You can run all kinds of customized searches on huge amounts of
data by creating customized indexes. This post gives an overview of
the analysis module of Elasticsearch.
Analyzers basically help you analyze your data. You need to analyze data while creating indexes and while searching. You can test your analyzers using the Analyze API provided by Elasticsearch.
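As a quick, minimal sketch (assuming Elasticsearch is running on the default localhost:9200), you can pass some text to the Analyze API and see the tokens the standard analyzer produces:

curl -XGET 'localhost:9200/_analyze?analyzer=standard&pretty' -d 'Learn Something New Today!'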
Analyzing data while creating indexes mainly involves three steps:
Pre-processing of raw text using char filters. This may be used to strip HTML tags, or you may define your own custom mapping. (I couldn't find a way to test char filters on their own using the Analyze API. Please put it in the comments if you know a way to test these through the Analyze API.)
Example: you could use a char filter of type html_strip to strip out HTML tags.
A text like this:
<b>Learn Something New Today!</b> which is always fun
would get converted to:
Learn Something New Today! which is always fun
Tokenization of the pre-processed text using tokenizers. Tokenizers break the pre-processed text into tokens. There are different kinds of tokenizers available and each of them breaks the text into words differently. By default, Elasticsearch uses the standard tokenizer.
The standard tokenizer normalizes the data. Note how it removes the ! from Today! in the example below.
A pre-processed text like this:
Learn Something New Today! which is always fun
gets broken into the tokens:
Learn, Something, New, Today, which, is, always, fun
You can check this for yourself using the Analyze API mentioned above.
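As a rough sketch (again assuming the default localhost:9200), the Analyze API also lets you run just a tokenizer over some text:

curl -XGET 'localhost:9200/_analyze?tokenizer=standard&pretty' -d 'Learn Something New Today! which is always fun'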
After tokenization, token filters perform further operations on the processed tokens, like converting them to lowercase or reversing tokens.
By default the standard token filter is used, which normalizes the tokens. After applying the lowercase token filter, tokens like these:
Learn, Something, New, Today, which, is, always, fun
get converted to:
learn, something, new, today, which, is, always, fun
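A similar sketch (default localhost:9200 assumed) combining the standard tokenizer with the lowercase token filter via the Analyze API:

curl -XGET 'localhost:9200/_analyze?tokenizer=standard&filters=lowercase&pretty' -d 'Learn Something New Today! which is always fun'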
Thus an analyzer is composed of char filters, a tokenizer and token filters. Analyzers define what kind of searches you can perform on your data.
You can index the same field in multiple ways and create your own custom char filters, tokenizers and token filters. You can have different analyzers for different indexes.
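For instance, here is a rough sketch of a multi_field mapping (the index name my_index, type tweet and field message are made up, and the index is assumed to already exist) where the same field is indexed once with the standard analyzer and once without any analysis:

curl -XPUT 'localhost:9200/my_index/tweet/_mapping' -d '{
  "tweet": {
    "properties": {
      "message": {
        "type": "multi_field",
        "fields": {
          "message": { "type": "string", "analyzer": "standard" },
          "raw": { "type": "string", "index": "not_analyzed" }
        }
      }
    }
  }
}'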
Let’s see it in action
The example below creates an index with html_strip as the char filter, standard as the tokenizer, and lowercase and standard as the token filters (i.e., filter):
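(This is a minimal sketch; the index and analyzer names my_index and my_analyzer are assumptions.)

curl -XPUT 'localhost:9200/my_index' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": ["html_strip"],
          "tokenizer": "standard",
          "filter": ["standard", "lowercase"]
        }
      }
    }
  }
}'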
You can analyze the text using:
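(The command below reuses the hypothetical my_index and my_analyzer from above, with a made-up snippet of HTML as input.)

curl -XGET 'localhost:9200/my_index/_analyze?analyzer=my_analyzer&pretty' -d '<b>Learn Something New Today!</b> which is always fun'

This should return the tokens learn, something, new, today, which, is, always and fun.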
The result above shows that the analyzer first stripped off the HTML tags and broke the text into words, and then converted them to lowercase.
Following the same procedure you can experiment with different kinds of
analyzers. Explore the different kinds of tokenizers and token filters at //www.elasticsearch.org/guide/reference/index-modules/analysis/
In future posts I will discuss how to build custom analyzers and other features of Elasticsearch like filters and facets.