I am a little bit confused about the following topic:
I have a scenario where my documents have fields in multiple languages and the search language is not known. If I remember correctly, this can be implemented in Lucene by using the correct analyzer, and ideally you use the same analyzer for both search and indexing. For example, in the past I have used an analyzer that selects the proper language-specific analyzer based on the field name.
But how would this work in my scenario? At query time it would always use the standard analyzer and therefore not apply the same stemming and stop-word filters that were applied at indexing time.
In Azure Cognitive Search, on the other hand (which is also built on Lucene), I do not define an analyzer when searching. It just uses the same analyzer that was used for indexing.
Hey there, it’s Elle from the Atlas Search product team. You can define a language analyzer per field, or use `multi` within the index definition if a single field contains multiple languages. Let me know if this answers your question!
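To illustrate, here is a minimal sketch of an Atlas Search index definition combining both approaches. The field names (`description_en`, `description_de`, `title`, and the `titleDe` sub-field) are hypothetical placeholders, not from the original thread:

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "description_en": {
        "type": "string",
        "analyzer": "lucene.english"
      },
      "description_de": {
        "type": "string",
        "analyzer": "lucene.german"
      },
      "title": {
        "type": "string",
        "analyzer": "lucene.english",
        "multi": {
          "titleDe": {
            "type": "string",
            "analyzer": "lucene.german"
          }
        }
      }
    }
  }
}
```

With `multi`, the `title` field is indexed once with the English analyzer and again with the German analyzer under the `titleDe` sub-field; a query can then target the alternate analysis via `path: { "value": "title", "multi": "titleDe" }`, so both index-time and query-time analysis stay consistent per language.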