Atlas Search with conditions: why is MUST faster than SHOULD, which in turn is faster than a simple search?

I am trying to implement Atlas Search on a collection with around 300 million documents. The collection structure looks like this -

{
    "_id": {
        "$oid": "6368ca3fcb0c042cbc5b198a"
    },
    "articleid": "159447148",
    "headline": "T20 World Cup: Zomato’s Epic ",
    "subtitle": "Response To ‘Cheat Day’ Remark Is Unmissable",
    "fulltext": "The trade began on October 21",
    "article_type": "online",
    "pubdate": "2022-11-07"
}

Now I am using MUST, SHOULD, and a simple search together, but the problem is that MUST runs much faster than SHOULD, which in turn is faster than the simple search.

Here are the queries -

MUST -

[
    {
        "$search": {
            "index": "fulltext",
            "compound": {
                "must": [
                    {
                        "text": {
                            "query": "AI",
                            "path": [
                                "headline",
                                "fulltext",
                                "subtitle"
                            ]
                        }
                    },
                    {
                        "text": {
                            "query": "OPENAI",
                            "path": [
                                "headline",
                                "fulltext",
                                "subtitle"
                            ]
                        }
                    }
                ]
            }
        }
    },
    {
        "$match": {
            "pubdate": {
                "$gte": "2023-01-28",
                "$lte": "2023-01-28"
            }
        }
    },
    {
        "$project": {
            "_id": 0,
            "articleid": 1
        }
    }
]

SHOULD -

[
    {
        "$search": {
            "index": "fulltext",
            "compound": {
                "should": [
                    {
                        "text": {
                            "query": "AI",
                            "path": [
                                "headline",
                                "fulltext",
                                "subtitle"
                            ]
                        }
                    },
                    {
                        "text": {
                            "query": "OPENAI",
                            "path": [
                                "headline",
                                "fulltext",
                                "subtitle"
                            ]
                        }
                    }
                ]
            }
        }
    },
    {
        "$match": {
            "pubdate": {
                "$gte": "2023-01-28",
                "$lte": "2023-01-28"
            }
        }
    },
    {
        "$project": {
            "_id": 0,
            "articleid": 1
        }
    }
]

SIMPLE Search -

[
    {
        "$search": {
            "index": "fulltext",
            "text": {
                "query": "OPENAI",
                "path": [
                    "headline",
                    "fulltext",
                    "subtitle"
                ]
            }
        }
    },
    {
        "$match": {
            "pubdate": {
                "$gte": "2023-01-28",
                "$lte": "2023-01-28"
            }
        }
    },
    {
        "$project": {
            "_id": 0,
            "articleid": 1
        }
    }
]

Having shown all three queries, I have two questions:

  1. Why is MUST faster than SHOULD, which in turn is faster than the simple (single-term) search? Is anything wrong with my simple search query?
  2. Before searching for a word with Atlas Search, why can't we filter the data first (e.g. on pubdate in the queries above) and then run the search? That way the search would run over fewer documents and return results faster, rather than searching all the data first and then matching on pubdate.

Hi @Utsav_Upadhyay2 and welcome to the MongoDB community forum!!

The MUST operator in Atlas Search works like the boolean AND operator, whereas the SHOULD operator works like the boolean OR operator.

For more details, you can visit the documentation on the compound operator in Atlas Search.
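To make the AND/OR distinction concrete, here is a minimal plain-JavaScript sketch. It is illustrative only: the in-memory documents and the substring-based `matchesTerm` helper are my own stand-ins, not Atlas APIs, and real Atlas Search tokenises text rather than substring-matching.

```javascript
// Hypothetical in-memory sketch of how compound clauses combine
// predicates: "must" ANDs them, "should" ORs them.
const docs = [
  { headline: "OPENAI raises funding", fulltext: "AI news" },
  { headline: "Cricket update", fulltext: "match report" },
  { headline: "AI tools roundup", fulltext: "nothing matches here" },
];

// Illustrative matcher (real Atlas Search analyses/tokenises instead)
const matchesTerm = (doc, term) =>
  Object.values(doc).some((v) => v.toLowerCase().includes(term.toLowerCase()));

// "must": every clause has to match (boolean AND)
const must = (doc, terms) => terms.every((t) => matchesTerm(doc, t));
// "should": at least one clause has to match (boolean OR)
const should = (doc, terms) => terms.some((t) => matchesTerm(doc, t));

const terms = ["AI", "OPENAI"];
console.log(docs.filter((d) => must(d, terms)).length);   // 1
console.log(docs.filter((d) => should(d, terms)).length); // 2
```

One common intuition for the speed difference you observed: an AND query only has to score the (usually smaller) intersection of the per-term matches, while an OR query has to visit and score the union, though the exact behaviour depends on the index and data.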
That said, could you help me understand how you measured the execution time for the three operators above without the $match and $project stages?

However, when I ran the three queries against the sample data above, they did not yield any results. To help further, could you share the index definition you are using for this collection?

For the date condition, you can use the compound operator's filter clause with the range operator. Please note that range on a date-typed field works with the ISODate() format, so you would need to convert pubdate to ISODate().
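For illustration, here is a plain-JavaScript sketch of the same idea. The docs array mirrors the sample document; converting the string with `new Date()` is the JS analogue of storing an ISODate, and the comparison mirrors what range with gte/lte would evaluate. This is a sketch under those assumptions, not Atlas code.

```javascript
// Sample documents with pubdate stored as a "YYYY-MM-DD" string
const docs = [
  { articleid: "1", pubdate: "2022-11-07" },
  { articleid: "2", pubdate: "2023-01-28" },
];

// Convert the string to a real Date (what ISODate() holds in mongosh)
const withDates = docs.map((d) => ({ ...d, date: new Date(d.pubdate) }));

// JS analogue of { range: { path: "date", gte: ..., lte: ... } }
const gte = new Date("2023-01-01T00:00:00.000Z");
const lte = new Date("2023-01-31T00:00:00.000Z");
const inRange = withDates.filter((d) => d.date >= gte && d.date <= lte);
console.log(inRange.map((d) => d.articleid)); // only articleid "2" is in range
```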

Let us know if you have any further questions.

Best Regards
Aasawari

@Aasawari, thank you so much for your quick response! My goal is to search keywords based on user input, so I need AND, OR, and single-term searches. I tried to find some examples and implement them, but without much success, as the query is slow.

I used explain() with executionStats to check the execution time.

Could you please share three separate example queries based on the document below: AND (which is MUST), OR (which is SHOULD), and a single-term search? I found the examples in the docs a little confusing.

I am using the fulltext Atlas Search index, and I am trying to search words across three fields: headline, fulltext, and subtitle.

{
    "_id": {
        "$oid": "6368ca3fcb0c042cbc5b198a"
    },
    "articleid": "159447148",
    "headline": "T20 World Cup: Zomato’s Epic ",
    "subtitle": "Response To ‘Cheat Day’ Remark Is Unmissable",
    "fulltext": "The trade began on October 21",
    "article_type": "online",
    "pubdate": "2022-11-07"
}

Hi @Utsav_Upadhyay2 and thank you for sharing the above information.

I tried the following queries with the compound operator.
Here is what the search index looks like:


{
  "mappings": {
    "dynamic": true,
    "fields": {
      "fulltext": [
        {
          "type": "string"
        },
        {
          "type": "autocomplete"
        }
      ],
      "headline": [
        {
          "type": "string"
        },
        {
          "type": "autocomplete"
        }
      ],
      "subtitle": [
        {
          "type": "string"
        },
        {
          "type": "autocomplete"
        }
      ]
    }
  }
}

Please note that the above index is created with minGrams set to 2 for the autocomplete fields; with that in mind, here are example queries for MUST and SHOULD.

Since MUST corresponds to the logical AND operator, the query below returns 0 documents: for the fulltext field, the partial term "trad" is not a token produced by the analyzer, so the second clause never matches.

[
  {
    '$search': {
      'index': 'logical', 
      'compound': {
        'must': [
          {
            'text': {
              'query': 'Response', 
              'path': 'subtitle'
            }
          }, {
            'text': {
              'query': 'trad', 
              'path': 'fulltext'
            }
          }
        ]
      }
    }
  }
]

A similar query with SHOULD, which is the OR operator, returns documents when at least one of the two conditions is true.

[
  {
    '$search': {
      'index': 'logical', 
      'compound': {
        'should': [
          {
            'text': {
              'query': 'Response', 
              'path': 'subtitle'
            }
          }, {
            'text': {
              'query': 'trad', 
              'path': 'fulltext'
            }
          }
        ]
      }
    }
  }
] 

Now, lastly for simple search:

[
  {
    '$search': {
      'index': 'logical', 
      'text': {
        'query': 'trade', 
        'path': 'fulltext'
      }
    }
  }
]

returns the document when the condition is satisfied.

Lastly, could you confirm whether the executionStats you mentioned cover each stage of the pipeline or only the $search stage?

Let us know if you have any further queries.

Best Regards
Aasawari


@Aasawari thank you so much for this solution. I was wondering whether we can use a date with a simple search, and also with must and should. If we add a date filter (or a must clause on the date with lte and gte) as the first stage of the compound, and then query the text, does that make the query faster compared with a normal text-only search? Could you please share the syntax for a date range (lte and gte) with must, where we search text after filtering by date?

Hi @Utsav_Upadhyay2

Could you help me with a few more details on what you are trying to achieve:

  1. Is the Date mentioned in your latest response the same as the pubdate field in your first post?
  2. If you wish to use the range operator, do you intend to convert the pubdate/Date field to the ISODate() format?
  3. Lastly, to help clarify the scenario, could you please provide a sample query (or the intended query) that you are trying to implement?

Regards
Aasawari


I have modified the document schema to add a date field -

{
    "_id" : ObjectId("63cf39df1d7798a846b2eb0e"),
    "articleid" : "9d23e3ab-9b7d-11ed-a650-b0227af59807",
    "headline" : "Microsoft to Put $10b More in ChatGPT Maker OpenAI",
    "subtitle" : "OpenAI needs Microsoft’s funding and cloud-computing power to run increasingly complex models",
    "fulltext" : "\nMS Adds $1()B to Investment in ChatGPT Maker",
    "pubdate" : "2023-01-24",
    "article_type" : "print",
    "date" : ISODate("2023-01-24T00:00:00.000+0000")
}

I am facing two major performance issues.

  1. I have 10 million records in the collection, and within the fields headline, subtitle, and fulltext I am trying to find words using operators like must, should, and a simple search.

  2. Whenever I run a must query it is always faster, returning results within 10 seconds no matter how big the query is, but a should query or a simple single-word search takes more than 50 seconds. Why?

Given data this large, I was thinking I should filter by date first and then run the full-text search; that way I could get results much faster.

Below are my sample queries -

Should -

db.getCollection("article_fulltext").aggregate([
    {
        "$search":{
            "index":"fulltext",
            "compound":{
                "filter":[
                    {
                        "range":{
                            "path":"date",
                            "gte": ISODate("2023-01-01T00:00:00.000Z"),
                            "lte": ISODate("2023-01-31T00:00:00.000Z")
                        }
                    }
                ],
                "should":[
                    {
                        "text":{
                            "query":"CHATGPT",
                            "path":[
                                "headline",
                                "fulltext",
                                "subtitle"
                            ]
                        }
                    },
                    {
                        "text":{
                            "query":"OPENAI",
                            "path":[
                                "headline",
                                "fulltext",
                                "subtitle"
                            ]
                        }
                    }
                ],"minimumShouldMatch": 1
            }
        }
    }
])

simple search -

db.getCollection("article_fulltext").aggregate([{
    $search:{
        index:"fulltext",
        text:{
            query:"Microsoft",
            path:["headline", "fulltext", "subtitle"]
        }
    }
}])

Atlas search Index -

{
  "mappings": {
    "dynamic": false,
    "fields": {
      "articleid": {
        "type": "string"
      },
      "fulltext": {
        "type": "string"
      },
      "headline": {
        "type": "string"
      },
      "subtitle": {
        "type": "string"
      }
    }
  }
}

I am facing issues with the above queries, since I only get really good performance with the must search!

Hi @Utsav_Upadhyay2,

Regarding the above, it seems like point 1 relates to your environment and use case rather than being a performance issue itself. That said, your concern appears to be performance, more specifically the comparison between the "Simple Search" and the "must" clause.

I noted that in your initial post you are doing a "simple search" with a single search term and comparing its performance to a "must" clause with two search terms. I do not think this is a fair comparison, as it would generally produce a different result set, i.e. a different number of documents being returned (or even different documents).

"must": [
  {
    "text": {
      "query": "CHATGPT",
      "path": [
        "headline",
        "fulltext",
        "subtitle"
      ]
    }
  },
  {
  "text": {
      "query": "OPENAI",
      "path": [
        "headline",
        "fulltext",
        "subtitle"
      ]
    }
  }
]

As @Aasawari mentioned, the must clause maps to the AND boolean operator. In the above, documents must contain both "CHATGPT" AND "OPENAI" in the specified paths to be returned. If the fields in a document contain only one of the two terms, it will not be returned. Below is a comparison of a simple search and the must clause run against 5 sample documents:

Using “simple search”:

db.collection.aggregate([
  {
    "$search": {
      "text": {
        "query": ["OPENAI", "CHATGPT"],
        "path": [
          "headline",
          "fulltext",
          "subtitle"
        ]
      }
    }
  }
])
/// OUTPUT (All 5 documents):
[
  {
    headline: 'CHATGPT and OPENAI is amazing',
    subtitle: 'OPENAI details here',
    fulltext: 'OPENAI is going to exist'
  },
  {
    headline: 'CHATGPT',
    subtitle: 'OPENAI',
    fulltext: 'nothing'
  },
  {
    headline: 'CHATGPT and OPENAI technology is amazing',
    subtitle: 'CHATGPT subtitle details here',
    fulltext: 'OPENAI is going to exist'
  },
  {
    headline: 'OPENAI only is amazing',
    subtitle: 'OPENAI details here',
    fulltext: 'OPENAI is going to exist'
  },
  {
    headline: 'CHATGPT only technology is amazing',
    subtitle: 'CHATGPT details here',
    fulltext: 'CHATGPT is going to exist'
  }
]

Compared to must operator:

db.collection.aggregate([
  {
    "$search": {
      "compound": {
        "must":[
          {
            "text": {
              "query": "CHATGPT",
              "path": [
                "headline",
                "fulltext",
                "subtitle"
              ]
            }
          },
          {
            "text": {
              "query": "OPENAI",
              "path": [
                "headline",
                "fulltext",
                "subtitle"
              ]
            }
          }
        ]
      }
    }
  }
])
/// OUTPUT (Only 3 documents containing both "OPENAI" AND "CHATGPT" in the specified paths.)
[
  {
    headline: 'CHATGPT and OPENAI is amazing',
    subtitle: 'OPENAI details here',
    fulltext: 'OPENAI is going to exist'
  },
  {
    headline: 'CHATGPT',
    subtitle: 'OPENAI',
    fulltext: 'nothing'
  },
  {
    headline: 'CHATGPT and OPENAI technology is amazing',
    subtitle: 'CHATGPT subtitle details here',
    fulltext: 'OPENAI is going to exist'
  }
]

In this case, the must example returns fewer documents than the "simple search" example I provided above.
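The same count difference can be reproduced with a few lines of plain JavaScript over those 5 sample documents (illustrative substring matching only, not Atlas's analyzer):

```javascript
// The 5 sample documents from the comparison above
const docs = [
  { headline: "CHATGPT and OPENAI is amazing", subtitle: "OPENAI details here", fulltext: "OPENAI is going to exist" },
  { headline: "CHATGPT", subtitle: "OPENAI", fulltext: "nothing" },
  { headline: "CHATGPT and OPENAI technology is amazing", subtitle: "CHATGPT subtitle details here", fulltext: "OPENAI is going to exist" },
  { headline: "OPENAI only is amazing", subtitle: "OPENAI details here", fulltext: "OPENAI is going to exist" },
  { headline: "CHATGPT only technology is amazing", subtitle: "CHATGPT details here", fulltext: "CHATGPT is going to exist" },
];

// Does any field of the document contain the term?
const has = (doc, term) => Object.values(doc).some((v) => v.includes(term));

// OR semantics (like the "simple search" with both terms) vs
// AND semantics (like the "must" clause)
const orMatches = docs.filter((d) => has(d, "OPENAI") || has(d, "CHATGPT"));
const andMatches = docs.filter((d) => has(d, "OPENAI") && has(d, "CHATGPT"));
console.log(orMatches.length, andMatches.length); // 5 3
```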

Lastly, I noticed your most recent index definition does not include the "date" field with the "date" data type. Is this expected, or a typo? I presume it is needed for the range filter against the "date" field, i.e.:

{
  "mappings": {
    "dynamic": false,
    "fields": {
      "articleid": {
        "type": "string"
      },
      "date": {           /// Date field added to index definition
        "type": "date"
      },
      "fulltext": {
        "type": "string"
      },
      "headline": {
        "type": "string"
      },
      "subtitle": {
        "type": "string"
      }
    }
  }
}

Apologies if my understanding is incorrect. If it is, could you perform a similar test and share more information so we can reproduce what you're seeing in a local testing environment, such as:

  1. how many documents each query returns,
  2. the document count for each term (so we can replicate the statistical properties of the collection), and
  3. the output of db.collection.stats()?

Regards,
Jason

@Jason_Tran thank you so much for this answer; I understand everything now. You asked three questions; here are the answers.

  1. I am returning all the result documents together, and the data in the collection covers one year.
  2. I really do not know the document count for each term; it depends entirely on the term used.

For the above two questions, I am considering a solution: filter by date first and then search for the term. You are right that I need to index the date field too and change the date format.

But could you please share the correct syntax for using a date with the search query, in a simple search and also with must and should?

  3. the output of stats():
{
    "ns" : "impact.article_fulltext",
    "size" : NumberLong(19509585533),
    "count" : 6393475.0,
    "avgObjSize" : 3051.0,
    "storageSize" : NumberLong(11533303808),
    "capped" : false,
    "wiredTiger" : {
        "metadata" : {
            "formatVersion" : 1.0
        },
        "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
        "type" : "file",
        "uri" : "statistics:table:collection-6-9203085737631202376",
        "LSM" : {
            "bloom filter false positives" : 0.0,
            "bloom filter hits" : 0.0,
            "bloom filter misses" : 0.0,
            "bloom filter pages evicted from cache" : 0.0,
            "bloom filter pages read into cache" : 0.0,
            "bloom filters in the LSM tree" : 0.0,
            "chunks in the LSM tree" : 0.0,
            "highest merge generation in the LSM tree" : 0.0,
            "queries that could have benefited from a Bloom filter that did not exist" : 0.0,
            "sleep for LSM checkpoint throttle" : 0.0,
            "sleep for LSM merge throttle" : 0.0,
            "total size of bloom filters" : 0.0
        },
        "block-manager" : {
            "allocations requiring file extension" : 108850.0,
            "blocks allocated" : 2487158.0,
            "blocks freed" : 2260525.0,
            "checkpoint size" : NumberLong(11488280576),
            "file allocation unit size" : 4096.0,
            "file bytes available for reuse" : 44888064.0,
            "file magic number" : 120897.0,
            "file major version number" : 1.0,
            "file size in bytes" : NumberLong(11533303808),
            "minor version number" : 0.0
        },
        "btree" : {
            "btree checkpoint generation" : 24811.0,
            "btree clean tree checkpoint expiration time" : NumberLong(9223372036854775807),
            "column-store fixed-size leaf pages" : 0.0,
            "column-store internal pages" : 0.0,
            "column-store variable-size RLE encoded values" : 0.0,
            "column-store variable-size deleted values" : 0.0,
            "column-store variable-size leaf pages" : 0.0,
            "fixed-record size" : 0.0,
            "maximum internal page key size" : 368.0,
            "maximum internal page size" : 4096.0,
            "maximum leaf page key size" : 2867.0,
            "maximum leaf page size" : 32768.0,
            "maximum leaf page value size" : 67108864.0,
            "maximum tree depth" : 5.0,
            "number of key/value pairs" : 0.0,
            "overflow pages" : 0.0,
            "pages rewritten by compaction" : 0.0,
            "row-store empty values" : 0.0,
            "row-store internal pages" : 0.0,
            "row-store leaf pages" : 0.0
        },
        "cache" : {
            "bytes currently in the cache" : 32718145.0,
            "bytes dirty in the cache cumulative" : NumberLong(106693214626),
            "bytes read into cache" : NumberLong(517832151735),
            "bytes written from cache" : NumberLong(92076373337),
            "checkpoint blocked page eviction" : 5947.0,
            "data source pages selected for eviction unable to be evicted" : 25887.0,
            "eviction walk passes of a file" : 3056723.0,
            "eviction walk target pages histogram - 0-9" : 2586282.0,
            "eviction walk target pages histogram - 10-31" : 290118.0,
            "eviction walk target pages histogram - 128 and higher" : 0.0,
            "eviction walk target pages histogram - 32-63" : 51760.0,
            "eviction walk target pages histogram - 64-128" : 128563.0,
            "eviction walks abandoned" : 54482.0,
            "eviction walks gave up because they restarted their walk twice" : 2535193.0,
            "eviction walks gave up because they saw too many pages and found no candidates" : 50588.0,
            "eviction walks gave up because they saw too many pages and found too few candidates" : 5148.0,
            "eviction walks reached end of tree" : 5268235.0,
            "eviction walks started from root of tree" : 2645976.0,
            "eviction walks started from saved location in tree" : 410747.0,
            "hazard pointer blocked page eviction" : 846.0,
            "in-memory page passed criteria to be split" : 574.0,
            "in-memory page splits" : 287.0,
            "internal pages evicted" : 114326.0,
            "internal pages split during eviction" : 3.0,
            "leaf pages split during eviction" : 110274.0,
            "modified pages evicted" : 2179959.0,
            "overflow pages read into cache" : 0.0,
            "page split during eviction deepened the tree" : 0.0,
            "page written requiring cache overflow records" : 135.0,
            "pages read into cache" : 11598349.0,
            "pages read into cache after truncate" : 0.0,
            "pages read into cache after truncate in prepare state" : 0.0,
            "pages read into cache requiring cache overflow entries" : 104.0,
            "pages requested from the cache" : 234609621.0,
            "pages seen by eviction walk" : 83939149.0,
            "pages written from cache" : 2471972.0,
            "pages written requiring in-memory restoration" : 288515.0,
            "tracked dirty bytes in the cache" : 4504688.0,
            "unmodified pages evicted" : 10935710.0
        },
        "cache_walk" : {
            "Average difference between current eviction generation when the page was last considered" : 0.0,
            "Average on-disk page image size seen" : 0.0,
            "Average time in cache for pages that have been visited by the eviction server" : 0.0,
            "Average time in cache for pages that have not been visited by the eviction server" : 0.0,
            "Clean pages currently in cache" : 0.0,
            "Current eviction generation" : 0.0,
            "Dirty pages currently in cache" : 0.0,
            "Entries in the root page" : 0.0,
            "Internal pages currently in cache" : 0.0,
            "Leaf pages currently in cache" : 0.0,
            "Maximum difference between current eviction generation when the page was last considered" : 0.0,
            "Maximum page size seen" : 0.0,
            "Minimum on-disk page image size seen" : 0.0,
            "Number of pages never visited by eviction server" : 0.0,
            "On-disk page image sizes smaller than a single allocation unit" : 0.0,
            "Pages created in memory and never written" : 0.0,
            "Pages currently queued for eviction" : 0.0,
            "Pages that could not be queued for eviction" : 0.0,
            "Refs skipped during cache traversal" : 0.0,
            "Size of the root page" : 0.0,
            "Total number of pages currently in cache" : 0.0
        },
        "compression" : {
            "compressed page maximum internal page size prior to compression" : 4096.0,
            "compressed page maximum leaf page size prior to compression " : 32768.0,
            "compressed pages read" : 11478869.0,
            "compressed pages written" : 2421005.0,
            "page written failed to compress" : 998.0,
            "page written was too small to compress" : 49969.0
        },
        "cursor" : {
            "bulk loaded cursor insert calls" : 0.0,
            "cache cursors reuse count" : 3060833.0,
            "close calls that result in cache" : 0.0,
            "create calls" : 2331.0,
            "insert calls" : 5473321.0,
            "insert key and value bytes" : NumberLong(5985885968),
            "modify" : 17575771.0,
            "modify key and value bytes affected" : NumberLong(64607112871),
            "modify value bytes modified" : 216058125.0,
            "next calls" : 40669361.0,
            "open cursor count" : 0.0,
            "operation restarted" : 10.0,
            "prev calls" : 1.0,
            "remove calls" : 0.0,
            "remove key bytes removed" : 0.0,
            "reserve calls" : 0.0,
            "reset calls" : 74299813.0,
            "search calls" : 77818541.0,
            "search near calls" : 24160712.0,
            "truncate calls" : 0.0,
            "update calls" : 0.0,
            "update key and value bytes" : 0.0,
            "update value size change" : 280870968.0
        },
        "reconciliation" : {
            "dictionary matches" : 0.0,
            "fast-path pages deleted" : 0.0,
            "internal page key bytes discarded using suffix compression" : 2170919.0,
            "internal page multi-block writes" : 6852.0,
            "internal-page overflow keys" : 0.0,
            "leaf page key bytes discarded using prefix compression" : 0.0,
            "leaf page multi-block writes" : 114718.0,
            "leaf-page overflow keys" : 0.0,
            "maximum blocks required for a page" : 1.0,
            "overflow values written" : 0.0,
            "page checksum matches" : 38383.0,
            "page reconciliation calls" : 2236972.0,
            "page reconciliation calls for eviction" : 1443506.0,
            "pages deleted" : 0.0
        },
        "session" : {
            "object compaction" : 0.0
        },
        "transaction" : {
            "update conflicts" : 0.0
        }
    },
    "nindexes" : 3.0,
    "indexBuilds" : [

    ],
    "totalIndexSize" : 207925248.0,
    "indexSizes" : {
        "_id_" : 76206080.0,
        "pubdate_1" : 36888576.0,
        "articleid_1" : 94830592.0
    },
    "scaleFactor" : 1.0,
    "ok" : 1.0,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1676273747, 38),
        "signature" : {
            "hash" : BinData(0, "xBHdkvvZezlmwL1Bh0naumwr0kM="),
            "keyId" : NumberLong(7155578345936125954)
        }
    },
    "operationTime" : Timestamp(1676273747, 38)
}

It depends on the terms you're searching for and the output you expect. Aasawari provided some examples of this earlier. However, based on the "simple search" you provided previously, take a look at the example below, which adds a filter on your date range to that "simple search".

A similar search with date filtering (taken from your other example) is possible with compound and the must clause:

db.getCollection("article_fulltext").aggregate([
    {
        "$search":{
            "index":"fulltext",
            "compound":{
                "filter":[
                    {
                        "range":{
                            "path":"date",
                            "gte": ISODate("2023-01-01T00:00:00.000Z"),
                            "lte": ISODate("2023-01-31T00:00:00.000Z")
                        }
                    }
                ],
                "must":[
                    { 
                        text: {
                            query: "Microsoft",
                            path: ["headline", "fulltext", "subtitle"]
                        }
                    }
                ]
            }
        }
    }
])

I haven't tested the above, so try it out and see how you go. Again, it will depend on the search terms used and your expected output, so I am unable to provide the exact queries you're after; you'll need to adjust them to your use case and search terms.

Please refer to the Atlas Search documentation for more examples.

Regards,
Jason