Is there a better way to do this aggregation?

Hi,
I have two collections, 'pages' and 'pageVersions', with the following schemas:

pages: {
    title: string,
    htmlContent:string,
    timestamp:Date,
}
pageVersions: {
    author: string,
    versions: ObjectID[], // ids of documents in the pages collection
    published_idx: number, // index into versions pointing at the published version of the page
}

I want to join both collections and create a flattened representation of each published page:

publishedPages: [{
    author: string,
    title: string,
    htmlContent: string,
    timestamp: Date
}], // where title = lookup( localField: versions[published_idx], foreignField: _id, from: pages ), ... etc.
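To make the target shape concrete, here is a hypothetical input pair and the flattened result I'm after (all field values invented for illustration; real _id values would be ObjectIds, plain strings are used here for brevity):

```javascript
// Hypothetical sample data (invented for illustration).
const page = {
  _id: 'p2', // ObjectId in practice; a string here for brevity
  title: 'Release notes',
  htmlContent: '<p>v2</p>',
  timestamp: new Date('2020-01-02'),
};
const pageVersion = {
  author: 'alice',
  versions: ['p1', 'p2', 'p3'],
  published_idx: 1, // versions[1] === 'p2' is the published page
};
// Desired flattened output for this pair:
const publishedPage = {
  author: 'alice',
  title: 'Release notes',
  htmlContent: '<p>v2</p>',
  timestamp: new Date('2020-01-02'),
};
```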

My current solution (using the Node.js driver):

const opResult = await pageVersions.aggregate([
  {
    $set: {
      publishedDoc: {
        $arrayElemAt: ['$versions', '$published_idx'],
      }
    }
  },
  {
    $lookup: {
      from: 'pages',
      localField: 'publishedDoc',
      foreignField: '_id',
      as: 'page'
    }
  },
  {
    $set: {
      firstPage: {
        $arrayElemAt: ['$page', 0]
      },
    }
  },
  {
    $set: {
      title: '$firstPage.title',
      htmlContent: '$firstPage.htmlContent',
      timestamp: '$firstPage.timestamp',
    }
  },
  {
    $unset: ['page', 'publishedDoc', 'published_idx', 'versions', 'firstPage'],
  }
]).toArray()

It seems to work. However, I wonder if it could be done better. The pipeline has two stages that do nothing but set temporary values, and my question is whether I need them at all. Also, it seems that I can't use dot notation with a numeric index: '$page.0.title' returns null, even though the documentation seems to indicate that it should work.
MongoDB version 4.2
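For reference, this is the condensed two-stage variant I was hoping is possible, inlining the temporaries into the $lookup and a final $project (untested sketch; it relies on the let/pipeline form of $lookup, available since MongoDB 3.6, so it should be usable on 4.2):

```javascript
// Two-stage pipeline: the temporary fields become inline expressions.
const pipeline = [
  {
    // Compute the published ObjectId inline and join against pages.
    $lookup: {
      from: 'pages',
      let: { publishedDoc: { $arrayElemAt: ['$versions', '$published_idx'] } },
      pipeline: [{ $match: { $expr: { $eq: ['$_id', '$$publishedDoc'] } } }],
      as: 'page',
    },
  },
  {
    // '$page.title' resolves to an array of titles (page is an array),
    // so take its first element instead of indexing with dot notation.
    $project: {
      author: 1,
      title: { $arrayElemAt: ['$page.title', 0] },
      htmlContent: { $arrayElemAt: ['$page.htmlContent', 0] },
      timestamp: { $arrayElemAt: ['$page.timestamp', 0] },
    },
  },
];
// const publishedPages = await pageVersions.aggregate(pipeline).toArray();
```

An alternative to the $arrayElemAt projections would be a $unwind on 'page' followed by $replaceRoot with $mergeObjects, but I'm not sure which is more idiomatic.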