

Workload Isolation for More Scalability and Availability: Search Nodes Now on Google Cloud

Today we’re excited to take the next step in bringing scalable, dedicated architecture to your search experiences with the introduction of Atlas Search Nodes, now in public preview for Google Cloud. This post is also available in: Deutsch, Français, Español, Português, Italiano, 한국어, 简体中文.

Since our initial announcement of Search Nodes in June of 2023, we’ve been rapidly expanding access to this most scalable dedicated architecture, starting with general availability on AWS and now extending to public preview on Google Cloud. We'd like to give you a bit more context on what Search Nodes are and why they're important to any search experience running at scale.

Search Nodes provide dedicated infrastructure for Atlas Search and Atlas Vector Search workloads, giving you even greater control over those workloads. They also allow you to isolate and optimize compute resources so that search and database needs scale independently, delivering better performance at scale and higher availability.

One of the last things developers want to deal with when building and scaling apps is infrastructure problems. Any downtime or poor user experience can mean lost users or revenue, especially when it comes to your database and search experience. This is one of the reasons developers turn to MongoDB: the ease of having one unified system for your database and search solution. With the introduction of Atlas Search Nodes, we’ve taken the next step in giving builders ultimate control, letting them stay flexible by scaling search workloads without over-provisioning the database.

By isolating your search and database workloads while automatically keeping your search cluster data synchronized with operational data, Atlas Search and Atlas Vector Search eliminate the need to run a separate ETL tool, which takes time and effort to set up and is yet another point of failure for your scaling app. This delivers superior performance and higher availability while reducing architectural complexity and the engineering time wasted recovering from sync failures. In fact, we’ve seen a 40% to 60% decrease in query time for many complex queries, while eliminating the chances of resource contention or downtime.

With just a quick button click, Search Nodes on Google Cloud offer our existing Atlas Search and Vector Search users the following benefits:

- Higher availability
- Increased scalability
- Workload isolation
- Better performance at scale
- Improved query performance

We offer both compute-heavy search-specific nodes for relevance-based text search and a memory-optimized option that is ideal for semantic and retrieval-augmented generation (RAG) production use cases with Atlas Vector Search. This makes resource contention and availability issues a thing of the past.

Search Nodes are easy to opt into and set up. To start, jump into the MongoDB UI and follow these steps:

1. Navigate to the “Database Deployments” section in the MongoDB UI.
2. Click the green “+Create” button.
3. On the “Create New Cluster” page, select Google Cloud and enable the “Multi-cloud, multi-region & workload isolation” radio button.
4. Toggle the “Search Nodes for workload isolation” radio button to enable it.
5. Select the number of nodes in the text box.
6. Check the agreement box.
7. Click “Create cluster.”

For existing Atlas Search users, click “Edit Configuration” in the MongoDB Atlas Search UI and enable the toggle for workload isolation; the remaining steps are the same as above. Jump straight into our docs to learn more!
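For a sense of the workloads these nodes serve, here is a minimal sketch (not taken from this announcement) of the kind of Atlas Vector Search aggregation that memory-optimized Search Nodes are designed to handle; the database, collection, index, and field names are hypothetical placeholders.

// Minimal sketch (Node.js driver): a $vectorSearch query of the kind served
// by dedicated Search Nodes. Database, collection, index, and field names
// are hypothetical placeholders.
const { MongoClient } = require("mongodb");

async function semanticSearch(queryVector) {
  const client = new MongoClient(process.env.MONGODB_URI);
  try {
    const articles = client.db("demo").collection("articles");
    return await articles.aggregate([
      {
        $vectorSearch: {
          index: "article_embeddings",   // hypothetical vector index name
          path: "embedding",             // field holding the stored vectors
          queryVector,                   // embedding of the user's query
          numCandidates: 200,            // candidates considered before ranking
          limit: 10                      // results returned
        }
      },
      { $project: { title: 1, score: { $meta: "vectorSearchScore" } } }
    ]).toArray();
  } finally {
    await client.close();
  }
}

With Search Nodes enabled, queries like this run on the dedicated search infrastructure rather than competing with operational reads and writes on the database nodes.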

March 28, 2024


Building AI With MongoDB: How DevRev is Redefining CRM for Product-Led Growth

OneCRM from DevRev is purpose-built for Software-as-a-Service (SaaS) companies. It brings together previously separate customer relationship management (CRM) suites for product management, support, and software development. Built on a foundation of customizable large language models (LLMs), data engineering, analytics, and MongoDB Atlas, it connects end users, sellers, support, product owners, and developers. OneCRM converges multiple discrete business apps and teams onto a common platform. As the company states on its website, “Our mission is to connect makers (Dev) to customers (Rev). When every employee adopts a ‘product-thinking’ mindset, customer-centricity transcends from a department to become a culture.”

DevRev was founded in October 2020 and raised over $85 million in seed funding from investors such as Khosla Ventures and Mayfield. At the time, this made it the largest seed round in the history of Silicon Valley. The company is led by its co-founder and CEO, Dheeraj Pandey, who was previously the co-founder and CEO of Nutanix, and by Manoj Agarwal, DevRev's co-founder and former SVP of Engineering at Nutanix. DevRev is headquartered in Palo Alto and has offices in seven global locations. Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

CRM + AI: Digging into the stack

DevRev’s Support and Product CRM serve over 4,500 customers:

- Support CRM brings support staff, product managers, and developers onto an AI-native platform to automate Level 1 (L1), assist L2, and elevate L3 to become true collaborators.
- Product CRM brings product planning, software work management, and product 360 together so product teams can assimilate the voice of the customer in real time.

Figure 1: DevRev’s real-time dashboards empower product teams to detect at-risk customers, monitor product health, track development velocity, and more.

AI is central to both the Support and Product CRMs. The company’s engineers build and run their own neural networks, fine-tuned with application data managed by MongoDB Atlas. This data is also encoded by open-source embedding models and used alongside OpenAI models for customer support chatbots and question-answering tasks orchestrated by autonomous agents. MongoDB partner LangChain is used to call the models, while also providing a layer of abstraction that frees DevRev engineers to switch effortlessly between different generative AI models as needed. Data flows across DevRev’s distributed microservices estate and into its AI models are powered by MongoDB change streams. Downstream services are notified in real time of any data changes using a fully reactive, event-driven architecture.

MongoDB Atlas: AI-powered CRM on an agile and trusted data platform

MongoDB is the primary database backing OneCRM, managing users, customer and product data, tickets, and more. DevRev selected MongoDB Atlas from the very outset of the company. The flexibility of its data model, freedom to run anywhere, reliability and compliance, and operational efficiency of the Atlas managed service all affect how quickly DevRev can build and ship high-quality features to its customers.

The flexibility of the document data model enables DevRev’s engineers to handle the massive variety of data structures their microservices need to work with. Documents are large, and each can have many custom fields. To store, index, and query this data efficiently, developers use MongoDB’s Attribute pattern and have the flexibility to add, modify, and remove fields at any time.
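As a rough illustration of the Attribute pattern mentioned above, variable custom fields can be modeled as an array of key/value subdocuments covered by a single compound index. The collection and field names below are hypothetical, not DevRev's actual schema.

// Hypothetical sketch of MongoDB's Attribute pattern (mongosh syntax):
// variable custom fields live in one {k, v} array instead of as ad-hoc
// top-level fields, so a single index covers queries on any of them.
db.tickets.insertOne({
  title: "Checkout page returns 500",
  severity: "high",
  customFields: [
    { k: "region", v: "EMEA" },
    { k: "plan", v: "enterprise" },
    { k: "slaHours", v: 4 }
  ]
});

// One compound index serves lookups on any custom field name/value pair.
db.tickets.createIndex({ "customFields.k": 1, "customFields.v": 1 });

// Find tickets for enterprise-plan customers, whatever other fields exist.
db.tickets.find({
  customFields: { $elemMatch: { k: "plan", v: "enterprise" } }
});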
The freedom to run MongoDB anywhere helps the engineering team develop, test, and release faster. Developers can experiment locally, then move to integration testing, and then production — all running in different environments — without changing a single line of code. This is core to DevRev’s velocity in handling over 4,000 pull requests per month:

- Developers can experiment and test with MongoDB on local instances — for example adding indexes or evaluating new query operators, enabling them to catch issues earlier in the development cycle.
- Once unit tests are complete, developers can move to temporary instances in Docker containers for end-to-end integration testing.
- When ready, teams can deploy to production in MongoDB Atlas.

The multi-cloud architecture of Atlas provides flexibility and choice that proprietary offerings from the hyperscalers can’t match. While DevRev today runs on AWS, in the early days of the company they evaluated multiple cloud vendors. Knowing that MongoDB Atlas could run anywhere gave them the confidence to choose the platform, knowing they would not be locked into that choice in the future.

“With MongoDB Atlas, our development velocity is 3-4x higher than if we used alternative databases. We can get our innovations to market faster, providing our customers with even more modern and useful CRM solutions.”
Anshu Avinash, Founding Engineer, DevRev

The HashiCorp Terraform MongoDB Atlas Provider automates infrastructure deployments by making it easy to provision, manage, and control Atlas configurations as code. “The automation provided by Atlas and Terraform means we’ve avoided having to hire a dedicated infrastructure engineer for our database layer,” says Anshu. “This is a savings we can redirect into adding developers to work on customer-facing features.”

Figure 2: The reactive, event-driven microservices architecture underpinning DevRev’s AI-powered CRM platform

Anshu goes on to say, “We have a microservices architecture where each microservice manages its own database and collections. By using MongoDB Atlas, we have little to no management overhead. We never even look at minor version upgrades, which Atlas does for us in the background with zero downtime. Even the major version upgrades do not require any downtime, which is pretty unique for database systems.”

Discussing scalability, Anshu says, “As the business has grown, we have been able to scale Atlas, again without downtime. We can move between instance and cluster sizes as our workloads expand, and with auto-storage scaling, we don’t need to worry about disks getting full.”

DevRev manages critical customer data, and so relies on MongoDB Atlas’ native encryption and backup for data protection and regulatory compliance. The ability to provide multi-region databases in Atlas means global customers get further control over data residency, latency, and high availability requirements. Anshu adds, “We also have the flexibility to use MongoDB’s native sharding to scale out the workloads of our largest customers with complete tenant isolation.”

DevRev is redefining the CRM market through AI, with MongoDB Atlas playing a critical role as the company’s data foundation. You can learn more about how innovators across the world are using MongoDB by reviewing our Building AI case studies. If your team is building AI apps, sign up for the AI Innovators Program. Successful companies get access to free Atlas credits and technical enablement, as well as connections into the broader AI ecosystem.
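For readers curious how the reactive, event-driven flow shown in Figure 2 typically maps to code, here is a minimal change stream sketch with the Node.js driver; the database, collection, and filter are illustrative assumptions, not DevRev's implementation.

// Minimal change stream sketch (Node.js driver): a downstream service reacts
// to inserts in real time. Names and the filter are illustrative only.
const { MongoClient } = require("mongodb");

async function watchTickets() {
  const client = new MongoClient(process.env.MONGODB_URI);
  const tickets = client.db("crm").collection("tickets");

  // Only surface newly inserted high-severity tickets to downstream consumers.
  const pipeline = [
    { $match: { operationType: "insert", "fullDocument.severity": "high" } }
  ];

  const stream = tickets.watch(pipeline, { fullDocument: "updateLookup" });
  for await (const event of stream) {
    // A real service would publish this to a queue or call another microservice.
    console.log("New high-severity ticket:", event.fullDocument._id);
  }
}

watchTickets().catch(console.error);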

March 27, 2024

Fireworks AI and MongoDB: The Fastest AI Apps with the Best Models, Powered By Your Data

We’re happy to announce that Fireworks AI and MongoDB are now partnering to make innovating with generative AI faster, more efficient, and more secure. Fireworks AI was founded in late 2022 by industry veterans from Meta’s PyTorch team, where they focused on performance optimization, improving the developer experience, and running AI apps at scale. It’s this expertise that Fireworks AI brings to its production AI platform, curating and optimizing the industry's leading open models. Benchmarking by the company shows gen AI models running on Fireworks AI deliver up to 4x faster inference speeds than alternative platforms, with up to 8x higher throughput and scale.

Models are one part of the application stack. But for developers to unlock the power of gen AI, they also need to bring enterprise data to those models. That’s why Fireworks AI has partnered with MongoDB, addressing one of the toughest challenges to adopting AI. With MongoDB Atlas, developers can securely unify operational data, unstructured data, and vector embeddings to safely build consistent, correct, and differentiated AI applications and experiences. Jointly, Fireworks AI and MongoDB provide a solution for developers who want to leverage highly curated and optimized open-source models, and combine these with their organization’s own proprietary data — and to do it all with unparalleled speed and security.

Lightning-fast models from Fireworks AI: Enabling speed, efficiency, and value

Developers can choose from many different models to build their gen AI-powered apps. Navigating the AI landscape to identify the most suitable models for specific tasks — and tuning them to achieve the best levels of price and performance — is complex and creates friction in building and running gen AI apps. This is one of the key pain points that Fireworks AI alleviates. With its lightning-fast inference platform, Fireworks AI curates, optimizes, and deploys 40+ different AI models. These optimizations can simultaneously result in significant cost savings, reduced latency, and improved throughput. Their platform delivers this via:

- Off-the-shelf models, optimized models, and add-ons: Fireworks AI provides a collection of top-quality text, embedding, and image foundation models. Developers can leverage these models or fine-tune and deploy their own, pairing them with their own proprietary data using MongoDB Atlas.
- Fine-tuning capabilities: To further improve model accuracy and speed, Fireworks AI also offers a fine-tuning service using its CLI to ingest JSON-formatted objects from databases such as MongoDB Atlas.
- Simple interfaces and APIs for development and production: The Fireworks AI playground allows developers to interact with models right in a browser. It can also be accessed programmatically via a convenient REST API. This is OpenAI API-compatible and thus interoperates with the broader LLM ecosystem.
- Cookbook: A simple and easy-to-use cookbook provides a comprehensive set of ready-to-use recipes that can be adapted for various use cases, including fine-tuning, generation, and evaluation.

Fireworks AI and MongoDB: Setting the standard for AI with curated, optimized, and fast models

With Fireworks AI and MongoDB Atlas, apps run in isolated environments ensuring uptime and privacy, protected by sophisticated security controls that meet the toughest regulatory standards:

- As one of the top open-source model API providers, Fireworks AI serves 66 billion tokens per day (and growing).
- With Atlas, you run your apps on a proven platform that serves tens of thousands of customers, from high-growth startups to the largest enterprises and governments.

Together, the Fireworks AI and MongoDB joint solution enables:

- Retrieval-augmented generation (RAG) or Q&A from a vast pool of documents: Ingest a large number of documents to produce summaries and structured data that can then power conversational AI.
- Classification through semantic/similarity search: Classify and analyze concepts and emotions from sales calls, video conferences, and more to provide better intelligence and strategies. Or, organize and classify a product catalog using product images and text.
- Images to structured data extraction: Extract meaning from images to produce structured data that can be processed and searched in a range of vision apps — from stock photos, to fashion, to object detection, to medical diagnostics.
- Alert intelligence: Process large amounts of data in real time to automatically detect and alert on instances of fraud, cybersecurity threats, and more.

Figure 1: The Fireworks tutorial showcases how to bring your own data to LLMs with retrieval-augmented generation (RAG) and MongoDB Atlas

Getting started with Fireworks AI and MongoDB Atlas

To help you get started, review the Optimizing RAG with MongoDB Atlas and Fireworks AI tutorial, which shows you how to build a movie recommendation app and involves:

- A MongoDB Atlas database that indexes movies using embeddings. (Vector Store)
- A system for document embedding generation. We'll use the Fireworks embedding API to create embeddings from text data. (Vectorisation)
- MongoDB Atlas Vector Search, which responds to user queries by converting the query to an embedding and fetching the corresponding movies. (Retrieval Engine)
- The Mixtral model, which uses the Fireworks inference API to generate the recommendations. You can also use Llama, Gemma, and other great OSS models if you like. (LLM)
- Loading the MongoDB Atlas sample Mflix dataset to generate embeddings. (Dataset)

A minimal code sketch of this retrieval flow appears at the end of this post.

We can also help you design the best architecture for your organization’s needs. Feel free to connect with your account team or contact us here to schedule a collaborative session and explore how Fireworks AI and MongoDB can optimize your AI development process.
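As promised above, here is a minimal sketch of the retrieval step under stated assumptions: the Fireworks endpoint is assumed to be its OpenAI-compatible API at https://api.fireworks.ai/inference/v1, and the embedding model, vector index, and field names are placeholders rather than values taken from the tutorial.

// Hypothetical sketch: embed a user query with Fireworks' OpenAI-compatible
// embeddings endpoint, then retrieve similar movies with Atlas Vector Search.
// Endpoint, model, index, and field names are assumptions, not tutorial values.
const { MongoClient } = require("mongodb");

async function recommendMovies(query) {
  // 1. Turn the query into an embedding via the Fireworks API.
  const res = await fetch("https://api.fireworks.ai/inference/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.FIREWORKS_API_KEY}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      model: "nomic-ai/nomic-embed-text-v1.5", // placeholder embedding model
      input: query
    })
  });
  const queryVector = (await res.json()).data[0].embedding;

  // 2. Find the closest movies in the sample Mflix dataset with $vectorSearch.
  const client = new MongoClient(process.env.MONGODB_URI);
  try {
    return await client.db("sample_mflix").collection("movies").aggregate([
      {
        $vectorSearch: {
          index: "movie_embeddings",   // placeholder vector index name
          path: "plot_embedding",      // placeholder field holding embeddings
          queryVector,
          numCandidates: 150,
          limit: 5
        }
      },
      { $project: { title: 1, plot: 1 } }
    ]).toArray();
  } finally {
    await client.close();
  }
}

The generation step would then pass the retrieved plots to a Fireworks-hosted model such as Mixtral as context for producing the recommendation.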

March 26, 2024

Architecting Success as a Woman in Tech

Celia Halenke, Solutions Architect at MongoDB, shares insight into the skills, experiences, and aspirations that shape her MongoDB journey in the dynamic world of technology. Plus, learn about her advice for teams wanting to build more inclusive environments for women in tech sales.

Mastering the balance: Technical and communication skills

In my role, it's all about having a strong blend of technical skills and effective communication. To succeed as a pre-sales Solutions Architect, you need to blend both seamlessly. I’ve had to learn the ins and outs of MongoDB's technology, but equally important is grasping the unique challenges and objectives my clients face. This allows me to craft solutions that are not just tailored but perfectly aligned with their needs. Communication is just as important. From running demos to conducting workshops with people from diverse backgrounds, clear and concise communication is a must. It's not just about showcasing the technology; it's about ensuring everyone is on the same page. Team collaboration is another vital aspect of my role. Working closely with sales reps, CSMs, product managers, and engineers requires building strong relationships. These connections are not just essential for success but play a significant role in personal growth.

Celia and team members

Fostering inclusivity in tech

Being a woman in tech, I can't stress enough the importance of seeing more women in leadership roles. It's not just about breaking stereotypes; it's about having role models who inspire and motivate. That's why promoting women into leadership is crucial. Mentorship and leadership programs specifically designed for women can make a significant impact, providing the support and guidance needed to thrive in the historically male-dominated tech industry. I'm proud to be part of MongoDB, where employee resource groups for women and other communities create a supportive environment. More companies should consider implementing similar initiatives to foster inclusivity and provide platforms for sharing experiences.

Celebrating success

One of the highlights of my journey at MongoDB has been working closely with the Product Led Sales team. They have recognized me for my efforts for two consecutive quarters, which is a testament to the trust and collaboration I’ve built within the team. It feels really good! Knowing that my work is valued and appreciated motivates me to keep pushing boundaries. I encourage women to make time to celebrate their accomplishments.

The joys of customer interaction

What I love most about my customer-facing role is the direct interaction with our customers. Understanding their projects and the problems they aim to solve, and then offering them the perfect MongoDB Atlas feature, brings me immense satisfaction. Recently, I had the opportunity to visit clients on-site during a business trip to Latin America. I enjoyed this experience and it changed my perspective on customer interactions: though not as quick as hopping on a video conference, in-person sessions are some of the most engaging.

Celia in Latin America

Aspirations and future growth

Looking ahead, my goal is to continue growing as a Solutions Architect at MongoDB! Embracing the evolving challenges of my role allows me to constantly learn and enhance my communication and technical skills. I aspire to work with larger customers, witnessing firsthand the positive impact MongoDB's applications can have on people's lives. As I gather more experience, I'm eager to take on a leadership role, guiding others in their MongoDB journeys.

My journey at MongoDB is a testament to the ever-evolving landscape of technology, where success is not just about technical expertise but also about building meaningful connections, fostering inclusivity, and celebrating every milestone along the way. Learn more about Sell Like a Girl and MDBWomen, Employee Resource Groups supporting a community of women around the world at MongoDB.

March 26, 2024

AI-powered SQL Query Converter Tool is Now Available in Relational Migrator

When I traveled to Japan for the first time, it was shortly after translation apps on smartphones had really taken off. Even though I knew enough phrases to get by as a tourist, I was amazed at how empowered I was by being able to have smoother conversations and read signs more easily. The power of AI helped me understand a language I had only a passing familiarity with and drastically improved my experience in another country. I was able to spend more time enjoying myself and less time looking up common words and sentences in a phrase book.

So what does this have to do with application modernization? Transitioning from relational databases as part of a modernization effort is more than migrating data from a legacy database to a modern one. There is all the planning, designing, testing, refactoring, validating, and ongoing operation that makes modernization efforts a complex project to navigate successfully. MongoDB’s free Relational Migrator tool has helped with many of these tasks, including schema design, data migration, and code generation, but we know this is just the beginning.

One of the most common challenges of migrating legacy applications to MongoDB is working with SQL queries, triggers, and stored procedures that are often undocumented and must be manually converted to MongoDB Query API syntax. This requires deep knowledge of both SQL and the MongoDB Query API, which is rare if teams are used to only one system or the other. In addition, teams often have hundreds, if not thousands, of queries, triggers, and stored procedures that must be converted, which is extremely time-consuming and tedious. Doing these conversions manually would be like traveling abroad and looking up each phrase one by one in a phrase book instead of using a translation app.

Thankfully, with generative AI we are finally able to offer the modern version of the translation app on your phone. The latest release of Relational Migrator uses generative AI to help your developers quickly convert existing SQL queries, triggers, and stored procedures to work with MongoDB using your choice of programming language (JavaScript, C#, or Java). By automating the generation of development-ready MongoDB queries, your team can be more efficient by redirecting their time to more important testing and optimization efforts — accelerating your migration project. Teams that are familiar with SQL can also use the Query Converter to help close their MongoDB knowledge gap. The SQL objects they're familiar with are translated, making it easier to learn the new syntax by seeing the two side by side.

Let’s take a closer look at how Query Converter can convert a SQL Server stored procedure to work with MongoDB.

Figure 1: The MongoDB Query Converter Dashboard

We’ll start by importing the stored procedure from the relational database into our Relational Migrator project. This particular stored procedure joins the results from two tables, performs some arithmetic on some of the columns, and filters the results based on an input parameter.
CREATE PROCEDURE CustOrdersDetail @OrderID int
AS
SELECT ProductName,
    UnitPrice = ROUND(Od.UnitPrice, 2),
    Quantity,
    Discount = CONVERT(int, Discount * 100),
    ExtendedPrice = ROUND(CONVERT(money, Quantity * (1 - Discount) * Od.UnitPrice), 2)
FROM Products P, [Order Details] Od
WHERE Od.ProductID = P.ProductID and Od.OrderID = @OrderID

Developers who are experienced with the MongoDB aggregation framework would know that the equivalent method to join data from two collections is to use the $lookup stage. However, when migrating a relational database to MongoDB, it often makes sense to consolidate data from multiple tables into a single collection. In this example, we are doing exactly that, by combining data from the Orders, Order Details, and Products tables into a single orders collection. This means that, when considering the changes to the schema, we do not actually need a $lookup stage at all, as the data from each of the required tables has already been merged into a single collection. Relational Migrator’s Query Converter works alongside the schema mapping functionality and automatically adjusts the generated query to work against your chosen schema.

With JavaScript chosen as our target language, the converted query avoids the need for a costly join and includes MongoDB equivalents of our original SQL arithmetic functions. The query is now ready to test and include in our modernized app.

const CustOrdersDetail = async (db, OrderID) => {
  return await db.collection('orders').aggregate([
    { $match: { orderId: OrderID } },
    { $unwind: '$lineItems' },
    {
      $project: {
        ProductName: '$product.productName',
        UnitPrice: { $round: ['$lineItems.unitPrice', 2] },
        Quantity: '$lineItems.quantity',
        Discount: { $multiply: ['$lineItems.discount', 100] },
        ExtendedPrice: {
          $round: [
            {
              $multiply: [
                '$lineItems.quantity',
                { $subtract: [1, '$lineItems.discount'] },
                '$lineItems.unitPrice'
              ]
            },
            2
          ]
        }
      }
    }
  ]).toArray();
};

Relational Migrator does more than just query conversion: it also assists with app code generation, data modeling, and data migration, which drastically cuts down on the time and effort required to modernize your team's applications. Just like a language translation app while traveling abroad, it can drastically improve your experience converting and understanding a new language or technology. The new Query Converter tool is now available for free for anyone to try as part of a public preview in the Relational Migrator tool. Download Relational Migrator and try converting your SQL queries and stored procedures today.
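To round out the example, here is a small, hypothetical usage sketch that calls the converted function with the Node.js driver; the connection string, database name, and order ID are placeholders, not values from the migration project above.

// Hypothetical usage of the converted CustOrdersDetail function.
const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient(process.env.MONGODB_URI);
  try {
    const db = client.db("northwind");                   // placeholder database name
    const details = await CustOrdersDetail(db, 10248);   // placeholder order ID
    console.table(details);
  } finally {
    await client.close();
  }
}

main().catch(console.error);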

March 25, 2024