
Running an S3-compatible endpoint to host Ops Manager binaries in Kubernetes

This post is part of our ongoing series on running Ops Manager with the MongoDB Kubernetes Operator. Check out the related articles for more:

- Introducing MongoDB Ops Manager in Kubernetes
- Running MongoDB Ops Manager in Kubernetes

Recently, we announced the ability to deploy Ops Manager in Kubernetes and provided a step-by-step guide on how to set this up in your environment. Now we're back to show you how to use an exciting new feature we've released: the ability to run an S3-compatible or HTTP(S) endpoint to serve the MongoDB binaries used to deploy clusters. This allows you to set up Ops Manager on a Kubernetes cluster without access to the internet and without the need to manage mongod/mongos binaries on each pod within the Ops Manager deployment. We'll show you how to configure your endpoint, upload the correct binaries, and reconfigure the Ops Manager deployment to use files served from this endpoint.

Prerequisites

This guide assumes you've already set up and run Ops Manager and the MongoDB Enterprise Kubernetes Operator on your cluster, following the steps in the guide here. This will give you a deployed cluster that we can modify to use our new HTTP endpoint to retrieve the binaries needed to deploy and upgrade mongod/mongos instances. If you have an existing environment, you will need the following versions installed:

- Kubernetes API v1.16 or above for the MongoDB Operator, and v1.17 or above for the MinIO Operator
- MongoDB Enterprise Kubernetes Operator 1.6.0 or above
- MongoDB Ops Manager 4.4.0 or above

Choosing Where to Host Binaries

The only requirement from Ops Manager is an HTTP endpoint, so you can leverage an existing file server configuration within your organization if required. For this tutorial, we're going to use MinIO, a popular open source object storage platform that is optimized for private cloud deployments on containerized infrastructure such as Kubernetes, and used in many Fortune 500 companies. With MinIO, customers can manage their binaries with individual policies through rich programmatic and UI admin services that align with best practices for file management such as bitrot protection, encryption, and identity management.

Installing MinIO

Installing MinIO is similar to installing the MongoDB Kubernetes Operator. The MinIO documentation shows you how to deploy on your Kubernetes cluster, with many configurations to suit your organization's requirements. In this tutorial, we will use the Operator to manage the installation of your instance.

First, let's ensure the operator is installed and configured by running the following command on your cluster:

kubectl apply -k github.com/minio/operator

Now, let's create the MinIO instances to serve our binaries:

kubectl apply -f https://raw.githubusercontent.com/minio/operator/master/examples/tenant.yaml

This deployment will take a few moments. Let's verify that the instance deploys successfully:

kubectl get pods

NAME             READY   STATUS    RESTARTS   AGE
minio-zone-0-0   1/1     Running   0          6m14s
minio-zone-0-1   1/1     Running   0          6m14s
minio-zone-0-2   1/1     Running   0          6m14s
minio-zone-0-3   1/1     Running   0          6m14s

Once this is complete, we should also get a service name, which we will use in the Ops Manager configuration later:

kubectl get svc -l v1.min.io/tenant=minio

NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
minio-hl   ClusterIP   None         <none>        9000/TCP   48m
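Since the Ops Manager and database pods will later pull binaries from this service over plain HTTP, it can be worth confirming that the endpoint is reachable from inside the cluster before going further. The following is a minimal check rather than part of the original walkthrough; it assumes the minio-hl service shown above lives in your current namespace (adjust the service name, or add the namespace to the URL, if yours differ). MinIO's /minio/health/live endpoint should answer with an HTTP 200:

# Run a throwaway curl pod in the cluster and issue a HEAD request against the MinIO health endpoint.
kubectl run minio-check --rm -i --restart=Never --image=curlimages/curl --command -- \
  curl -sI http://minio-hl:9000/minio/health/live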
Creating Buckets With MongoDB Binaries and Database Tools

Now that we have a running instance of MinIO, we can populate it with binaries for use by the operator to deploy MongoDB instances.

The MongoDB Kubernetes Operator, as of v0.10, uses Ubuntu 16.04 MongoDB binaries, which can be downloaded from https://www.mongodb.org/dl/linux/x86_64-ubuntu1604. If Ops Manager will also be managing deployments, you may also wish to populate your buckets with versions for other architectures and platforms. A full list of the currently supported binaries can be retrieved from the version manifest of Ops Manager through the Version Manifest endpoint. For the purposes of this tutorial, we will just fetch versions 4.2.3 and 4.2.0 for use by the operator:

curl -O http://downloads.mongodb.org/linux/mongodb-linux-x86_64-ubuntu1604-4.2.3.tgz
curl -O http://downloads.mongodb.org/linux/mongodb-linux-x86_64-ubuntu1604-4.2.0.tgz

We'll also need the mongodb-database-tools package for the current version of Ops Manager. For the 4.4.0 release, this is v100.0.2:

curl -O https://fastdl.mongodb.org/tools/db/mongodb-database-tools-ubuntu1604-x86_64-100.0.2.tgz

Now, let's put the mongod files into a bucket named 'linux', and the database tools in a bucket named 'tools/db'. Both buckets will have read-only permissions for all users. We can do this with the mc MinIO command-line client or through the graphical UI that MinIO provides. For this tutorial, we will use the UI, but the command line would allow us to script this operation if required (see the sketch at the end of this section).

To access the UI, we will need to forward a connection to the MinIO service running on Kubernetes. In a terminal, run the following command:

kubectl port-forward svc/minio-hl-svc 9000:9000

You should now be able to access the MinIO dashboard at http://localhost:9000. The default credentials, from the public sample config we deployed earlier, are minio/minio123.

Once we log in, we can create a bucket named 'linux' through the + → Create bucket button in the bottom right of the interface. Once this is created, it will appear in the left menu. Select it and upload the binaries you downloaded earlier, again using the + → Upload file button in the bottom right.

Once all files are uploaded, you will need to allow read-only access to the bucket for anyone who has the link. This allows unauthenticated GET and HEAD HTTP requests to access the resources from our Ops Manager container. To enable read-only access, click the ... button on the 'linux' bucket and select Edit Policy. In the modal dialog, select Read Only and click Add, which will enable access.

At this point, we should be able to validate that the files can be downloaded. Running the following HEAD request should now return a successful result:

curl --head http://localhost:9000/linux/mongodb-linux-x86_64-ubuntu1604-4.2.3.tgz

HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 132149028
Content-Security-Policy: block-all-mixed-content
Content-Type: application/gzip
ETag: "397903bcabe29abd9d1258653270e76f-1"
Last-Modified: Tue, 04 Feb 2020 17:44:10 GMT
Server: MinIO/RELEASE.2020-01-03T19-12-21Z
Vary: Origin
X-Amz-Request-Id: 15F0DF60120863EB
X-Xss-Protection: 1; mode=block
Date: Thu, 06 Feb 2020 17:11:12 GMT

Our S3 buckets are now ready to serve binaries for the Ops Manager instance.
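For reference, the same bucket setup can be scripted with the mc client instead of the UI. This is a rough sketch rather than part of the original walkthrough; it assumes mc is installed locally, the port-forward from the previous step is still running, and the default minio/minio123 credentials are unchanged (the 'myminio' alias name is arbitrary):

# Register the port-forwarded MinIO endpoint under an alias.
mc config host add myminio http://localhost:9000 minio minio123

# Create the buckets and upload the archives downloaded earlier.
mc mb myminio/linux
mc mb myminio/tools
mc cp mongodb-linux-x86_64-ubuntu1604-4.2.3.tgz myminio/linux/
mc cp mongodb-linux-x86_64-ubuntu1604-4.2.0.tgz myminio/linux/
mc cp mongodb-database-tools-ubuntu1604-x86_64-100.0.2.tgz myminio/tools/db/

# Allow unauthenticated (read-only) downloads, matching the Edit Policy step above.
mc policy set download myminio/linux
mc policy set download myminio/tools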
Let's now configure Ops Manager to consume these resources.

Upgrade Ops Manager to 4.4.0 or Later

In the previous tutorial, we created the ops-manager.yaml configuration file, which we will now edit to include the following options:

spec:
  # This must be 4.4.0 or later to support the new HTTP endpoint as a source for binaries.
  version: 4.4.0
  configuration:
    automation.versions.source: remote
    # This sets the endpoint that will serve MongoDB binaries; it is the service name we retrieved earlier for our MinIO instance.
    automation.versions.download.baseUrl: http://minio-hl:9000

The updated configuration file should match the following:

apiVersion: mongodb.com/v1
kind: MongoDBOpsManager
metadata:
  name: ops-manager
  namespace: mongodb
spec:
  # the version of Ops Manager distro to use
  version: 4.4.0
  # the name of the secret containing admin user credentials
  adminCredentials: ops-manager-admin-secret
  configuration:
    automation.versions.source: remote
    automation.versions.download.baseUrl: http://minio-hl:9000
  externalConnectivity:
    type: LoadBalancer
  # the Replica Set backing Ops Manager
  # appDB has the SCRAM-SHA authentication mode always enabled
  applicationDatabase:
    members: 3
    version: 4.2.2

We can now update the running instance:

kubectl apply -f ops-manager.yaml

The deployment will take a few minutes to update, as it will need to redeploy the configuration with new pods. Check the status and ensure the Ops Manager resource reaches the "Running" phase:

kubectl get om -n mongodb

NAME          REPLICAS   VERSION   VERSION (DB)   STATE     STATE (DB)   AGE
ops-manager   1          4.4.0     4.2.2          Running   Running      13m

Now, let's connect to our Ops Manager instance through the UI dashboard. Get the load balancer address, as in the previous tutorial:

kubectl get svc ops-manager-svc-ext -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

Create a MongoDB Replica Set

At this point, everything should be in place to let us deploy MongoDB binaries using our MinIO instance. Let's verify this by creating a new project and replica set using the Kubernetes Operator.

First, let's create the replica set configuration. Create the file replica-set-s3-endpoint.yaml describing the new MongoDB resource, reusing the ops-manager-connection ConfigMap from the previous tutorial:

apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: replica-set-s3-endpoint
  namespace: mongodb
spec:
  members: 3
  version: 4.2.3
  type: ReplicaSet
  opsManager:
    configMapRef:
      name: ops-manager-connection
  credentials: ops-manager-admin-secret

Apply it to the Kubernetes cluster:

kubectl apply -f replica-set-s3-endpoint.yaml

Wait until the resource enters the Running state:

kubectl get mdb -n mongodb

NAME                         TYPE         STATE     VERSION   AGE
my-replica-set-s3-endpoint   ReplicaSet   Running   4.2.3     12m

Now, let's verify the deployment in Ops Manager. If we click the modify button and open the versions dropdown, we will see the listing is limited to 4.2.3 and 4.2.2, the versions we uploaded to the MinIO instance.

If a user configures a version that isn't stored on the MinIO instance, we will instead get a deployment failure. The MongoDB Agent logs will give a clear error about which file and location could not be found on the remote endpoint, allowing admins to quickly identify where the files should be uploaded. To test this, let's change the version to 4.2.1 in replica-set-s3-endpoint.yaml and deploy it using:

kubectl apply -f replica-set-s3-endpoint.yaml

In Ops Manager, a red deployment bar will be displayed for the deployment, and clicking the view agent logs link shows the corresponding download error from the agent.
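To recover from this state, you can either revert the version in the resource spec to one that exists in the bucket, or upload the missing archive so the agent can complete the change. The following is a sketch of the second option, under the same assumptions as the mc example above, and assuming the 4.2.1 Ubuntu 16.04 archive follows the same download URL pattern used earlier:

# Fetch the missing release and add it to the 'linux' bucket served to the agents.
curl -O http://downloads.mongodb.org/linux/mongodb-linux-x86_64-ubuntu1604-4.2.1.tgz
mc cp mongodb-linux-x86_64-ubuntu1604-4.2.1.tgz myminio/linux/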
Would you like to know more? Stay tuned for upcoming blog posts in this series, where we will continue to do deep dives on all things Ops Manager. Learn how to back up Ops Manager, configure it for high availability, set up SCRAM, and get an inside look at the architecture behind the MongoDB Enterprise Kubernetes Operator.

August 28, 2020
QuickStart

Free Your Genius With MongoDB Atlas Free Tier

What's free, lasts forever, and can help you explore all your app ideas? You guessed it — a MongoDB Atlas free tier cluster. Obviously there are plenty of reasons to use a more powerful cluster, but before you do, let's explore the capabilities of the Atlas free tier.

It's Free — Forever

You heard it here first, folks. You never have to pay for your free tier cluster, and you can keep it running for as long as you'd like, on us. That means there's no credit card required. We invest in the wonder of you, and all your great ideas. Why not test them all out?

New to Atlas?

MongoDB Atlas is our fully managed global cloud database, providing best-in-class infrastructure, automation, and proven practices that guarantee availability, scalability, and compliance with security standards. Atlas is available in more than 70 regions across AWS, GCP, and Azure on our M0 free tier or our paid tiers (starting at M2). The most popular features of Atlas are available on both paid and free clusters. Features like MongoDB Atlas Search help you design and deliver top-notch built-in search capabilities. The Collections view lets you inspect and interact with your data, and it also provides the Aggregation Pipeline Builder, which allows you to learn, test, and visualize MongoDB's aggregation framework. You can also use your free tier cluster to test out MongoDB solutions that come integrated with Atlas, such as MongoDB Realm and MongoDB Charts, which both have their own free tier versions. Free tier clusters are a great opportunity to start innovating at no cost, and if you decide you want something more powerful, upgrading to a larger tier has never been easier. To get yourself an account, sign up for Atlas today.

Atlas Veteran?

If you already know your way around a cloud database, why would you use a free tier cluster? No matter where you are in your Atlas journey, free tier clusters are useful for development environments, proofs of concept, checking out new Atlas features, and demos. And no matter how many projects you have, you can always spin up one free tier cluster per project.

It's Advantageous

Free tier clusters are deployed on the latest battle-tested version of MongoDB, meaning you're getting all the perks (on-demand materialized views, client-side FLE, wildcard indexes) with none of the cost. For more information on what you'd get with a free tier cluster today, read through this blog post, which discusses free tier features in MongoDB 4.2. Free tier clusters come with 512MB of storage. If you're curious about what you can do with that, we have a few ideas to get you started: build a search engine in less than 10 minutes with MongoDB Atlas Search, test out MongoDB with new programming languages, or kickstart your Atlas learning process with our sample data sets.

It's Distributed

Atlas free tier clusters are available on the cloud provider of your choice in the most popular regions, including AWS North Virginia, Google Cloud São Paulo, and Azure in the Netherlands. We'll be expanding to more regions in the future, so if you don't have access to your preferred region now, it won't be long until you do. To see all the free tier options available, simply create a new cluster in the Atlas UI. The best way to check out all the features available on the free tier is to play around with one yourself… What are you waiting for?

August 14, 2020
