As a result of a network migration from AWS Classic networking to AWS VPC networking, the public-facing IP addresses of MongoDB’s services, including both JIRA (jira.mongodb.org) and MMS (mms.mongodb.com and api-backup.mongodb.com), will change on Thursday, January 29, 2015 (16:00 UTC / 11:00 EST).
The current and new IP addresses are documented here, along with a procedure for making a seamless change if you have them hard-coded anywhere.
This will only impact customers that:

* are using DNS TTL overrides,
* have our IP addresses hard-coded into one or more of their systems, or
* have firewalls controlling outbound access to those IP addresses.
Workarounds

If you have an IP address hard-coded anywhere, it will need to be changed on Thursday, January 29, 2015 (16:00 UTC / 11:00 EST), or access to our systems will be degraded once the IP address change has been made. Please also note that the old IPs will be decommissioned shortly afterwards, at which point access will be lost completely until your systems are updated.
Firewalls can be configured with both sets of IP addresses in advance, with the old addresses removed once the change has been made. DNS servers should be adjusted to honor the TTL returned during this process, or their caches will need to be cleared manually once the change has been made. Applications with hard-coded IP addresses should be switched to use DNS, or will need to be adjusted manually post-change.
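If you want to verify what your resolvers return before and after the cutover, a quick check along these lines may help (a minimal sketch, not an official MongoDB script):

```python
# Minimal sketch: print the IPv4 addresses your resolver currently returns
# for the affected hostnames, so you can confirm DNS caches and firewall
# rules have picked up the new addresses after the change.
import socket

HOSTNAMES = ["jira.mongodb.org", "mms.mongodb.com", "api-backup.mongodb.com"]

for host in HOSTNAMES:
    try:
        # getaddrinfo returns one tuple per address; index 4 holds (ip, port)
        addresses = sorted({info[4][0] for info in socket.getaddrinfo(host, 443, socket.AF_INET)})
        print("{}: {}".format(host, ", ".join(addresses)))
    except socket.gaierror as exc:
        print("{}: resolution failed ({})".format(host, exc))
```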
If you have any questions about this change or need help implementing any workarounds on your systems, feel free to file a ticket in JIRA prior to the change date and we will work with you to make sure that your team can make the appropriate updates on your side.
Leaf in the Wild: Qihoo Scales with MongoDB
Leaf in the Wild posts highlight real world MongoDB deployments. Read other stories about how companies are using MongoDB for their mission-critical projects.

100+ Apps, 1,500+ Instances, 20 Billion Queries per Day

Qihoo is China’s number one Android mobile distribution platform and the country’s top malware protection company, providing products for both web and mobile platforms. A MongoDB user since 2011, Qihoo has built over 100 different applications on MongoDB – including new services and migrations from MySQL and Redis – running on 1,500+ instances and supporting 20 billion queries per day. I had the chance to sit down with Yang Yan Jie, Senior DBA at Qihoo, to learn more about how and why they use MongoDB, his scaling best practices, and his recommendations for those getting started with the database.

Can you start by telling us about Qihoo?

Qihoo 360 Technology Co. Ltd. is a leading Chinese Internet company. At the end of June 2014, we had around 500 million monthly active PC Internet users and over 640 million mobile users. Recognizing malware protection as a fundamental need of all Internet and mobile users, we built our large user base by offering comprehensive, effective and user-friendly Internet and mobile security products and services to protect users' computers and mobile devices against malware and malicious websites. Our products and services are supported by our cloud-based security technology, which we believe is one of the most advanced and robust technologies in the malware protection industry. We monetize our user base primarily through online advertising and Internet value-added services. In terms of our market position, we are:

* a top three Internet company as measured by user base in China
* the No. 1 Android-based mobile distribution platform in China
* the No. 1 provider of Internet and mobile malware protection products and services in China
* the No. 2 PC search engine in China

When did Qihoo start using MongoDB?

We were a very early adopter of MongoDB, building our first applications on the database back in 2011. I think we were using version 1.8 then!

How is Qihoo using MongoDB today?

MongoDB has become our standard modern database platform. We now have over 100 applications powered by MongoDB – both external customer-facing services and internal business applications. In total we have more than 1,500 MongoDB instances running on our in-house built “HULK” cloud platform, collectively serving 20 billion queries per day. Three particularly critical applications for our business are:

* Location-based mobile search. We use MongoDB with its geospatial indexes and queries to deliver geo-aware search results to mobile users. The user can be searching for anything, from a local restaurant, to a shop, to a car dealership. The app detects their location and serves search results based on proximity. MongoDB handles 1.2 billion queries per day from this application.
* Caching layer for user authentication data. Qihoo is a central portal for many Chinese Internet users. We have many partners that our users can connect to directly after logging into our site. We provide Single Sign On (SSO) to multiple services so users don’t need to keep providing their security credentials as they navigate around the web. The user’s SSO session is cached in MongoDB for ultra-fast access. MongoDB supports millions of concurrent users, handling 30,000 operations per second and 1.8 billion queries daily.
* Log analytics platform. We need to know our infrastructure is running well, and our internal business users also want to measure user engagement with new promotions and campaigns. To accomplish this, we collect log data from all of our Linux, Apache web server and Tomcat servers, and stream it directly into MongoDB. From there, our internal business users can generate real time analytics and reports using our PHP-based Business Intelligence (BI) platform. MongoDB stores 2.5 billion documents at any one time across 18 shards configured with 3-node replica sets for always-on availability. MongoDB serves nearly 3 billion queries per day, including 1 billion writes.
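None of Qihoo’s code is public, but as a rough sketch of the three access patterns just described – proximity search, an expiring SSO session cache, and log ingestion – the PyMongo equivalents might look like this (collection and field names are hypothetical):

```python
# Hypothetical PyMongo sketch of the three patterns above; names are
# illustrative, not Qihoo's actual schema.
from datetime import datetime, timezone
from pymongo import MongoClient, ASCENDING, GEOSPHERE

db = MongoClient("mongodb://localhost:27017")["demo"]

# 1. Geo-aware search: a 2dsphere index enables proximity queries.
db.places.create_index([("location", GEOSPHERE)])
nearby = db.places.find({
    "location": {"$near": {
        "$geometry": {"type": "Point", "coordinates": [116.40, 39.90]},  # [lon, lat]
        "$maxDistance": 5000,  # metres
    }}
}).limit(10)

# 2. SSO session cache: a TTL index expires stale sessions automatically.
db.sessions.create_index([("createdAt", ASCENDING)], expireAfterSeconds=3600)
db.sessions.insert_one({
    "userId": "u123",
    "token": "opaque-session-token",
    "createdAt": datetime.now(timezone.utc),
})

# 3. Log ingestion: stream structured log events straight into a collection.
db.logs.insert_one({
    "host": "web-01",
    "source": "apache",
    "level": "INFO",
    "message": "GET /index.html 200",
    "ts": datetime.now(timezone.utc),
})
```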
What other databases do you use?

MongoDB is one of the three database technologies used in our company. It isn’t necessarily suitable for all applications, so we also use MySQL for relational data problems and Redis for certain caching use-cases. Over time, we have migrated more than a dozen projects from MySQL and Redis to MongoDB.

What factors drove this migration?

Our goal is to use the best technology where it best fits. In the case of MySQL, migration was driven by scalability and developer productivity. As a relational database, MySQL does not scale out, so as our user base grew above 100 million active users, we hit the limits of how far we could push MySQL. MongoDB auto-sharding allows us to scale on demand using commodity hardware. The MongoDB data model is also far more flexible. Our developers can get more done and iterate faster with MongoDB than they can with the relational model. In the case of Redis, the migrations were driven by cost and flexibility. We found that MongoDB meets our low latency caching requirements for many applications, while its on-disk persistence reduces the need to provision costly systems configured with high-memory footprints. In addition, there is much more you can do with MongoDB’s document data model than you can with Redis’ key-value model. This translates directly to richer application functionality. For applications where data volumes are expected to grow rapidly, we choose MongoDB over Redis.

Tell us about the platforms you are running MongoDB on.

Most of our applications are PHP based. We run CentOS on x86 hardware. We have standardized on local SSD storage as this gives us the best performance. We are running MongoDB 2.4 and the latest 2.6 releases. We are also looking forward to MongoDB 3.0!

How is MongoDB configured?

We run both single replica sets and sharded clusters, depending on the application. We have data centers across the country, with the main ones located in Beijing. We deploy MongoDB on our private cloud across multiple data centers, both for disaster recovery and for low latency local reads and writes. We don’t control our own fiber, so network quality is out of our control. For the most critical apps, we spin up identical MongoDB clusters in multiple data centers and use our own message queue to replicate between them – this gives us assurance of maintaining availability in the face of network partitions.
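As a small illustration of the “low latency local reads” point (a sketch with placeholder hostnames, not Qihoo’s actual topology), a client can ask the driver to read from the nearest replica set member:

```python
# Sketch: with replica set members spread across data centers, a "nearest"
# read preference routes reads to the lowest-latency member. Hostnames are
# placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://bj-1.example.com,bj-2.example.com,sh-1.example.com/?replicaSet=rs0",
    readPreference="nearest",
)
profile = client["demo"]["profiles"].find_one({"userId": "u123"})
```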
How do you manage your MongoDB deployment?

We have developed a centralized orchestration web platform, which we call the HULK cloud platform. It is used by nearly all of our technical engineers to control our mission critical infrastructure and services. It is a complex piece of engineering which we are very proud of. When we originally started the cloud platform project, we hoped it would allow our engineers to stand on the shoulders of giants, relying on the platform to speed up the time to market for their applications. Hence we named it “HULK”. HULK currently provides elastic services such as web, relational database, NoSQL and distributed storage. At the same time, the open platform concept attracted various internal teams to move their applications onto the platform. The re-platforming of these applications provided immediate access to other lines of business internally, and in the process we helped the business groups attain higher efficiency and greater technology expertise.

MongoDB is one of the most critical services on HULK and it is fully integrated into the platform with a high degree of automation, allowing us to operate more than 1,500 MongoDB instances with just one and a half DBAs. The DBAs can perform “one click deployment” and “one click upgrade” tasks via the HULK management interface. All backup and monitoring is fully automated. For instance, if you add a new MongoDB node or cluster, HULK automatically configures the monitoring and backup strategy, as well as deploying the necessary agents. Developers can monitor a multitude of MongoDB metrics and status. In addition, they can open a ticket right on the management portal itself, instead of using email or IM, all with a few mouse clicks.

How do you backup MongoDB?

We use a combination of approaches, governed by the application’s RPO and RTO objectives:

* Filesystem backups. This is the default approach. We shut down a secondary replica set member and snapshot the filesystem image.
* Incremental replication. For continuous backup, we have built a tool that tails the MongoDB oplog (the core idea is sketched after this list). We use this approach for more critical apps where we need faster restoration of service.
* Delayed replicas. We use this approach for additional assurance, again governed by how quickly we need to bring the data back.
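Qihoo’s incremental replication tool is not public, but tailing the oplog with a tailable cursor can be sketched as follows (assumes a replica set; a real tool would durably checkpoint its position and apply each operation to the backup target):

```python
# Minimal oplog-tailing sketch of the incremental replication idea above.
import time
from pymongo import MongoClient, CursorType

oplog = MongoClient("mongodb://localhost:27017")["local"]["oplog.rs"]

# Start from the newest entry; "$natural" order is insertion order.
last = oplog.find_one(sort=[("$natural", -1)])["ts"]

while True:
    cursor = oplog.find({"ts": {"$gt": last}}, cursor_type=CursorType.TAILABLE_AWAIT)
    for entry in cursor:
        last = entry["ts"]
        print(entry["op"], entry["ns"])  # e.g. ("i", "demo.logs") for an insert
    time.sleep(1)  # the cursor died (e.g. it caught up and timed out); retry
```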
Can you share any best practices on scaling your MongoDB infrastructure?

There are three tips I would like to share:

* From a DBA perspective, invest time to understand application usage. The developers will give their guidance, but we generally take any number they give us and add 50%!
* If you encounter performance issues, start with your hardware. We found upgrading from hard disks to SSDs gave us an instant performance boost without any other optimizations.
* For highly dynamic, write-intensive workloads, make sure you monitor storage fragmentation and compact regularly if needed.

Are you measuring the impact of MongoDB on your business?

Yes – in terms of time to market. An example of the impact this makes is our reaction to the 2014 earthquake in Yunnan province. Everyone in China wanted access to the latest updates and to be able to check in on friends and family in the region. The business felt the best way to do this was to build an app that verified and then consolidated newsfeeds from multiple sources. We designed the app in the morning after the earthquake, coded it in the afternoon and launched it in the evening. One business day from concept to production. Only MongoDB could support that velocity of development.

Are you looking forward to MongoDB 3.0?

We started testing MongoDB 3.0 and filing bugs as soon as we could get our hands on the first release candidate. We are especially excited about document level concurrency control. This will further improve write scaling and fully saturate the latest generation of dense multi-core systems we are using now. Compression is also a huge benefit for us. We have standardized on SSDs, so compression means we can pack more onto each drive, which will bring costs down. It will also give us another performance boost, as fewer bits are read from disk, making better use of disk I/O cycles.

What advice would you give to those considering using MongoDB for their next project?

MongoDB’s document data model and dynamic schema bring great flexibility and power. But they also bring great responsibility! I’d recommend not storing multitudes of different document types and formats within a single collection, as it makes ongoing application maintenance complex. Split out documents of different types and structures into their own collections. We have implemented tools that scan and sample documents from each collection. If variances in structure exceed our best practices, we alert the devs so they can go and address the issue. So that is where I’d start.

Mr. Yang – I’d like to thank you for taking the time to share your insights with the MongoDB community.

Struggling to scale your relational database? Download our Migration White Paper.

About the Author - Mat Keep

Mat is part of the MongoDB product marketing team, responsible for building the vision, positioning and content for MongoDB’s products and services, including the analysis of market trends and customer requirements. Prior to MongoDB, Mat was director of product management at Oracle Corp. with responsibility for the MySQL database in web, telecoms, cloud and big data workloads. This followed a series of sales, business development and analyst/programmer positions with both technology vendors and end-user companies.
How DataSwitch And MongoDB Atlas Can Help Modernize Your Legacy Workloads
Data modernization is here to stay, and DataSwitch and MongoDB are leading the way forward. Research strongly indicates that the future of the Database Management System (DBMS) market is in the cloud, and the ideal way to shift from an outdated, legacy DBMS to a modern, cloud-friendly data warehouse is through data modernization.

There are a few key factors driving this shift. Increasingly, companies need to store and manage unstructured data in a cloud-enabled system, as opposed to a legacy DBMS, which is designed only for structured data. Moreover, the amount of data generated by a business is increasing at a rate of 55% to 65% every year, and the majority of it is unstructured. A modernized database that can improve data quality and availability provides tremendous benefits in performance, scalability, and cost optimization. It also provides a foundation for improving business value through informed decision-making. Additionally, cloud-enabled databases support greater agility, so you can upgrade current applications and build new ones faster to meet customer demand.

Gartner predicts that by 2022, 75% of all databases will be on the cloud – either by direct deployment or through data migration and modernization. But research shows that over 40% of migration projects fail. This is due to challenges such as:

* Inadequate knowledge of legacy applications and their data design
* Complexity of code and design from different legacy applications
* Lack of automation tools for transforming from legacy data processing to cloud-friendly data and processes

It is essential to take a strategic approach and choose the right partner for your data modernization journey. We’re here to help you do just that.

Why MongoDB?

MongoDB is built for modern application developers and for the cloud era. As a general purpose, document-based, distributed database, it facilitates high productivity and can handle huge volumes of data. The document database stores data in JSON-like documents and is built on a scale-out architecture that is optimal for any developer who builds scalable applications through agile methodologies. Ultimately, MongoDB fosters business agility, scalability and innovation.

Key MongoDB advantages include:

* Rich JSON documents
* Powerful query language
* Multi-cloud data distribution
* Security for sensitive data
* Quick storage and retrieval of data
* Capacity for huge volumes of data and traffic
* Design that supports greater developer productivity
* Extreme reliability for mission-critical workloads
* Architecture built for optimal performance and efficiency

Key advantages of MongoDB Atlas, MongoDB’s hosted database as a service, include:

* Multi-cloud data distribution
* Security for sensitive data
* Design for developer productivity
* Reliability for mission-critical workloads
* Optimal performance
* Managed operational efficiency

JSON documents are the most productive way to work with data, as they support nested objects and arrays as values, along with schemas that are flexible and dynamic. MongoDB’s powerful query language enables sorting and filtering on any field, regardless of how deeply it is nested in a document. Moreover, it provides support for aggregations as well as modern use cases including graph search, geo-based search and text search. Queries are themselves expressed in JSON and are easy to compose. MongoDB provides support for joins in queries, and supports two types of relationships: referencing and embedding. It has all the power of a relational database and much, much more.
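To make those claims concrete, here is a small, hypothetical PyMongo example of filtering on a nested field and running an aggregation with a $lookup join (collection names are invented for illustration):

```python
# Illustrative queries for the capabilities described above; the "orders"
# and "customers" collections are hypothetical.
from pymongo import MongoClient, DESCENDING

db = MongoClient("mongodb://localhost:27017")["shop"]

# Sort and filter on a field nested inside an embedded document.
recent_gold = db.orders.find({"customer.tier": "gold"}).sort("createdAt", DESCENDING)

# Aggregate total spend per customer, then join against another collection.
top_spenders = db.orders.aggregate([
    {"$group": {"_id": "$customerId", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},
    {"$limit": 10},
    {"$lookup": {
        "from": "customers",
        "localField": "_id",
        "foreignField": "_id",
        "as": "customer",  # embeds the matching customer document(s)
    }},
])
```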
Companies of all sizes can use MongoDB, as it operates on a large and mature platform ecosystem. Developers enjoy a great user experience, with the ability to provision MongoDB Atlas clusters and commence coding instantly. A global community of developers and consultants makes it easy to get the help you need, if and when you need it. In addition, MongoDB supports all major languages and provides enterprise-grade support.

Why DataSwitch as a partner for MongoDB?

Automated schema re-design, data migration & code conversion

DataSwitch is a trusted partner for cost-effective, accelerated solutions for digital data transformation, migration and modernization through a modern database platform. Our no-code and low-code solutions, along with cloud data expertise and unique, automated schema generation, accelerate time to market. We provide end-to-end data, schema and process migration with automated replatforming and refactoring, thereby delivering:

* 50% faster time to market
* 60% reduction in total cost of delivery
* Assured quality with built-in best practices, guidelines and accuracy

Data modernization: how “DataSwitch Migrate” helps you migrate from an RDBMS to MongoDB

DataSwitch Migrate (“DS Migrate”) is a no-code and low-code toolkit that leverages advanced automation to provide intuitive, predictive and self-serviceable schema redesign from a traditional RDBMS model to MongoDB’s document model with built-in best practices. Based on data volume, performance, and criticality, DS Migrate automatically recommends the appropriate ETTL (Extract, Transfer, Transform & Load) data migration process. DataSwitch delivers data engineering solutions and transformations in half the timeframe of typical existing data modernization solutions. Consider these key areas:

* Schema redesign – construct a new framework for data management. DS Migrate provides automated data migration and transformation based on your redesigned schema, as well as no-touch code conversion from legacy data scripts to MongoDB Atlas APIs. Users can simply drag and drop the schema for redesign, and the platform converts it to a document-based JSON structure by applying MongoDB modeling best practices. The platform then automatically migrates data to the new, re-designed JSON structure and converts the legacy database script for MongoDB. This automated, user-friendly data migration is faster than anything you’ve ever seen. (A hand-written sketch of this kind of restructuring follows this list.)
* Refactoring – change the data structure to match the new schema. DS Migrate handles this through automatic code generation for migrating the data. This goes far beyond a mere lift and shift: DataSwitch takes care of refactoring and replatforming (moving from the legacy platform to MongoDB) automatically, performing all of these tasks within a single platform.
* Security – mask and tokenize data while moving it from on-premise to the cloud. As the data is moving to a potentially public cloud, you must keep it secure. DataSwitch’s tool can configure and apply security measures automatically while migrating the data.
* Data quality – ensure that data is clean, complete, trustworthy and consistent. DataSwitch allows you to configure your own quality rules and apply them automatically during data migration.
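DS Migrate’s generated output isn’t shown here, but the kind of restructuring it automates can be sketched by hand: rows from a normalized parent/child pair of tables collapse into a single document with an embedded array (table and field names are invented):

```python
# Hand-written sketch of a 3NF-to-document redesign: a customers table and
# its child addresses table become one document with an embedded array.
# This illustrates only the target shape, not DS Migrate itself.
customer_row = {"id": 42, "name": "Acme Corp"}
address_rows = [
    {"customer_id": 42, "city": "Beijing", "kind": "billing"},
    {"customer_id": 42, "city": "Shanghai", "kind": "shipping"},
]

customer_doc = {
    "_id": customer_row["id"],
    "name": customer_row["name"],
    # The 1:N join disappears; children are embedded where they are read.
    "addresses": [
        {"city": a["city"], "kind": a["kind"]}
        for a in address_rows
        if a["customer_id"] == customer_row["id"]
    ],
}
print(customer_doc)
```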
In summary: first, the DataSwitch tool automatically extracts the data from an existing database, such as Oracle. It then exports the data and stores it locally before zipping and transferring it to the cloud. Next, DataSwitch transforms the data by altering the data structure to match the re-designed schema, applying data security measures during the transform step. Lastly, DS Migrate loads the data and processes it into MongoDB in its entirety.

Process conversion

Process conversion, where scripts and process logic are migrated from a legacy DBMS to a modern DBMS, is made easier thanks to a high degree of automation. Minimal coding and manual intervention are required, and the journey is accelerated. It involves:

* DML – Data Manipulation Language
* CRUD – typical application functionality (Create, Read, Update & Delete)
* Converting to the equivalent MongoDB Atlas API calls

Degree of automation DataSwitch provides during migration:

Schema Migration Activities (DS automation capability):
* Application Data Usage Analysis – 70%
* 3NF to NoSQL Schema Recommendation – 60%
* Schema Re-Design Self Services – 50%
* Predictive Data Mapping – 60%

Process Migration Activities (DS automation capability):
* CRUD-based SQL conversion (Oracle, MySQL, SQL Server, Teradata, DB2) to MongoDB API – 70%

Data Migration Activities (DS automation capability):
* Migration Script Creation – 90%
* Historical Data Migration – 90%
* Catch-up Load – 90%

DataSwitch Legacy Modernization as a Service (LMaaS)

Our consulting expertise combined with the DS Migrate tool allows us to harness the power of the cloud for data transformation of legacy RDBMS data systems to MongoDB. Our solution delivers legacy transformation in half the time frame through pay-per-usage. Key strengths include:

* Data Architecture Consulting
* Data Modernization Assessment and Migration Strategy
* Specialized Modernization Services

DS Migrate Architecture Diagram

Contact us to learn more.