EY recently announced that Dwight Merriman, co-founder and chairman of MongoDB, Inc., is a finalist for the EY Entrepreneur Of The Year™ 2014 Award in the New York region. The awards program recognizes entrepreneurs who demonstrate excellence and extraordinary success in such areas as innovation, financial performance and personal commitment to their businesses and communities. Dwight Merriman was selected as a finalist by a panel of independent judges. Award winners will be announced at a special gala event on Tuesday, June 17th at the Marriott Marquis.
Now in its 28th year, the program has expanded to recognize business leaders in more than 145 cities in more than 60 countries throughout the world.
Regional award winners are eligible for consideration for the EY Entrepreneur Of The Year National program. Award winners in several national categories, as well as the EY Entrepreneur Of The Year National Overall Award winner, will be announced at the annual awards gala in Palm Springs, California, on November 15, 2014. The awards are the culminating event of the EY Strategic Growth Forum®, the nation’s most prestigious gathering of high-growth, market-leading companies.
About EY Entrepreneur Of The Year™
EY Entrepreneur Of The Year™ is the world’s most prestigious business award for entrepreneurs. The unique award makes a difference through the way it encourages entrepreneurial activity among those with potential and recognizes the contribution of people who inspire others with their vision, leadership and achievement. As the first and only truly global award of its kind, Entrepreneur Of The Year celebrates those who are building and leading successful, growing and dynamic businesses, recognizing them through regional, national and global awards programs in more than 145 cities in more than 60 countries.
MongoDB Backup Strategies Compared
In a recent webinar, Andrew Erlichson, MongoDB VP of Education & Cloud Services, delivered a presentation comparing and contrasting different MongoDB backup strategies. We thought we'd continue on that topic by comparing these options in depth on the MMS blog. There are three main approaches to backing up MongoDB:

- mongodump, a utility bundled with the MongoDB database
- Filesystem snapshots, such as those provided by Linux LVM or AWS EBS
- MongoDB Management Service (MMS), a fully managed cloud service that provides continuous backup for MongoDB (also available as on-prem software with a MongoDB subscription)

Let's look at each of these strategies in further detail.

Backup Strategy #1: mongodump

mongodump is a tool bundled with MongoDB that performs a backup of the data in MongoDB. It may be used to dump an entire database, a collection, or the result of a query, and it can produce a consistent snapshot of the data by also dumping the oplog. You can then use the mongorestore utility to restore the data to a new or existing database: mongorestore imports content from the BSON database dumps produced by mongodump and replays the oplog.

mongodump is a straightforward approach and has the benefit of producing backups that can be filtered based on your specific needs. While it is sufficient for small deployments, it is not appropriate for larger systems: mongodump exerts too much load to be a truly scalable solution. It is not an incremental approach, so it requires a complete dump at each snapshot point, which is resource-intensive. As your system grows, you should evaluate lower-impact solutions such as filesystem snapshots or MMS. In addition, while the complexity of deploying mongodump for small configurations is fairly low, the complexity of deploying it in large sharded systems can be significant.

Backup Strategy #2: Copying the Underlying Files

You can back up MongoDB by copying the underlying files that the database process uses to store data.
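As a concrete sketch of the Strategy #1 workflow described above, a consistent dump and restore of a replica set member might look like the following. The hostnames, port, and backup path are illustrative, not prescriptive.

```shell
# Hypothetical example: dump a replica set member with the oplog included,
# so the backup represents a single point in time.
mongodump --host rs0.example.com --port 27017 --oplog --out /backups/2014-06-01

# Restore into a new deployment, replaying the captured oplog so the
# restored data is consistent to the moment the dump finished.
mongorestore --host restore.example.com --port 27017 --oplogReplay /backups/2014-06-01
```

Note that --oplog only applies when dumping a replica set member; for a standalone mongod there is no oplog to capture.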
To obtain a consistent snapshot of the database, you must either stop all writes to the database and use standard file system copy tools, or create a snapshot of the entire file system, if your volume manager supports it. For example, Linux LVM quickly and efficiently creates a consistent snapshot of the file system that can be copied for backup and restore purposes. To ensure that the snapshot is logically consistent, you must have journaling enabled within MongoDB.

Because backups are taken at the storage level, filesystem snapshots can be a more efficient approach than mongodump for taking full backups and restoring them. However, it is a coarser approach in that you don't have the flexibility to target specific databases or collections in your backup. This may result in large backup files, which in turn may result in long-running backup operations. Implementing filesystem snapshots also requires ongoing maintenance as your system evolves and becomes more complex, and coordinating backups across multiple replica sets, particularly in a sharded system, requires devops expertise to ensure consistency across the various components.

Backup Strategy #3: MongoDB Management Service (MMS)

MongoDB Management Service provides continuous, online backup for MongoDB as a fully managed service. You install the Backup Agent in your environment, which conducts an initial sync to MongoDB's secure and redundant datacenters. After the initial sync, the agent streams encrypted and compressed oplog data to MMS so that you have a continuous backup. By default, MMS takes snapshots every 6 hours, and oplog data is retained for 24 hours. The snapshot schedule and retention policy can be configured to meet your requirements. You also have the flexibility to exclude non-mission-critical databases and collections. For replica sets, a custom, point-in-time snapshot can be restored for any moment in the last 24 hours.
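To illustrate the LVM approach from Strategy #2, a snapshot of the volume holding MongoDB's data files might be taken and archived as follows. The volume group, volume names, and sizes are illustrative; this assumes journaling is enabled and the journal resides on the same volume as the data files.

```shell
# Hypothetical example: create an LVM snapshot of the volume backing
# MongoDB's data files. The snapshot size caps how much change the
# snapshot can absorb while it exists.
lvcreate --size 100G --snapshot --name mdb-snap01 /dev/vg0/mongodb

# Archive the snapshot to a compressed image for off-box storage.
dd if=/dev/vg0/mdb-snap01 | gzip > /backups/mdb-snap01.gz

# Remove the snapshot once archived to avoid exhausting snapshot space.
lvremove /dev/vg0/mdb-snap01
```

Restoring is the reverse: provision a fresh volume, unpack the image onto it, and start mongod against the restored data files.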
For sharded systems, MMS produces a consistent snapshot of the cluster every 6 hours. You also have the option of using checkpoints for more granular restores of sharded clusters. Because MMS reads only the oplog, the ongoing performance impact is minimal, similar to that of adding another node to the replica set. In addition to the cloud-based service, MMS is available as on-prem software as part of a MongoDB Standard or Enterprise Subscription.

Learn More: To get a more in-depth review of MongoDB backup strategies, read Backup and its Role in Disaster Recovery.
4 Ways to Create a Zero Trust Environment in Financial Services
For years, security professionals protected their IT much like medieval guards protected a walled city: they made it as difficult as possible to get inside. Once someone was past the perimeter, however, they had generous access to the riches within. In the financial sector, this meant access to personally identifiable information (PII), including a "marketable data set" of credit card numbers, names, social security numbers, and more. Sadly, many such breaches have occurred, adversely affecting end users. A famous example is the 2017 Equifax incident, in which a breach exposed the personal information of more than 140 million people and led to years of unhappy customers.

Since then, the security mindset has changed. Users increasingly access networks and applications from any location, on any device, on platforms hosted in the cloud, and the classic point-to-point security approach is obsolete. The perimeter has changed, so reliance on it as a protective barrier has changed as well. Given the huge amount of confidential client and customer data that the financial services industry deals with on a daily basis, and the strict regulations that govern it, security needs to be an even higher priority. The perceived value of this data also makes financial services organizations a primary target for data breaches. In this article, we'll examine a different approach to security, called zero trust, that can better protect your assets.

Paradigm shift

Zero trust presents a new paradigm for cybersecurity. In a zero trust environment, the perimeter is assumed to have been breached; there are no trusted users, and no user or device gains trust simply because of its physical or network location. Every user, device, and connection must be continually verified and audited. Here are four concepts to know about creating a zero trust environment.

1. Securing the data

Although ensuring access to banking apps and online services is vital, the database, which is the backend of these applications, is a key part of creating a zero trust environment.
The database contains much of an organization's sensitive and regulated information, along with data that may not be sensitive but is critical to keeping the organization running. Thus, it is imperative that a database be ready and able to work in a zero trust environment. As more databases become cloud-based services, an important aspect is ensuring that the database is secure by default, meaning it is secure out of the box. This approach takes some of the responsibility for security out of the hands of administrators, because the highest levels of security are in place from the start, without requiring attention from users or administrators. To allow access, users and administrators must proactively make changes; nothing is automatically granted.

As more financial institutions embrace the cloud, securing data can get more complicated. Security responsibilities are divided between the clients' own organization, the cloud providers, and the vendors of the cloud services being used. This approach is known as the shared responsibility model. It moves away from the classic model in which IT owns hardening of the servers and security, then hardens the software on top (for example, the version of the database software), and then hardens the actual application code. In the shared responsibility model, the hardware (CPU, network, storage) is solely in the realm of the cloud provider that provisions these systems. The service provider for a Database-as-a-Service model then delivers the database hardened to the client with a designated endpoint. Only then do the client team, their application developers, and their DevOps team come into play for the actual solution. Security and resilience in the cloud are only possible when everyone is clear on their roles and responsibilities.
Shared responsibility recognizes that cloud vendors ensure that their products are secure by default, while still available, but also that organizations take appropriate steps to continue to protect the data they keep in the cloud.

2. Authentication for customers and users

In banks and finance organizations, there is a lot of focus on customer authentication, or making sure that accessing funds is as secure as possible. It's also important, however, to ensure secure access to the database on the other end. An IT organization can use various methods to allow users to authenticate themselves to a database. Most often, the process includes a username and password. But, given financial services organizations' increased need to maintain the privacy of confidential customer information, this step should be viewed as only a base layer. At the database layer, it is important to have transport layer security and SCRAM authentication, which enable traffic from clients to the database to be authenticated and encrypted in transit.

Passwordless authentication should also be considered, and not just for customers but for internal teams as well. This can be done in multiple ways with the database; for example, auto-generated certificates may be required to access the database. Advanced options exist for organizations that already use X.509 certificates and have a certificate management infrastructure.

3. Logging and auditing

In the highly regulated financial industry, it is also important to monitor your zero trust environment to ensure that it remains in force and encompasses your database. The database should be able to log all actions, or have functionality to apply filters to capture only specific events, users, or roles. Role-based auditing lets you log and report activities by specific roles, such as userAdmin or dbAdmin, coupled with any roles inherited by each user, rather than having to extract activity for each individual administrator.
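As a sketch of what role-based auditing can look like in practice, a MongoDB Enterprise node (whose audit subsystem supports such filters) might be started with an audit filter that captures only events tied to a particular role. The paths and filter values below are illustrative assumptions, not prescriptive settings.

```shell
# Hypothetical example: start a MongoDB Enterprise mongod that writes a
# JSON audit log, filtered to events associated with the userAdmin role
# on the admin database. Paths and filter values are illustrative.
mongod --dbpath /data/db --auth \
  --auditDestination file \
  --auditFormat JSON \
  --auditPath /var/log/mongodb/audit.json \
  --auditFilter '{ "roles": { "role": "userAdmin", "db": "admin" } }'
```

Narrowing the filter like this keeps audit volume manageable while still producing the per-role activity trail that compliance reporting needs.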
This approach makes it easier for organizations to enforce end-to-end operational control and maintain the insight necessary for compliance and reporting.

4. Encryption

With large amounts of valuable data, financial institutions also need to make sure that they are embracing encryption: in flight, at rest, and even in use. Securing data with client-side, field-level encryption allows you to move to managed services in the cloud with greater confidence. The database only works with encrypted fields, and organizations control their own encryption keys rather than having the database provider manage them. This additional layer of security enforces an even more fine-grained separation of duties between those who use the database and those who administer and manage it.

Also, as more data is transmitted and stored in the cloud, some of it in highly sensitive workloads, additional technical options to control and limit access to confidential and regulated data are needed. However, this data still needs to be used, so ensuring that in-use data encryption is part of your zero trust solution is vital. This approach enables organizations to confidently store sensitive data, meeting compliance requirements while also enabling different parts of the business to gain access to and insights from it.

Conclusion

In a world where the security of data is only becoming more important, financial services organizations rank among those with the most to lose if data gets into the wrong hands. Ditching the perimeter mentality and moving toward zero trust, especially as more cloud and as-a-service offerings are embedded in infrastructure, is the only way to truly protect such valuable assets.

Learn more about developing a strategic advantage in financial services. Read the ebook now.