Tag Archive for mongodb

Powering Microservices with MongoDB, Docker, Kubernetes & Kafka

Andrew Morgan presenting on Microservices at MongoDB Europe 2016

The slides and recording from my session at MongoDB Europe 2016 are now available. The presentation covers microservices and some of the key technologies that enable them.

Session Summary

Organisations are building their applications around microservice architectures because of the flexibility, speed of delivery, and maintainability they deliver.

Want to try out MongoDB on your laptop? Execute a single command and you have a lightweight, self-contained sandbox; another command removes all trace when you’re done. Need an identical copy of your application stack in multiple environments? Build your own container image and then your entire development, test, operations, and support teams can launch an identical clone environment.
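For example, a throw-away MongoDB sandbox can be created and destroyed with a couple of Docker commands (a rough sketch; the container name and the use of the official mongo image are illustrative):

# Launch a lightweight, self-contained MongoDB sandbox
docker run --name mongodb-sandbox -d -p 27017:27017 mongo

# Remove all trace of it when you're done (-v also removes its anonymous volumes)
docker rm -fv mongodb-sandbox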

Containers are revolutionising the entire software lifecycle: from the earliest technical experiments and proofs of concept through development, test, deployment, and support. Orchestration tools manage how multiple containers are created, upgraded and made highly available. Orchestration also controls how containers are connected to build sophisticated applications from multiple, microservice containers.

This session introduces you to technologies such as Docker, Kubernetes & Kafka which are driving the microservices revolution. Learn about containers and orchestration – and most importantly how to exploit them for stateful services such as MongoDB.

Recording

Slides





Language-Specific Views in MongoDB 3.4

Introduction

This post shows you how to create multiple language-specific views on top of a common collection. Each view is optimized for its language with a collated index which only presents entries for documents in that language. Additionally, each view excludes some fields from the underlying collection – further limiting the data that can be seen through that view. Finally, user-defined roles are created to restrict users to just the view(s) they should be able to see, ensuring they can only access the data that they’re entitled to.

In the course of setting up this environment, a number of features are demonstrated:

  • Read-Only Views (New in MongoDB 3.4): DBAs can define non-materialized views that expose only a subset of data from an underlying collection, i.e. a view that filters out entire documents or specific fields, such as Personally Identifiable Information (PII) from sales data or health records. As a result, the risk of data exposure is dramatically reduced. DBAs can define a view of a collection that’s generated from an aggregation over one or more other collections or views.
  • Multiple Language Collations (New in MongoDB 3.4): Applications addressing global audiences require handling content that spans many languages. Each language has different rules governing the comparison and sorting of data. MongoDB collations allow users to build applications that adhere to these language-specific comparison rules for over 100 different languages and locales. Developers can specify collations for collections, indexes, views, or for individual operations.
  • Partial Indexes: Partial indexes balance delivering good query performance while consuming fewer system resources. For example, consider an order processing application. The order collection is frequently queried by the application to display all incomplete orders for a particular user. Building an index on the userID field of the collection is necessary for good performance. However, only a small percentage of orders are in progress at a given time. Limiting the index on userID to contain only orders that are in the “active” state could reduce the number of index entries from millions to thousands, saving working set memory and disk space, while speeding up queries even further as smaller indexes result in faster searches.
  • User-Defined Roles: User-defined roles enable administrators to assign fine-grained privileges to users and applications, based on the specific functionality they require. MongoDB provides the ability to specify user privileges at the database, collection, and view levels.
  • MongoDB Compass: MongoDB Compass is the easiest way for DBAs to explore and manage MongoDB data. As the GUI for MongoDB, Compass enables users to visually explore their data, and run ad-hoc queries in seconds – all with zero knowledge of MongoDB’s query language. The latest Compass release expands functionality to allow users to manipulate documents directly from the GUI, optimize performance, and create data governance controls.

The Data Set

The example used in this post is built on a collection containing documents for customers from multiple countries – one of the fields indicates a customer’s country, but there is no field that identifies their spoken language. To fix that, we infer their language from their country to create a new language field in each document:

db.customers.updateMany(
    {country: "China"},
    {$set: {language: "Chinese"}})

db.customers.updateMany(
    {country: "Germany"},
    {$set: {language: "German"}})

db.customers.updateMany(
    {country: {$in: ["USA", "Canada", "United Kingdom"]}},
    {$set: {language: "English"}})

A typical document now looks like this:

db.customers.findOne()
{
    "_id" : ObjectId("57fb8fbb99b01440193088eb"),
    "first_name" : "Ben",
    "last_name" : "Dixon",
    "country" : "Germany",
    "avatar" : "https://robohash.org/quiseumquam.bmp?size=50x50&set=set1",
    "ip_address" : "10.102.15.35",
    "dependents" : [
        {
            "name" : "Ben",
            "birthday" : "12-Apr-1994"
        },
        {
            "name" : "Lucas",
            "birthday" : "22-Jun-2016"
        },
        {
            "name" : "Erik",
            "birthday" : "05-Jul-2005"
        }
    ],
    "birthday" : "02-Jul-1964",
    "salary" : "£910070.80",
    "skills" : [
        {
            "skill" : "Cvent"
        },
        {
            "skill" : "TKI"
        }
    ],
    "gender" : "Male",
    "language" : "German"
}

You might ask why we need to add this extra field rather than simply calculating the language each time it’s needed. The answer is that multiple countries share the same language and partial indexes don’t allow us to use the $or or $in operators.

At this stage, the only index on the collection is on the _id field:

db.customers.getIndexes()
[
    {
        "v" : 2,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "production.customers"
    }
]

If you want to work through this example for yourself then the following steps will populate a collection called “customers” in a database called production:

curl -o customers.tgz http://clusterdb.com/upload/customers.tgz
tar fxz customers.tgz
mongorestore

There should be 111,000 documents in the collection after running mongorestore:

use production
db.customers.findOne()
{
    "_id" : ObjectId("57fb8fbb99b01440193088eb"),
    "first_name" : "Ben",
    "last_name" : "Dixon",
    "country" : "Germany",
    "avatar" : "https://robohash.org/quiseumquam.bmp?size=50x50&set=set1",
    "ip_address" : "10.102.15.35",
    "dependents" : [
        {
            "name" : "Ben",
            "birthday" : "12-Apr-1994"
        },
        {
            "name" : "Lucas",
            "birthday" : "22-Jun-2016"
        },
        {
            "name" : "Erik",
            "birthday" : "05-Jul-2005"
        }
    ],
    "birthday" : "02-Jul-1964",
    "salary" : "£910070.80",
    "skills" : [
        {
            "skill" : "Cvent"
        },
        {
            "skill" : "TKI"
        }
    ],
    "gender" : "Male",
    "language" : "German"
}

db.customers.count()
111000

Adding Indexes

Collations allow values to be compared and sorted using rules specific to a given language. In this example, we are supporting three languages: English, German, and Chinese. For each of these languages, a collated index will later be used to correctly sort the customers based on their last and first names.

To this end, collation-specific, compound (last_name + first_name) indexes are created:

db.customers.createIndex( 
    {last_name: 1, first_name : 1 }, 
    {name: "chinese_name_index",
     collation: {locale: "zh" },
     partialFilterExpression: { language: "Chinese" } 
    }
);

db.customers.createIndex( 
    {last_name: 1, first_name : 1 }, 
    {name: "english_name_index",
     collation: {locale: "en" },
     partialFilterExpression: { language: "English" } 
    }
);

db.customers.createIndex( 
    {last_name: 1, first_name : 1 }, 
    {name: "german_name_index",
     collation: {locale: "de" },
     partialFilterExpression: { language: "German" } 
    }
);

The exact behaviour of comparisons and sorting using the collated index can be further refined by including additional parameters alongside the locale in the collation document. Details of these optional parameters can be found in the collation documentation.
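The same optional parameters can also be supplied for individual operations. As a sketch, a case-insensitive query against the base collection might look like this (the parameter values are illustrative):

db.customers.find({last_name: "Cole"}).collation({
    locale: "de",
    strength: 2,            // ignore case (but not diacritics) when comparing
    numericOrdering: true   // compare numeric strings by their numeric value
}).sort({last_name: 1, first_name: 1})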

Note that each of those indexes is partial, only containing entries for documents where language is set to the matching value. This saves memory and disk space, and speeds up both reads and writes.

This is the final set of indexes:

db.customers.getIndexes()
[
    {
        "v" : 2,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "production.customers"
    },
    {
        "v" : 2,
        "key" : {
            "last_name" : 1,
            "first_name" : 1
        },
        "name" : "german_name_index",
        "ns" : "production.customers",
        "partialFilterExpression" : {
            "language" : "German"
        },
        "collation" : {
            "locale" : "de",
            "caseLevel" : false,
            "caseFirst" : "off",
            "strength" : 3,
            "numericOrdering" : false,
            "alternate" : "non-ignorable",
            "maxVariable" : "punct",
            "normalization" : false,
            "backwards" : false,
            "version" : "57.1"
        }
    },
    {
        "v" : 2,
        "key" : {
            "last_name" : 1,
            "first_name" : 1
        },
        "name" : "chinese_name_index",
        "ns" : "production.customers",
        "partialFilterExpression" : {
            "language" : "Chinese"
        },
        "collation" : {
            "locale" : "zh",
            "caseLevel" : false,
            "caseFirst" : "off",
            "strength" : 3,
            "numericOrdering" : false,
            "alternate" : "non-ignorable",
            "maxVariable" : "punct",
            "normalization" : false,
            "backwards" : false,
            "version" : "57.1"
        }
    },
    {
        "v" : 2,
        "key" : {
            "last_name" : 1,
            "first_name" : 1
        },
        "name" : "english_name_index",
        "ns" : "production.customers",
        "partialFilterExpression" : {
            "language" : "English"
        },
        "collation" : {
            "locale" : "en",
            "caseLevel" : false,
            "caseFirst" : "off",
            "strength" : 3,
            "numericOrdering" : false,
            "alternate" : "non-ignorable",
            "maxVariable" : "punct",
            "normalization" : false,
            "backwards" : false,
            "version" : "57.1"
        }
    }
]

Create Views

A view is created for each language to:

  • Filter out any documents where the language field doesn’t match that of the view
  • Remove the salary, country, and language fields
  • Indicate which collation should be used

db.createView(
    "chineseSpeakersRedacted",
    "customers",
    [
        {$match: {
            language: "Chinese",
            last_name: {$exists: true}
        }},
        {$project: {
            salary: 0, 
            country: 0,
            language: 0
            }
        }
    ],
    {collation: {locale: "zh"}}
)

db.createView(
    "englishSpeakersRedacted",
    "customers",
    [
        {$match: {
            language: "English",
            last_name: {$exists: true}
        }},
        {$project: {
            salary: 0, 
            country: 0,
            language: 0
            }
        }
    ],
    {collation: {locale: "en"}}
)

db.createView(
    "germanSpeakersRedacted",
    "customers",
    [
        {$match: {
            language: "German",
            last_name: {$exists: true}
        }},
        {$project: {
            salary: 0, 
            country: 0,
            language: 0
            }
        }
    ],
    {collation: {locale: "de"}}
)

You might ask why last_name: {$exists: true} is included in the $match stage. The reason is that it encourages the optimizer to use our language-specific partial indexes when using these views.

Note that this is using the MongoDB Aggregation Framework and so you could add lots of other operations, including: unwinding arrays, looking up values from other collections, grouping data, and adding new, computed fields.
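As a purely illustrative (hypothetical) example, a view that unwinds each customer’s dependents and adds a computed count per customer could be defined like this:

db.createView(
    "germanDependentCounts",     // illustrative view name
    "customers",
    [
        {$match: {language: "German", last_name: {$exists: true}}},
        {$unwind: "$dependents"},                  // one document per dependent
        {$group: {
            _id: {last_name: "$last_name", first_name: "$first_name"},
            dependentCount: {$sum: 1}              // computed field
        }}
    ],
    {collation: {locale: "de"}}
)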

The views now appear like collections and can be queried in the same manner (note that they are read-only):

show collections

chineseSpeakersRedacted
customers
englishSpeakersRedacted
germanSpeakersRedacted
system.views

db.germanSpeakersRedacted.find({last_name: "Cole"}, {first_name:1, _id:0, gender:1}).sort({first_name: 1})
{ "first_name" : "Amelie", "gender" : "Female" }
{ "first_name" : "Amelie", "gender" : "Female" }
{ "first_name" : "Amelie", "gender" : "Female" }
{ "first_name" : "Amelie", "gender" : "Female" }
{ "first_name" : "Amelie", "gender" : "Female" }
{ "first_name" : "Amelie", "gender" : "Female" }
{ "first_name" : "Amelie", "gender" : "Female" }
{ "first_name" : "Amelie", "gender" : "Female" }
{ "first_name" : "Amelie", "gender" : "Female" }
{ "first_name" : "Anna", "gender" : "Female" }
{ "first_name" : "Anna", "gender" : "Female" }
{ "first_name" : "Anna", "gender" : "Female" }
{ "first_name" : "Anna", "gender" : "Female" }
{ "first_name" : "Anna", "gender" : "Female" }
{ "first_name" : "Anna", "gender" : "Female" }
{ "first_name" : "Anna", "gender" : "Female" }
{ "first_name" : "Anna", "gender" : "Female" }
{ "first_name" : "Anton", "gender" : "Male" }
{ "first_name" : "Anton", "gender" : "Male" }
{ "first_name" : "Anton", "gender" : "Male" }

The query above searches for all documents where the last_name is Cole (because this query is using the German view, behind the scenes, all non-German documents have already been filtered out), discards all but the first_name and gender fields, and then sorts by the first_name (using the German collation).

explain() confirms that the German collation index was used:

db.germanSpeakersRedacted.find({last_name: "Cole"}, {first_name:1, _id:0, gender:1}).sort({first_name: 1}).explain()
{
    "stages" : [
        {
            "$cursor" : {
                "query" : {
                    "$and" : [
                        {
                            "language" : "German",
                            "last_name" : {
                                "$exists" : true
                            }
                        },
                        {
                            "last_name" : "Cole"
                        }
                    ]
                },
                "fields" : {
                    "first_name" : 1,
                    "gender" : 1,
                    "_id" : 0
                },
                "queryPlanner" : {
                    "plannerVersion" : 1,
                    "namespace" : "production.customers",
                    "indexFilterSet" : false,
                    "parsedQuery" : {
                        "$and" : [
                            {
                                "language" : {
                                    "$eq" : "German"
                                }
                            },
                            {
                                "last_name" : {
                                    "$eq" : "Cole"
                                }
                            },
                            {
                                "last_name" : {
                                    "$exists" : true
                                }
                            }
                        ]
                    },
                    "collation" : {
                        "locale" : "de",
                        "caseLevel" : false,
                        "caseFirst" : "off",
                        "strength" : 3,
                        "numericOrdering" : false,
                        "alternate" : "non-ignorable",
                        "maxVariable" : "punct",
                        "normalization" : false,
                        "backwards" : false,
                        "version" : "57.1"
                    },
                    "winningPlan" : {
                        "stage" : "FETCH",
                        "filter" : {
                            "$and" : [
                                {
                                    "last_name" : {
                                        "$exists" : true
                                    }
                                },
                                {
                                    "language" : {
                                        "$eq" : "German"
                                    }
                                }
                            ]
                        },
                        "inputStage" : {
                            "stage" : "IXSCAN",
                            "keyPattern" : {
                                "last_name" : 1,
                                "first_name" : 1
                            },
                            "indexName" : "german_name_index",
                            "collation" : {
                                "locale" : "de",
                                "caseLevel" : false,
                                "caseFirst" : "off",
                                "strength" : 3,
                                "numericOrdering" : false,
                                "alternate" : "non-ignorable",
                                "maxVariable" : "punct",
                                "normalization" : false,
                                "backwards" : false,
                                "version" : "57.1"
                            },
                            "isMultiKey" : false,
                            "multiKeyPaths" : {
                                "last_name" : [ ],
                                "first_name" : [ ]
                            },
                            "isUnique" : false,
                            "isSparse" : false,
                            "isPartial" : true,
                            "indexVersion" : 2,
                            "direction" : "forward",
                            "indexBounds" : {
                                "last_name" : [
                                    "[\"-E?1\u0001\b\u0001\u0007\", \"-E?1\u0001\b\u0001\u0007\"]"
                                ],
                                "first_name" : [
                                    "[MinKey, MaxKey]"
                                ]
                            }
                        }
                    },
                    "rejectedPlans" : [ ]
                }
            }
        },
        {
            "$project" : {
                "language" : false,
                "country" : false,
                "salary" : false
            }
        },
        {
            "$sort" : {
                "sortKey" : {
                    "first_name" : 1
                }
            }
        },
        {
            "$project" : {
                "_id" : false,
                "gender" : true,
                "first_name" : true
            }
        }
    ],
    "ok" : 1

User-Defined Roles – Limiting Access to the Views

One of the reasons for creating the views was to protect some of the data (the customers’ salaries) as not all users should see this information. At this point, all users can still access the base “customers” collection and so we’ve fallen short of that objective. User-defined roles to the rescue!

We create an admin user that has the built-in root role and so can access any database, create new users, and perform any other activity:

use admin
db.createUser({
    user: "admin",
    pwd: "secret",
    roles: [
        {role:"root",db:"admin"}
        ]
    })

The next step is to create a role that only gives its members read access to the germanSpeakersRedacted view (within the production database):

use admin
db.createRole(
   {
     role: "germanViewer",
     privileges: [
       { resource: { db: "production", collection: "germanSpeakersRedacted" },  actions: [ "find" ] }
     ],
     roles: []
   }
)

You can then create one or more users that have germanViewer within their defined roles:

use admin
db.createUser({
    user: "germanIT",
    pwd: "secret",
    roles: [
        {role:"germanViewer",db:"admin"}
        ]
    })

Additional privileges can be added to existing roles using grantPrivilegesToRole and extra roles can be assigned to existing users using grantRolesToUser.
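For example (the additions below are hypothetical, but the shell helpers are the ones named above):

use admin

// Allow members of germanViewer to also read the English view
db.grantPrivilegesToRole(
    "germanViewer",
    [
        {resource: {db: "production", collection: "englishSpeakersRedacted"}, actions: ["find"]}
    ]
)

// Give an existing user an additional (built-in) role
db.grantRolesToUser(
    "germanIT",
    [{role: "clusterMonitor", db: "admin"}]
)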

For these access controls to work, users must be created with appropriate permissions and the MongoDB server process must be started with the --auth option:

mongod --auth

When connecting to the database as our newly-created admin user, the base customers collection is still accessible:

mongo -u admin -p secret --authenticationDatabase admin

use production
db.customers.findOne()

{
    "_id" : ObjectId("57fb8fbb99b01440193088eb"),
    "first_name" : "Ben",
    "last_name" : "Dixon",
    "country" : "Germany",
    "avatar" : "https://robohash.org/quiseumquam.bmp?size=50x50&set=set1",
    "ip_address" : "10.102.15.35",
    "dependents" : [
        {
            "name" : "Ben",
            "birthday" : "12-Apr-1994"
        },
        {
            "name" : "Lucas",
            "birthday" : "22-Jun-2016"
        },
        {
            "name" : "Erik",
            "birthday" : "05-Jul-2005"
        }
    ],
    "birthday" : "02-Jul-1964",
    "salary" : "£910070.80",
    "skills" : [
        {
            "skill" : "Cvent"
        },
        {
            "skill" : "TKI"
        }
    ],
    "gender" : "Male",
    "language" : "German"
}

When connecting as the germanIT user, only the German view can be accessed:

mongo -u germanIT -p secret --authenticationDatabase admin

use production

show collections
2016-10-28T10:24:03.765+0100 E QUERY    [main] Error: listCollections failed: {
    "ok" : 0,
    "errmsg" : "not authorized on production to execute command { listCollections: 1.0, filter: {} }",
    "code" : 13,
    "codeName" : "Unauthorized"
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype._getCollectionInfosCommand@src/mongo/shell/db.js:805:1
DB.prototype.getCollectionInfos@src/mongo/shell/db.js:817:19
DB.prototype.getCollectionNames@src/mongo/shell/db.js:828:16
shellHelper.show@src/mongo/shell/utils.js:748:9
shellHelper@src/mongo/shell/utils.js:645:15
@(shellhelp2):1:1

db.customers.findOne()
2016-10-21T14:40:19.477+0100 E QUERY    [main] Error: error: {
    "ok" : 0,
    "errmsg" : "not authorized on production to execute command { find: \"customers\", filter: {}, limit: 1.0, singleBatch: true }",
    "code" : 13,
    "codeName" : "Unauthorized"
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DBCommandCursor@src/mongo/shell/query.js:702:1
DBQuery.prototype._exec@src/mongo/shell/query.js:117:28
DBQuery.prototype.hasNext@src/mongo/shell/query.js:288:5
DBCollection.prototype.findOne@src/mongo/shell/collection.js:294:10
@(shell):1:1

db.germanSpeakersRedacted.findOne()
{
    "_id" : ObjectId("57fb8fbb99b01440193088eb"),
    "first_name" : "Ben",
    "last_name" : "Dixon",
    "avatar" : "https://robohash.org/quiseumquam.bmp?size=50x50&set=set1",
    "ip_address" : "10.102.15.35",
    "dependents" : [
        {
            "name" : "Ben",
            "birthday" : "12-Apr-1994"
        },
        {
            "name" : "Lucas",
            "birthday" : "22-Jun-2016"
        },
        {
            "name" : "Erik",
            "birthday" : "05-Jul-2005"
        }
    ],
    "birthday" : "02-Jul-1964",
    "skills" : [
        {
            "skill" : "Cvent"
        },
        {
            "skill" : "TKI"
        }
    ],
    "gender" : "Male"
}

MongoDB Compass – Viewing Views Graphically

While the mongo shell is very powerful and flexible, it is often easier to understand and navigate your data graphically; this is where MongoDB Compass is invaluable. The good news is that MongoDB Compass handles views in exactly the same way as it does collections.

In Figure 1, we can view the documents in the base customers collection. Note that the salary value is visible.

View data in MongoDB base customers collection

Figure 1: View data in base customers collection

Figure 2 confirms that the salary field has been removed from the German view.

Salary has been redacted from the German view

Figure 2: Salary has been redacted from the German view

In Figure 3, we see that only Chinese documents have been included in the Chinese view.

Chinese view contains only Chinese documents

Figure 3: Chinese view contains only Chinese documents

Next Steps

Collation and read-only views are just two of the exciting new features added in MongoDB 3.4 – read about these and everything else that’s new in MongoDB 3.4: What’s New.





Webinar Replay (EMEA) – Data Streaming with Apache Kafka & MongoDB

MongoDB with Apache Kafka

The replay from the MongoDB/Apache Kafka webinar that I co-presented with David Tucker from Confluent earlier this week is now available: Data Streaming with Apache Kafka & MongoDB.

Abstract

A new generation of technologies is needed to consume and exploit today’s real time, fast moving data sources. Apache Kafka, originally developed at LinkedIn, has emerged as one of these key new technologies.

This webinar explores the use-cases and architecture for Kafka, and how it integrates with MongoDB to build sophisticated data-driven applications that exploit new sources of data.

Watch the webinar to learn:

  • What MongoDB is and where it’s used
  • What data streaming is and where it fits into modern data architectures
  • How Kafka works, what it delivers, and where it’s used
  • How to operationalize the Data Lake with MongoDB & Kafka
  • How MongoDB integrates with Kafka – both as a producer and a consumer of event data

Slides





Processing Data Streams with Amazon Kinesis and MongoDB Atlas

This post provides an introduction to Amazon Kinesis: its architecture, what it provides, and how it’s typically used. It goes on to step through how to implement an application where data is ingested by Amazon Kinesis before being processed and then stored in MongoDB Atlas.

This is part of a series of posts which examine how to use MongoDB Atlas with a number of complementary technologies and frameworks.

Introduction to Amazon Kinesis

The role of Amazon Kinesis is to get large volumes of streaming data into AWS where it can then be processed, analyzed, and moved between AWS services. The service is designed to ingest and store terabytes of data every hour, from multiple sources. Kinesis provides high availability, including synchronous replication within an AWS region. It also transparently handles scalability, adding and removing resources as needed.

Once the data is inside AWS, it can be processed or analyzed immediately, as well as being stored using other AWS services (such as S3) for later use. By storing the data in MongoDB, it can be used both to drive real-time, operational decisions as well as for deeper analysis.

As the number, variety, and velocity of data sources grow, new architectures and technologies are needed. Technologies like Amazon Kinesis and Apache Kafka are focused on ingesting the massive flow of data from multiple fire hoses and then routing it to the systems that need it – optionally filtering, aggregating, and analyzing en-route.

AWS Kinesis Architecture

Figure 1: AWS Kinesis Architecture

Typical data sources include:

  • IoT assets and devices (e.g., sensor readings)
  • On-line purchases from an ecommerce store
  • Log files
  • Video game activity
  • Social media posts
  • Financial market data feeds

Rather than leave this data to fester in text files, Kinesis can ingest the data, allowing it to be processed to find patterns, detect exceptions, drive operational actions, and provide aggregations to be displayed through dashboards.

There are actually 3 services which make up Amazon Kinesis:

  • Amazon Kinesis Firehose is the simplest way to load massive volumes of streaming data into AWS. The capacity of your Firehose is adjusted automatically to keep pace with the stream throughput. It can optionally compress and encrypt the data before it’s stored.
  • Amazon Kinesis Streams are similar to the Firehose service but give you more control, allowing for:
    • Multi-stage processing
    • Custom stream partitioning rules
    • Reliable storage of the stream data until it has been processed.
  • Amazon Kinesis Analytics is the simplest way to process the data once it has been ingested by either Kinesis Firehose or Streams. The user provides SQL queries which are then applied to analyze the data; the results can then be displayed, stored, or sent to another Kinesis stream for further processing.

This post focuses on Amazon Kinesis Streams, in particular, implementing a consumer that ingests the data, enriches it, and then stores it in MongoDB.

Accessing Kinesis Streams – the Libraries

There are multiple ways to read (consume) and write (produce) data with Kinesis Streams:

  • Amazon Kinesis Streams API
  • Amazon Kinesis Producer Library (KPL)
    • An easy-to-use and highly configurable Java library that helps you put data into an Amazon Kinesis stream. The KPL presents a simple, asynchronous, high-throughput, and reliable interface.
  • Amazon Kinesis Agent
    • The agent continuously monitors a set of files and sends new entries to your Stream or Firehose.
  • Amazon Kinesis Client Library (KCL)
    • A Java library that helps you easily build Amazon Kinesis Applications for reading and processing data from an Amazon Kinesis stream. KCL handles issues such as adapting to changes in stream volume, load-balancing streaming data, coordinating distributed services, providing fault-tolerance, and processing data.
  • Amazon Kinesis Client Library MultiLangDaemon
    • The MultiLangDaemon is used as a proxy by non-Java applications to use the Kinesis Client Library.
  • Amazon Kinesis Connector Library
    • A library that helps you easily integrate Amazon Kinesis with other AWS services and third-party tools.
  • Amazon Kinesis Storm Spout
    • A library that helps you easily integrate Amazon Kinesis Streams with Apache Storm.

The example application in this post uses the Kinesis Agent and the Kinesis Client Library MultiLangDaemon (with Node.js).

Role of MongoDB Atlas

MongoDB is a distributed database delivering a flexible schema for rapid application development, rich queries, idiomatic drivers, and built in redundancy and scale-out. This makes it the go-to database for anyone looking to build modern applications.

MongoDB Atlas is a hosted database service for MongoDB. It provides all of the features of MongoDB, without the operational heavy lifting required for any new application. MongoDB Atlas is available on demand through a pay-as-you-go model and billed on an hourly basis, letting you focus on what you do best.

It’s easy to get started – use a simple GUI to select the instance size, region, and features you need. MongoDB Atlas provides:

  • Security features to protect access to your data
  • Built in replication for always-on availability, tolerating complete data center failure
  • Backups and point in time recovery to protect against data corruption
  • Fine-grained monitoring to let you know when to scale. Additional instances can be provisioned with the push of a button
  • Automated patching and one-click upgrades for new major versions of the database, enabling you to take advantage of the latest and greatest MongoDB features
  • A choice of regions and billing options

Like Amazon Kinesis, MongoDB Atlas is a natural fit for users looking to simplify their development and operations work, letting them focus on what makes their application unique rather than commodity (albeit essential) plumbing. Also like Kinesis, you only pay for MongoDB Atlas when you’re using it with no upfront costs and no charges after you terminate your cluster.

Example Application

The rest of this post focuses on building a system to process log data. There are 2 sources of log data:

  1. A simple client that acts as a Kinesis Streams producer, generating sensor readings and writing them to a stream
  2. Amazon Kinesis Agent monitoring a SYSLOG file and sending each log event to a stream

In both cases, the data is consumed from the stream using the same consumer, which adds some metadata to each entry and then stores it in MongoDB Atlas.

Create Kinesis IAM Policy in AWS

From the IAM section of the AWS console use the wizard to create a new policy. The policy should grant permission to perform specific actions on a particular stream (in this case “ClusterDBStream”) and the results should look similar to this:
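The exact policy isn’t reproduced here, but a rough sketch of one granting access to the “ClusterDBStream” stream (plus the DynamoDB table that the Kinesis Client Library creates for checkpointing) might look like this; treat the actions and resource ARNs as illustrative:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["kinesis:*"],
      "Resource": ["arn:aws:kinesis:*:*:stream/ClusterDBStream"]
    },
    {
      "Effect": "Allow",
      "Action": ["dynamodb:*"],
      "Resource": ["arn:aws:dynamodb:*:*:table/*"]
    }
  ]
}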

Next, create a new user and associate it with the new policy. Important: Take a note of the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.

Create MongoDB Atlas Cluster

Register with MongoDB Atlas and use the simple GUI to select the instance size, region, and features you need (Figure 2).

create mongodb atlas cluster

Create a user with read and write privileges for just the database that will be used for your application, as shown in Figure 3.

Creating an Application user in MongoDB Atlas

Figure 3: Creating an Application user in MongoDB Atlas

You must also add the IP address of your application server to the IP Whitelist in the MongoDB Atlas security tab (Figure 4). Note that if multiple application servers will be accessing MongoDB Atlas then an IP address range can be specified in CIDR format (IP Address/number of significant bits).

Add App Server IP Address(es) to MongoDB Atlas

Figure 4: Add App Server IP Address(es) to MongoDB Atlas

If your application server(s) are running in AWS, then an alternative to IP Whitelisting is to configure a VPC (Virtual Private Cloud) Peering relationship between your MongoDB Atlas group and the VPC containing your AWS resources. This removes the requirement to add and remove IP addresses as AWS reschedules functions, and is especially useful when using highly dynamic services such as AWS Lambda.

Click the “Connect” button and make a note of the URI that should be used when connecting to the database (note that you will substitute the user name and password with ones that you’ve just created).

App Part 1 – Kinesis/Atlas Consumer

The code and configuration files in Parts 1 & 2 are based on the sample consumer and producer included with the client library for Node.js (MultiLangDaemon).

Install the Node.js client library:

git clone https://github.com/awslabs/amazon-kinesis-client-nodejs.git
cd amazon-kinesis-client-nodejs
npm install

Install the MongoDB Node.js Driver:

npm install --save mongodb

Move to the consumer sample folder:

cd samples/basic_sample/consumer/

Create a configuration file (“logging_consumer.properties”), taking care to set the correct stream and application names and AWS region:
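The original file isn’t reproduced here; a minimal sketch, modelled on the sample properties file shipped with the amazon-kinesis-client-nodejs samples (the values are illustrative), could look like this:

# The consumer application to be run by the MultiLangDaemon
executableName = node logging_consumer_app.js
# The Kinesis stream to consume and a name for this KCL application
streamName = ClusterDBStream
applicationName = LoggingConsumer
# Pick up AWS credentials from the environment or instance profile
AWSCredentialsProvider = DefaultAWSCredentialsProviderChain
processingLanguage = nodejs
initialPositionInStream = TRIM_HORIZON
# Set this to the region hosting your stream
regionName = eu-west-1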

The code for working with MongoDB can be abstracted to a helper file (“db.js”):
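Again, only a minimal sketch is shown here (the helper function names are illustrative and error handling is trimmed), using the 2.x-era callback API of the MongoDB Node.js driver:

// db.js -- thin wrapper around the MongoDB Node.js driver
var MongoClient = require('mongodb').MongoClient;

var database = null;

// Connect once and cache the database handle
exports.connect = function(mongodbConnectString, callback) {
    MongoClient.connect(mongodbConnectString, function(err, db) {
        if (err) return callback(err);
        database = db;
        callback(null, db);
    });
};

// Insert a single document into the named collection
exports.insertDocument = function(collectionName, doc, callback) {
    database.collection(collectionName).insertOne(doc, callback);
};

exports.close = function() {
    if (database) database.close();
};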

Create the application Node.js file (“logging_consumer_app.js”), making sure to replace the database user and host details in “mongodbConnectString” with your own:
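The original file isn’t reproduced here; the following sketch shows the shape of a record processor built on the aws-kcl package (checkpointing and most error handling are omitted, and the “logs” collection name is an assumption):

// logging_consumer_app.js -- KCL record processor that stores records in MongoDB Atlas
var kcl = require('aws-kcl');
var db = require('./db.js');

// Replace the user, password, and host list with your own MongoDB Atlas URI
var mongodbConnectString =
    'mongodb://appuser:my_password@<atlas-host-list>/clusterdb?ssl=true&authSource=admin';

var recordProcessor = {
    initialize: function(initializeInput, completeCallback) {
        db.connect(mongodbConnectString, function(err) {
            completeCallback();
        });
    },

    processRecords: function(processRecordsInput, completeCallback) {
        if (!processRecordsInput || !processRecordsInput.records) {
            return completeCallback();
        }
        processRecordsInput.records.forEach(function(record) {
            // Record payloads arrive base64 encoded
            var entry = JSON.parse(Buffer.from(record.data, 'base64').toString());
            // Add some metadata before writing the document to MongoDB Atlas
            entry.metadata = {
                mongoLabel: 'Added by Kinesis consumer',
                timeAdded: new Date()
            };
            db.insertDocument('logs', entry, function(err) {});
        });
        completeCallback();
    },

    shutdown: function(shutdownInput, completeCallback) {
        db.close();
        completeCallback();
    }
};

kcl(recordProcessor).run();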

Note that this code adds some metadata to the received object before writing it to MongoDB. At this point, it is also possible to filter objects based on any of their fields.

Note also that this Node.js code logs a lot of information to the “application log” file (including the database password!); this is for debugging and would be removed from a real application.

The simplest way to have the application use the user credentials (noted when creating the user in AWS IAM) is to export them from the shell where the application will be launched:

export AWS_ACCESS_KEY_ID=????????????????????
export AWS_SECRET_ACCESS_KEY=????????????????????????????????????????

Finally, launch the consumer application:

../../../bin/kcl-bootstrap --java /usr/bin/java -e -p ./logging_consumer.properties

Check the “application.log” file for any errors.

App Part 2 – Kinesis Producer

As for the consumer, export the credentials for the user created in AWS IAM:

cd amazon-kinesis-client-nodejs/samples/basic_sample/producer

export AWS_ACCESS_KEY_ID=????????????????????
export AWS_SECRET_ACCESS_KEY=????????????????????????????????????????

Create the configuration file (“config.js”) and ensure that the correct AWS region and stream are specified:
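A minimal sketch (the property names are modelled on the aws-kcl sample and may differ from the original post’s file):

// config.js -- settings shared by the producer
module.exports = {
    kinesis: {
        region: 'eu-west-1'            // the region hosting your stream
    },
    loggingProducer: {
        stream: 'ClusterDBStream'      // must match the stream named in your IAM policy
    }
};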

Create the producer code (“logging_producer.js”):
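The original producer isn’t reproduced here; a simplified stand-in that writes synthetic sensor readings to the stream using the AWS SDK could look like this (the reading fields are illustrative):

// logging_producer.js -- write synthetic sensor readings to the Kinesis stream
var AWS = require('aws-sdk');
var config = require('./config.js');

var kinesis = new AWS.Kinesis({region: config.kinesis.region});

function putReading() {
    var reading = {
        program: 'sensor-simulator',
        sensorId: Math.floor(Math.random() * 10),
        temperature: 15 + Math.random() * 10,
        timestamp: new Date().toISOString()
    };

    kinesis.putRecord({
        StreamName: config.loggingProducer.stream,
        PartitionKey: String(reading.sensorId),
        Data: JSON.stringify(reading)
    }, function(err) {
        if (err) console.error(err);
    });
}

// Emit one reading per second
exports.run = function() {
    setInterval(putReading, 1000);
};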

The producer is launched from “logging_producer_app.js”:
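In the sketch above, the launcher simply loads the producer module and starts it:

// logging_producer_app.js
require('./logging_producer.js').run();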

Run the producer:

node logging_producer_app.js

Check the consumer and producer “application.log” files for errors.

At this point, data should have been written to MongoDB Atlas. Using the connection string provided after clicking the “Connect” button in MongoDB Atlas, connect to the database and confirm that the documents have been added:
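For example, from the mongo shell (the “logs” collection name matches the consumer sketch above and is an assumption):

mongo --ssl --host <your-atlas-primary-host> --port 27017 \
      -u appuser -p my_password --authenticationDatabase admin

use clusterdb
db.logs.find().sort({_id: -1}).limit(1).pretty()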

App Part 3 – Capturing Live Logs Using Amazon Kinesis Agent

Using the same consumer, the next step is to stream real log data. Fortunately, this doesn’t require any additional code as the Kinesis Agent can be used to monitor files and add every new entry to a Kinesis Stream (or Firehose).

Install the Kinesis Agent:

sudo yum install -y aws-kinesis-agent

and edit the configuration file to use the correct AWS region, user credentials, and stream in “/etc/aws-kinesis/agent.json”:
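A sketch of the agent configuration follows; the endpoint and credentials are placeholders to replace with your own:

{
  "cloudwatch.emitMetrics": true,
  "kinesis.endpoint": "kinesis.eu-west-1.amazonaws.com",
  "awsAccessKeyId": "????????????????????",
  "awsSecretAccessKey": "????????????????????????????????????????",
  "flows": [
    {
      "filePattern": "/var/log/messages",
      "kinesisStream": "ClusterDBStream",
      "dataProcessingOptions": [
        {
          "optionName": "LOGTOJSON",
          "logFormat": "SYSLOG"
        }
      ]
    }
  ]
}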

“/var/log/messages” is a SYSLOG file and so a “dataProcessingOptions” field is included in the configuration to automatically convert each log into a JSON document before writing it to the Kinesis Stream.

The agent will not run as root and so the permissions for “/var/log/messages” need to be made more permissive:

sudo chmod og+r /var/log/messages

The agent can now be started:

sudo service aws-kinesis-agent start

Monitor the agent’s log file to see what it’s doing:

sudo tail -f /var/log/aws-kinesis-agent/aws-kinesis-agent.log

If there aren’t enough logs being generated on the machine then extra ones can be injected manually for testing:

logger -i This is a test log

This will create a log with the “program” field set to your username (in this case, “ec2-user”). Check that the logs get added to MongoDB Atlas:

Checking the Data with MongoDB Compass

To visually navigate through the MongoDB schema and data, download and install MongoDB Compass. Use your MongoDB Atlas credentials to connect Compass to your MongoDB database (the hostname should refer to the primary node in your replica set or a “mongos” process if your MongoDB cluster is sharded).

Navigate through the structure of the data in the “clusterdb” database (Figure 5) and view the JSON documents.

Explore Schema Using MongoDB Compass

Figure 5: Explore Schema Using MongoDB Compass

Clicking on a value builds a query and then clicking “Apply” filters the results (Figure 6).

View Filtered Documents in MongoDB Compass

Figure 6: View Filtered Documents in MongoDB Compass

Add Document Validation Rules

One of MongoDB’s primary attractions for developers is that it gives them the ability to start application development without first needing to define a formal schema. Operations teams appreciate the fact that they don’t need to perform a time-consuming schema upgrade operation every time the developers need to store a different attribute.

This is well suited to the application built in this post as logs from different sources are likely to include different attributes. There are, however, some attributes that we always expect to be present (e.g., the metadata that the application is adding). For applications reading the documents from this collection to be able to rely on those fields being present, the documents should be validated before they are written to the database. Prior to MongoDB 3.2, those checks had to be implemented in the application but they can now be performed by the database itself.

Executing a single command from the “mongo” shell adds the document checks:
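The exact command isn’t reproduced here, but a sketch implementing the checks listed below might look like this (the “logs” collection name is an assumption; $type 2 is a string and $type 9 is a date):

use clusterdb
db.runCommand({
    collMod: "logs",
    validator: {
        $and: [
            {program: {$exists: true, $type: 2}},    // "program" must exist and be a string
            {"metadata.mongoLabel": {$type: 2}},     // string
            {"metadata.timeAdded": {$type: 9}}       // date
        ]
    },
    validationLevel: "strict",
    validationAction: "error"
})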

The above command adds multiple checks:

  • The “program” field exists and contains a string
  • There’s a sub-document called “metadata” containing at least 2 fields:
    • “mongoLabel” which must be a string
    • “timeAdded” which must be a date

Test that the rules are correctly applied when attempting to write to the database:
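For example, an insert that breaks the rules should be rejected with a document validation error; this sketch again assumes the “logs” collection:

db.logs.insert({
    program: 42,                                            // not a string, so validation fails
    metadata: {mongoLabel: "Test", timeAdded: new Date()}
})
// Expected: a WriteResult reporting "Document failed validation" (error code 121)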

Cleaning Up (IMPORTANT!)

Remember that you will continue to be charged for the services even when you’re no longer actively using them. If you no longer need to use the services then clean up:

  • From the MongoDB Atlas GUI, select your Cluster, click on the ellipses and select “Terminate”.
  • From the AWS management console select the Kinesis service, then Kinesis Streams, and then delete your stream.
  • From the AWS management console select the DynamoDB service, then tables, and then delete your table.

Using MongoDB Atlas with Other Frameworks and Services

We have detailed walkthroughs for using MongoDB Atlas with several programming languages and frameworks, as well as generic instructions that can be used with others. They can all be found in Using MongoDB Atlas From Your Favorite Language or Framework.





Building Microservices with MongoDB, Docker, Kubernetes & Kafka

Building Microservices with Docker, Kubernetes, Kafka & MongoDB

As part of MongoDB Europe on 15th November, I’ll be presenting on Microservices and some of the key technologies that enable them. Tickets are still available and the discount code andrewmorgan20 saves you 20% – register here.

Session Abstract

Organisations are building their applications around microservice architectures because of the flexibility, speed of delivery, and maintainability they deliver.

Want to try out MongoDB on your laptop? Execute a single command and you have a lightweight, self-contained sandbox; another command removes all trace when you’re done. Need an identical copy of your application stack in multiple environments? Build your own container image and then your entire development, test, operations, and support teams can launch an identical clone environment.

Containers are revolutionising the entire software lifecycle: from the earliest technical experiments and proofs of concept through development, test, deployment, and support. Orchestration tools manage how multiple containers are created, upgraded and made highly available. Orchestration also controls how containers are connected to build sophisticated applications from multiple, microservice containers.

This session introduces you to technologies such as Docker, Kubernetes & Kafka which are driving the microservices revolution. Learn about containers and orchestration – and most importantly how to exploit them for stateful services such as MongoDB.





The rise of microservices – containers and orchestration

Earlier this week, I presented on microservices at MongoDB’s Big Data event in Frankfurt. You can view the slides here.


Abstract

Organisations are building their applications around microservice architectures because of the flexibility, speed of delivery, and maintainability they deliver. In this session, the concepts behind containers and orchestration will be explained and how to use them with MongoDB.





Webinar Replay: Data Streaming with Apache Kafka & MongoDB

I recently co-presented a webinar with David Tucker from Confluent.

The replay is now available: Data Streaming with Apache Kafka & MongoDB.

Abstract

A new generation of technologies is needed to consume and exploit today’s real time, fast moving data sources. Apache Kafka, originally developed at LinkedIn, has emerged as one of these key new technologies.

This webinar explores the use-cases and architecture for Kafka, and how it integrates with MongoDB to build sophisticated data-driven applications that exploit new sources of data.

Watch the webinar to learn:

  • What MongoDB is and where it’s used
  • What data streaming is and where it fits into modern data architectures
  • How Kafka works, what it delivers, and where it’s used
  • How to operationalize the Data Lake with MongoDB & Kafka
  • How MongoDB integrates with Kafka – both as a producer and a consumer of event data

Slides





Using MongoDB Atlas From Your Favorite Language or Framework

Developers love working with MongoDB. One reason is the flexible data model, another is that there’s an idiomatic driver for just about every programming language and someone’s probably already built a framework on top of MongoDB that takes care of a lot of the grunt work. With high availability and scaling built in, they can also be confident that MongoDB will continue to meet their needs as their business grows.

MongoDB Atlas provides all of the features of MongoDB, without the operational heavy lifting required for any new application. MongoDB Atlas is available on demand through a pay-as-you-go model and billed on an hourly basis, letting you focus on what you do best.

It’s easy to get started – use a simple GUI to select the instance size, region, and features you need (Figure 1).

Create MongoDB Atlas Cluster

Figure 1: Create MongoDB Atlas Cluster

MongoDB Atlas provides:

  • Security features to protect access to your data
  • Built in replication for always-on availability, tolerating complete data center failure
  • Backups and point in time recovery to protect against data corruption
  • Fine-grained monitoring to let you know when to scale. Additional instances can be provisioned with the push of a button
  • Automated patching and one-click upgrades for new major versions of the database, enabling you to take advantage of the latest and greatest MongoDB features
  • A choice of cloud providers, regions, and billing options

This post provides instructions on how to use MongoDB Atlas directly from your application or how to configure your favorite framework to use it. It goes on to provide links to some worked examples for specific frameworks.

Worked Examples for Specific Frameworks

Detailed walkthroughs are available for specific programming languages and frameworks:

This list will be extended as new blog posts are produced. If your preferred language or framework isn’t listed above then read on, as the following generic instructions cover most other cases.

Preparing MongoDB Atlas For Your Application

Launch your MongoDB cluster using MongoDB Atlas and then (optionally) create a user with read and write privileges for just the database that will be used for your application, as shown in Figure 2.

Creating an Application user in MongoDB Atlas

Figure 2: Creating an Application user in MongoDB Atlas

You must also add the IP address of your application server to the IP Whitelist in the MongoDB Atlas security tab (Figure 3). Note that if multiple application servers will be accessing MongoDB Atlas then an IP address range can be specified in CIDR format (IP Address/number of significant bits).

Add App Server IP Address(es) to MongoDB Atlas

Figure 3: Add App Server IP Address(es) to MongoDB Atlas

Connecting Your Application (Framework) to MongoDB Atlas

The exact way that you specify how to connect to MongoDB Atlas will vary depending on your programming language and (optionally) the framework you’re using. However it’s pretty universal that you’ll need to provide a connection string/URI. The core of this URI can be retrieved by clicking on the CONNECT button for your cluster in the MongoDB Atlas GUI, selecting the MongoDB Drivers tab and then copying the string (Figure 4).

Copy MongoDB Atlas Connection String/URI

Figure 4: Copy MongoDB Atlas Connection String/URI

Note that this URI contains the administrator username for your MongoDB Atlas group and will connect to the admin database – you’ll probably want to change that.

Your final URI should look something like this:

mongodb://appuser:my_password@cluster0-shard-00-00-qfovx.mongodb.net:27017,cluster0-shard-00-01-qfovx.mongodb.net:27017,cluster0-shard-00-02-qfovx.mongodb.net:27017/appdatabase?ssl=true&authSource=admin

The URI contains these components:

  • appuser is the name of the user you created in the MongoDB Atlas UI.
  • my_password is the password you chose when creating the user in MongoDB Atlas.
  • cluster0-shard-00-00-qfovx.mongodb.net, cluster0-shard-00-01-qfovx.mongodb.net, & cluster0-shard-00-02-qfovx.mongodb.net are the hostnames of the instances in your MongoDB Atlas replica set (click on the “CONNECT” button in the MongoDB Atlas UI if you don’t have these).
  • 27017 is the standard MongoDB port number.
  • appdatabase is the name of the database (schema) that your application or framework will use. Note that for some frameworks, this should be omitted and the database name configured separately – check the default configuration file or documentation for your framework to see if it’s possible to provide the database name outside of the URI.
  • To enforce security, MongoDB Atlas mandates that the ssl option is used.
  • admin is the database that’s being used to store the credentials for appuser.

Check Your Application Data

At this point, you should add some test data through your application and then confirm that it’s being correctly stored in MongoDB Atlas.

MongoDB Compass is the GUI for MongoDB, allowing you to visually explore your data and interact with your data with full CRUD functionality. The same credentials can be used to connect Compass to your MongoDB database (Figure 5).

Connect MongoDB Compass to MongoDB Atlas

Figure 5: Connect MongoDB Compass to MongoDB Atlas

Once connected, explore the data added to your collections (Figure 6).

Explore MongoDB Atlas Data Using MongoDB Compass

Figure 6: Explore MongoDB Atlas Data Using MongoDB Compass

It is also possible to add, delete, and modify documents (Figure 7).

Modify a Document in MongoDB Compass

Figure 7: Modify a Document in MongoDB Compass

You can verify that the document has really been updated from the MongoDB shell:

Cluster0-shard-0:PRIMARY> use appdatabase
Cluster0-shard-0:PRIMARY> db.simples.find({
    first_name: "Stephanie", 
    last_name: "Green"}).pretty()
{
    "_id" : ObjectId("57a206be0e8ecb0d5b5549f9"),
    "first_name" : "Stephanie",
    "last_name" : "Green",
    "email" : "sgreen1b@tiny.cc",
    "gender" : "Female",
    "ip_address" : "129.173.45.61",
    "children" : [
        {
            "first_name" : "Eugene",
            "birthday" : "8/25/1985"
        },
        {
            "first_name" : "Nicole",
            "birthday" : "12/29/1963",
            "favoriteColor" : "Yellow"
        }
    ]
}

Migrating Your Data to MongoDB Atlas

This post has assumed that you’re building a new application but what if you already have one, with data stored in a MongoDB cluster that you’re managing yourself? Fortunately, the process to migrate your data to MongoDB Atlas (and back out again if desired) is straightforward and is described in Migrating Data to MongoDB Atlas.

We offer a MongoDB Atlas Migration service to help you properly configure MongoDB Atlas and develop a migration plan. This is especially helpful if you need to minimize downtime for your application, if you have a complex sharded deployment, or if you want to revise your deployment architecture as part of the migration. Contact us to learn more about the MongoDB Atlas Migration service.

Next Steps

While MongoDB Atlas radically simplifies the operation of MongoDB there are still some decisions to take to ensure the best performance and reliability for your application. The MongoDB Atlas Best Practices white paper provides guidance on best practices for deploying, managing, and optimizing the performance of your database with MongoDB Atlas.

The guide outlines considerations for achieving performance at scale with MongoDB Atlas across a number of key dimensions, including instance size selection, application patterns, schema design and indexing, and disk I/O. While this guide is broad in scope, it is not exhaustive. Following the recommendations in the guide will provide a solid foundation for ensuring optimal application performance.





Configuring KeystoneJS to Use MongoDB Atlas

KeystoneJS is an open source framework for building web applications and Content Management Systems. It’s built on top of MongoDB, Express, and Node.js – key components of the ubiquitous MEAN stack.

This post explains why MongoDB Atlas is an ideal choice for KeystoneJS and then goes on to show how to configure KeystoneJS to use it.

Why are KeystoneJS and MongoDB Atlas a Good Match?

The MEAN stack is extremely popular and well supported and it’s the go-to platform when developing modern applications. For its part, MongoDB brings flexible schemas, rich queries, an idiomatic Node.js driver, and simple-to-use high availability and scaling.

MongoDB Atlas provides all of the features of MongoDB, without the operational heavy lifting required for any new application. MongoDB Atlas is available on demand through a pay-as-you-go model and billed on an hourly basis, letting you focus on what you do best.

It’s easy to get started – use a simple GUI to select the instance size, region, and features you need. MongoDB Atlas provides:

  • Security features to protect access to your data
  • Built in replication for always-on availability, tolerating complete data center failure
  • Backups and point in time recovery to protect against data corruption
  • Fine-grained monitoring to let you know when to scale. Additional instances can be provisioned with the push of a button
  • Automated patching and one-click upgrades for new major versions of the database, enabling you to take advantage of the latest and greatest MongoDB features
  • A choice of cloud providers, regions, and billing options

Like KeystoneJS, MongoDB Atlas is a natural fit for users looking to simplify their development and operations work, letting them focus on what makes their application unique rather than commodity (albeit essential) plumbing.

Installing KeystoneJS and Configuring it to Use MongoDB Atlas

Before starting with KeystoneJS, you should launch your MongoDB cluster using MongoDB Atlas and then (optionally) create a user with read and write privileges for just the database that will be used for this project, as shown in Figure 1. You must also add the IP address of your application server to the IP Whitelist in the MongoDB Atlas security tab.

Creating KeystoneJS user in MongoDB Atlas

Figure 1: Creating KeystoneJS user in MongoDB Atlas

If it isn’t already installed on your system, download and install Node.js:


You should then add the bin sub-folder to your .bash_profile file and then install KeystoneJS:
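One way to do this (the PATH location and the use of the Yeoman generator are assumptions based on the KeystoneJS getting-started guide of the time):

# Make the Node.js bin directory available in new shells (adjust the path as needed)
echo 'export PATH=/usr/local/bin:$PATH' >> ~/.bash_profile
source ~/.bash_profile

# Install the KeystoneJS Yeoman generator and create a new project
npm install -g yo generator-keystone
mkdir clusterdb && cd clusterdb
yo keystone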

Before starting KeystoneJS you need to configure it with details on how to connect to your specific MongoDB Atlas cluster. This is done by updating the MONGO_URI value within the .env file:
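Based on the components described below, the MONGO_URI line in the .env file should end up looking something like this (substituting your own user, password, and host names):

MONGO_URI=mongodb://keystonejs_user:my_password@cluster0-shard-00-00-qfovx.mongodb.net:27017,cluster0-shard-00-01-qfovx.mongodb.net:27017,cluster0-shard-00-02-qfovx.mongodb.net:27017/clusterdb?ssl=true&authSource=admin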

The URI contains these components:

  • keystonejs_user is the name of the user you created in the MongoDB Atlas UI
  • my_password is the password you chose when creating the user in MongoDB Atlas
  • cluster0-shard-00-00-qfovx.mongodb.net, cluster0-shard-00-01-qfovx.mongodb.net, & cluster0-shard-00-02-qfovx.mongodb.net are the hostnames of the instances in your MongoDB Atlas replica set (click on the “CONNECT” button in the MongoDB Atlas UI if you don’t have these)
  • 27017 is the standard MongoDB port number
  • clusterdb is the name of the database (schema) that KeystoneJS will use (note that this must match the project name used when installing KeystoneJS as well as the database you granted the user access to)
  • To enforce security, MongoDB Atlas mandates that the ssl option is used
  • admin is the database that’s being used to store the credentials for keystonejs_user

Clients connect to KeystoneJS through port 3000 and so you must open that port in your firewall.

You can then start KeystoneJS:

$ node keystone

Testing the Configuration

Browse to the application at http://address-of-app-server:3000 as shown in Figure 2.

KeystoneJS Running on MongoDB Atlas

Figure 2: KeystoneJS Running on MongoDB Atlas

Sign in using the credentials shown and then confirm that you can upload some images to a gallery and create a new page as shown in Figure 3.

Create a Page in KeystoneJS with Data Stored in MongoDB Atlas

Figure 3: Create a Page in KeystoneJS with Data Stored in MongoDB Atlas

After saving the page, confirm that you can browse to the newly created post (Figure 4).

View KeystoneJS Post with Data Read from MongoDB Atlas

Figure 4: View KeystoneJS Post with Data Read from MongoDB Atlas

Optionally, to confirm that MongoDB Atlas really is being used by KeystoneJS, you can connect using the MongoDB shell:
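As a sketch, you could connect to one of the cluster members with the same user and list the collections that KeystoneJS has created (the hostnames and credentials are the ones described earlier):

$ mongo --ssl --host cluster0-shard-00-00-qfovx.mongodb.net --port 27017 \
    -u keystonejs_user -p my_password --authenticationDatabase admin clusterdb
> rs.slaveOk()     // allow reads in case this member is currently a secondary
> show collections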

To visually navigate through the schema and data created by KeystoneJS, download and install MongoDB Compass. The same credentials can be used to connect Compass to your MongoDB database (Figure 5).

Connect MongoDB Compass to MongoDB Atlas Database

Figure 5: Connect MongoDB Compass to MongoDB Atlas Database

Navigate through the structure of the data in the clusterdb database (Figure 6) and view the JSON documents (Figure 7).

Explore KeystoneJS Schema Using MongoDB Compass

Figure 6: Explore KeystoneJS Schema Using MongoDB Compass

View Documents Stored by KeystoneJS Using MongoDB Atlas

Figure 7: View Documents Stored by KeystoneJS Using MongoDB Atlas

Next Steps

While MongoDB Atlas radically simplifies the operation of MongoDB, there are still some decisions to take to ensure the best performance and reliability for your application. The MongoDB Atlas Best Practices white paper provides guidance on best practices for deploying, managing, and optimizing the performance of your database with MongoDB Atlas.

The guide outlines considerations for achieving performance at scale with MongoDB Atlas across a number of key dimensions, including instance size selection, application patterns, schema design and indexing, and disk I/O. While this guide is broad in scope, it is not exhaustive. Following the recommendations in the guide will provide a solid foundation for ensuring optimal application performance.





Migrating Data to MongoDB Atlas

MongoDB Atlas was announced at this year’s MongoDB World. It’s great not just for new applications, but also your existing MongoDB databases running on other platforms. This post will focus on how you migrate your data and applications over to MongoDB Atlas.

What is MongoDB Atlas?

MongoDB Atlas provides all of the features of MongoDB, without the operational heavy lifting required for any new application. MongoDB Atlas is available on demand through a pay-as-you-go model and billed on an hourly basis, letting you focus on what you do best.

It’s easy to get started – use a simple GUI to select the instance size, region, and features you need. MongoDB Atlas provides:

  • Security features to protect access to your data
  • Built-in replication for always-on availability, tolerating complete data center failure
  • Backups and point-in-time recovery to protect against data corruption
  • Fine-grained monitoring to let you know when to scale. Additional instances can be provisioned with the push of a button
  • Automated patching and one-click upgrades for new major versions of the database, enabling you to take advantage of the latest and greatest MongoDB features
  • A choice of cloud providers, regions, and billing options

But what if you already have application data held in your own on-prem or cloud-based MongoDB database – is it possible to safely migrate that data to MongoDB Atlas? What if your data is held in a 3rd party hosted MongoDB service such as Compose or mLab? Conversely, is it possible to build your application against MongoDB Atlas and then move the data to a MongoDB database running on another platform in the future?

The answer to all of those questions is “yes”. In the future you should expect this to be a highly automated process, but right now it involves some manual steps – the purpose of this blog post is to describe that process.

Moving Your Application Data to MongoDB Atlas

The procedure is very straightforward, but if you can’t tolerate losing any of your updates then it does involve stopping application writes for a period. That means it’s vital that you prepare in advance in order to minimize the impact.

Pre-Migration Checklist

  • How long will writes need to be stopped? Perform a dry run of the mongodump & mongorestore steps without stopping application writes to answer this (see the sketch after this list).
  • When will the stopping of writes have the smallest impact?
  • What can you change in the application to minimize the impact, e.g. provide a read-only version of the service when it isn’t possible to write to the database?
  • Will you warn users of planned maintenance ahead of time?
  • Do you have sufficient storage space to store the dumped data on the machine where you plan to run mongodump?
  • Once the data has been migrated to MongoDB Atlas, the application will need to switch its database connections to the new address; identify how this will be done.
  • List the IP Addresses of all the machines that will need to connect to MongoDB Atlas – this includes your application nodes as well as the machine where mongorestore will be run. These will need to be added to your MongoDB Atlas group’s whitelist.
  • Decide what MongoDB Atlas instance size to use and, if necessary, how many shards will be needed.
  • Decide on which region to use, e.g., co-locating the MongoDB Atlas instances with your cloud-based application servers.
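As a sketch of the dry run mentioned in the first item above, timing the dump and restore steps (against a test database in MongoDB Atlas) gives an estimate of how long writes will need to be stopped; the hostnames and credentials here are illustrative and match the commands used in the next section:

laptop> time mongodump --host=ec2-52-208-185-213.eu-west-1.compute.amazonaws.com \
    --port=27017
laptop> time mongorestore --ssl --host cluster0-shard-00-00-qfovx.mongodb.net \
    --port 27017 -u billy -p XXX dump

Remember to drop whatever data the dry run loads into MongoDB Atlas before performing the real migration.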

Execute the Migration

  • Create the MongoDB Atlas cluster.
  • Add the required IP Addresses to the whitelist in your group’s security tab.
  • Stop database writes to your existing database, either in your application logic or by locking writes on the original MongoDB deployment (note that db.fsyncLock() blocks writes for the entire mongod instance):
laptop> mongo --host=ec2-52-208-185-213.eu-west-1.compute.amazonaws.com \
    --eval "db.fsyncLock()"
  • Back up the data from the existing database (writes the data to a directory named dump):
laptop> mongodump --host=ec2-52-208-185-213.eu-west-1.compute.amazonaws.com \
    --port=27017
  • Write the data to MongoDB Atlas (using the connection information provided in the Web UI):
mongorestore --ssl --host cluster0-shard-00-00-qfovx.mongodb.net \
    --port 27017 -u billy -p XXX dump
  • Switch the application’s database connections over to your MongoDB Atlas instance.

Want more help? We offer a MongoDB Atlas Migration service to help you properly configure MongoDB Atlas and develop a migration plan. This is especially helpful if you need to minimize downtime for your application, if you have a complex sharded deployment, or if you want to revise your deployment architecture as part of the migration. Contact us to learn more about the MongoDB Atlas Migration service.

Moving Your Application Data Out of MongoDB Atlas

To migrate data out, you can download a MongoDB Atlas backup and then copy the contents to the receiving MongoDB cluster; the documentation describes how to load the data into the receiving replica set. The backup can be either a periodic snapshot or a point-in-time view of the MongoDB Atlas database. If you can’t tolerate lost writes, they must be stopped by the application (fsyncLock is not available in MongoDB Atlas).

Getting the Best Out of MongoDB Atlas

While MongoDB Atlas radically simplifies the operation of MongoDB, there are still some decisions to take to ensure the best performance and reliability for your application. The MongoDB Atlas Best Practices white paper provides guidance on best practices for deploying, managing, and optimizing the performance of your database with MongoDB Atlas.

The guide outlines considerations for achieving performance at scale with MongoDB Atlas across a number of key dimensions, including instance size selection, application patterns, schema design and indexing, and disk I/O. While this guide is broad in scope, it is not exhaustive. Following the recommendations in the guide will provide a solid foundation for ensuring optimal application performance.