Programmatic API Management of your MongoDB Atlas Database Clusters - Part II

In the blog post, "Programmatic API Management of your MongoDB Atlas Database Clusters", we introduced how to use curl to launch and edit a MongoDB Atlas cluster. The concepts in that post helped establish the groundwork for being able to launch MongoDB Atlas clusters with many of the popular DevOps automation tools available in the market.

We recommend reading the post linked above as a prerequisite to what we will cover here — some common use cases with the MongoDB Atlas API.

The GUI versus API

While point-and-click actions in the MongoDB Atlas GUI are easy, they do not necessarily scale well when rapid deployments or modifications are required or when you’re managing many clusters. You may also prefer to initiate new database build outs or changes using pre-existing automation/orchestration tools and processes. The MongoDB Atlas API permits you to implement your database infrastructure as code by enabling you to programmatically define and execute on new requirements.

This allows you to easily integrate Atlas as part of your DevOps automation process and grants better visibility into how your application (and its associated data layer) evolves over time by permitting git check-ins with a changelog.

What's available in the MongoDB Atlas API

Atlas exposes a REST API that includes the following features:

  • JSON entities - All entities are expressed in JSON.
  • Digest authentication - To ensure that your API key is never sent over the network, API requests are authenticated using HTTP Digest Authentication.
  • Browsable interface - Using a consistent linking mechanism, you can browse the entire API by starting at the root resource and following links to related resources.
  • Security - In addition to Digest Authentication, the API is only accessible via HTTPS, and certain calls requiring even more security are protected by user-defined whitelists. Further, an API user’s capabilities are restricted by their assigned role(s). For example, a user with the Read Only role within a particular group is not allowed to modify any resources within that group.
  • The Atlas API provides the following resources:

    Root - The starting point for the Atlas API.
    Group IP Whitelist - Retrieves and edits the IP whitelist, which controls client access to your MongoDB clusters.
    Clusters - Provides access to your MongoDB cluster configuration.
    Database Users - Retrieves and edits the MongoDB users who have access to your MongoDB clusters.
    Alerts - Retrieves and acknowledges alerts.
    Alert Configurations - Retrieves and edits alert configurations, which define the conditions that trigger alerts and the methods of notification.


    As you can see, the API allows us to modify a number of different dimensions of our clusters.

    Informational Queries

    First, click on your username in the upper right-hand corner of the browser window; here you'll find the "Account" dropdown.


    Once in the "Account" page, you'll find a section called "Public API Access". Follow the instructions on creating a key and whitelisting it from the aforementioned post. I have found if using bash that creating a variable for the token is helpful.

    Example:

    bash-3.2$ export TOKEN="exam-plest-ring-foryour-token"
    bash-3.2$ echo $TOKEN
    exam-plest-ring-foryour-token
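
    With the token stored in a variable, you can also confirm that Digest Authentication is working by browsing the API from its root resource, as mentioned above. This is a minimal sketch; the username is a placeholder, and the root URL is assumed to be the same base path used in the cluster calls below:

    bash-3.2$ curl -u "[email protected]:$TOKEN" --digest "https://cloud.mongodb.com/api/atlas/v1.0" | python -m json.tool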
    



    I can now use the Clusters resource to get information about my MongoDB Atlas cluster. I will need my account's "group ID" in order to accomplish this. To get this ID, you can just review the URL in your browser and grab the appropriate string:

    Here's mine:

    https://cloud.mongodb.com/v2/588b776f96e82110b163ed93#clusters

    Like the token we used before, let's go ahead and turn this into a variable as well:

    bash-3.2$ export GROUPID="588b776f96e82110b163ed93"
    bash-3.2$ echo $GROUPID
    588b776f96e82110b163ed93
    



    Now that the required information is configured, we can start querying details about our cluster (I used python -m json.tool to pretty-print the output):

    bash-3.2$ curl -X GET -u "[email protected]:$TOKEN" --digest "https://cloud.mongodb.com/api/atlas/v1.0/groups/$GROUPID/clusters" |  python -m json.tool
    
    {
        "links": [
            {
                "href": "https://cloud.mongodb.com/api/atlas/v1.0/groups/588b776f96e82110b163ed93/clusters?pageNum=1&itemsPerPage=100",
                "rel": "self"
            }
        ],
        "results": [
            {
                "backupEnabled": false,
                "diskSizeGB": 10.0,
                "groupId": "588b776f96e82110b163ed93",
                "links": [
                    {
                        "href": "https://cloud.mongodb.com/api/atlas/v1.0/groups/588b776f96e82110b163ed93/clusters/records",
                        "rel": "self"
                    }
                ],
                "mongoDBMajorVersion": "3.4",
                "mongoDBVersion": "3.4.6",
                "mongoURI": "mongodb://records-shard-00-00-x8fks.mongodb.net:27017,records-shard-00-01-x8fks.mongodb.net:27017,records-shard-00-02-x8fks.mongodb.net:27017",
                "mongoURIUpdated": "2017-04-12T18:33:00Z",
                "name": "records",
                "numShards": 1,
                "providerSettings": {
                    "diskIOPS": 100,
                    "encryptEBSVolume": false,
                    "instanceSizeName": "M10",
                    "providerName": "AWS",
                    "regionName": "US_EAST_1"
                },
                "replicationFactor": 3,
                "stateName": "IDLE"
            },
            {
                "backupEnabled": false,
                "diskSizeGB": 20.0,
                "groupId": "588b776f96e82110b163ed93",
                "links": [
                    {
                        "href": "https://cloud.mongodb.com/api/atlas/v1.0/groups/588b776f96e82110b163ed93/clusters/platespace-v2",
                        "rel": "self"
                    }
                ],
                "mongoDBMajorVersion": "3.4",
                "mongoDBVersion": "3.4.6",
                "mongoURI": "mongodb://platespace-v2-shard-00-00-x8fks.mongodb.net:27017,platespace-v2-shard-00-01-x8fks.mongodb.net:27017,platespace-v2-shard-00-02-x8fks.mongodb.net:27017",
                "mongoURIUpdated": "2017-06-18T18:03:53Z",
                "name": "platespace-v2",
                "numShards": 1,
                "providerSettings": {
                    "diskIOPS": 100,
                    "encryptEBSVolume": false,
                    "instanceSizeName": "M20",
                    "providerName": "AWS",
                    "regionName": "US_EAST_1"
                },
                "replicationFactor": 3,
                "stateName": "IDLE"
            }
        ],
        "totalCount": 2
    }
    



    We can see from above that I have two separate clusters currently running in this group. This exercise can be useful if you want information on all existing infrastructure because it's easy to run this query and export the data into a larger report. If you're running a number of different services with different providers, you can simplify the process of researching the specs by running an API call from something such as Jenkins.
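
    For example, here is a minimal sketch of such a report, assuming the jq utility is installed (any JSON-aware tool would work just as well); it prints one CSV row per cluster with its name, instance size, MongoDB version, and disk size:

    bash-3.2$ curl -s -u "[email protected]:$TOKEN" --digest "https://cloud.mongodb.com/api/atlas/v1.0/groups/$GROUPID/clusters" | jq -r '.results[] | [.name, .providerSettings.instanceSizeName, .mongoDBVersion, .diskSizeGB] | @csv'

    A one-liner like this drops neatly into a scheduled Jenkins job if you want the report refreshed automatically.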

    Modifying the database cluster

    With the information I've received from the API, I can get the full specs of the records cluster in my group. In this example, we'll focus on a very common need: disk space. The disk space entry in the API output for that cluster reads: "diskSizeGB": 10.0

    Let's assume that we're running low on disk space and want to use an API call to increase the size to 32 GB. The diskSizeGB attribute is covered in the "Modify a Cluster" section of the API documentation.

    The syntax here requires that I send a PATCH request to the API with my user, my group ID, my token, and a JSON payload that describes the modifications to be made.

    I've put together the following curl command to change the disk size to 32 GB:

    bash-3.2$ curl -i -u "[email protected]:$TOKEN" --digest -H "Content-Type: application/json" -X PATCH "https://cloud.mongodb.com/api/atlas/v1.0/groups/$GROUPID/clusters/records" --data '{"diskSizeGB" : 32}'
    



    Once I execute this, I get a JSON document back indicating that the state is currently changing and our requested modification is underway:

    {
        "backupEnabled": false,
        "diskSizeGB": 32.0,
        "groupId": "588b776f96e82110b163ed93",
        "links": [
            {
                "href": "https://cloud.mongodb.com/api/atlas/v1.0/groups/588b776f96e82110b163ed93/clusters/records",
                "rel": "self"
            }
        ],
        "mongoDBMajorVersion": "3.4",
        "mongoDBVersion": "3.4.6",
        "mongoURI": "mongodb://records-shard-00-00-x8fks.mongodb.net:27017,records-shard-00-01-x8fks.mongodb.net:27017,records-shard-00-02-x8fks.mongodb.net:27017",
        "mongoURIUpdated": "2017-04-12T18:33:00Z",
        "name": "records",
        "numShards": 1,
        "providerSettings": {
            "diskIOPS": 100,
            "encryptEBSVolume": false,
            "instanceSizeName": "M10",
            "providerName": "AWS",
            "regionName": "US_EAST_1"
        },
        "replicationFactor": 3,
        "stateName": "UPDATING"
    }
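
    The cluster remains in this state while Atlas reconfigures the underlying instances. If you are scripting the change, you may want to wait until the cluster reports IDLE again before moving on. Here is a minimal polling sketch that reuses the same cluster resource we queried earlier (the 30-second interval is an arbitrary choice):

    bash-3.2$ until curl -s -u "[email protected]:$TOKEN" --digest "https://cloud.mongodb.com/api/atlas/v1.0/groups/$GROUPID/clusters/records" | python -m json.tool | grep -q '"stateName": "IDLE"'; do echo "still updating..."; sleep 30; done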
    



    Curl is easy, but what's next?

    One of the keys to a fast moving DevOps environment is automating common tasks. Many of the common DevOps tools, like Chef, have functionality to implement an HTTP GET or POST to an API. By utilizing the Atlas API, you can quickly create, modify, and destroy clusters as part of your configuration management process.

    Here are some HTTP functions embedded in each of the major configuration management tools which you can leverage with the Atlas API to work with your cluster:

    • Chef - http_request
    • Puppet - puppet-http or Exec
    • Ansible - uri Module
    • Terraform - HTTP Provider


    By leveraging the information from this blog post and these tools, you can implement a fully codified version of your infrastructure, reduce the time it takes to build and modify your database, and seamlessly launch, scale, or turn down clusters based on your needs.

    Get to work

    Get started for free, or learn how you may be eligible for 3 free months if you migrate an existing workload.
