Glue Records and Why They Are Crucial
By Nilabh Mishra

A lot has been written and discussed about the Domain Name System (DNS) in the past few days. The DDoS attacks a while ago on one of the major managed DNS providers made us all take DNS issues seriously once again.

So why so much emphasis on getting DNS right? Like many others in this ecosystem, we believe that DNS is not just a metric but a lifeline: a backbone for our online systems. It is extremely important to the Internet, as it lays the foundation for the World Wide Web (WWW).

DNS, in simple terms, translates hostnames to IP addresses. The objective of DNS seems straightforward, yet in practice it has grown into one of the most complex systems we have today.

All of the following add complexity to an already complex system:

  • Domain Registries
  • Global Top Level Domains (gTLDs)
  • Numerous Country Code Top Level Domains (ccTLDs)
  • An ever-growing list of new TLDs (.space, .photography, etc.)

Since DNS is not restricted to a single machine (it is a distributed, coherent, and hierarchical database) and involves multiple hierarchies and entities, ensuring that every level and entity involved in managing the system works efficiently becomes crucial. From the top of the hierarchy down, the levels are:

  • Root(.)
  • gTLD servers
  • Authoritative Nameservers for domains
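The delegation walk down this hierarchy can be sketched in a few lines of Python (a toy illustration, not a real resolver; the names are hypothetical):

```python
# Toy sketch of the DNS delegation walk: a resolver starts at the root
# and is referred one level down at each step until it reaches the
# authoritative nameservers for the domain.
def resolution_path(domain):
    """Return the zones consulted, from the root (.) down to the domain."""
    labels = domain.rstrip(".").split(".")
    path = ["."]  # the root is always consulted (or cached) first
    for i in range(len(labels) - 1, -1, -1):
        path.append(".".join(labels[i:]) + ".")
    return path

print(resolution_path("yourdomain.com"))
# ['.', 'com.', 'yourdomain.com.']
```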

Every level in this hierarchy, and every entity that operates it, has an important role to play in the resolution of a domain name:

  • The registries (e.g., Verisign, which manages .COM and .NET)
  • Registrars (e.g., GoDaddy and Namecheap)
  • Registrants (those who register a domain name)
  • ISPs
  • Managed DNS Service Providers

We are all part of this system, and it becomes extremely important for us, as registrants, to keep an eye on how these components are functioning to ensure that we have a stable, well-functioning system.

In this article, we will focus on a very important concept in DNS known as “Additional Records,” or “Glue Records.”

Additional Records or Glue Records
In the simplest of terms, glue records are A records (IP addresses) assigned or mapped to a domain name or subdomain. Glue records become extremely important when the nameservers for a domain name are subdomains of the domain name itself.

Glue records can be seen in the “Additional Section” of a DNS response.

Let’s take an example to understand how glue records work. Assume you have a domain name “yourdomain.com” that uses the following set of nameservers:

ns1.yourdomain.com

ns2.yourdomain.com

In the DNS resolution process, the authoritative nameservers for yourdomain.com are ns1.yourdomain.com and ns2.yourdomain.com. But resolving ns1.yourdomain.com first requires resolving yourdomain.com, which in turn returns ns1.yourdomain.com and ns2.yourdomain.com as its authoritative nameservers.

As you may have already noticed, this creates a circular dependency, in other words a loop, and the resolution never succeeds.

Glue records break this dependency by providing the IP addresses for ns1.yourdomain.com and ns2.yourdomain.com during the lookup process. This prevents the loop from forming, as we no longer need to resolve the nameservers to IP addresses: those addresses are already provided in the form of “glue records.”
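The loop, and how glue breaks it, can be sketched as a toy in-memory model (hypothetical names and RFC 5737 example addresses, not a real DNS client):

```python
# Toy resolver illustrating why glue is needed when a domain's
# nameservers live inside the domain itself. All data is hypothetical.

# Parent (gTLD) zone: NS names delegated for the child zone.
DELEGATION = {"yourdomain.com": ["ns1.yourdomain.com", "ns2.yourdomain.com"]}
# Glue: A records the parent hands out alongside the referral.
GLUE = {"ns1.yourdomain.com": "198.51.100.1", "ns2.yourdomain.com": "198.51.100.2"}

def resolve(domain, use_glue, _depth=0):
    if _depth > 10:
        raise RecursionError("circular dependency: nameserver lives inside the zone it serves")
    ns_name = DELEGATION[domain][0]
    if use_glue and ns_name in GLUE:
        # Glue breaks the loop: the IP arrives with the referral itself.
        return GLUE[ns_name]
    # Without glue, we must first resolve the nameserver's own name,
    # which requires resolving yourdomain.com again -> a loop.
    parent_zone = ns_name.split(".", 1)[1]  # "yourdomain.com"
    return resolve(parent_zone, use_glue, _depth + 1)

print(resolve("yourdomain.com", use_glue=True))   # 198.51.100.1
try:
    resolve("yourdomain.com", use_glue=False)
except RecursionError as err:
    print("lookup failed:", err)
```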

[Image: DNS response for ctrls.in, with glue A records for ns1.ctrls.in and ns2.ctrls.in in the Additional Section]

In the example above, glue records removed the circular dependency by providing the A records for ns1.ctrls.in and ns2.ctrls.in, which were returned as the authoritative nameservers for the domain ctrls.in. Without them, the DNS lookup would have failed because of the circular dependency.

For domain names that do not use subdomains of the same domain as authoritative nameservers, glue records still help by reducing the number of lookups, since the IP addresses of the authoritative nameservers are provided up front. Here is an example for wikipedia.org.

[Image: DNS response for wikipedia.org, listing ns1.wikimedia.org, ns2.wikimedia.org, and ns3.wikimedia.org with their addresses]

In this case, the lookup for wikipedia.org returned ns1.wikimedia.org, ns2.wikimedia.org, and ns3.wikimedia.org as the authoritative nameservers for the domain. Without glue, this would have required an additional level of DNS lookup, for wikimedia.org, before the A/AAAA record for the domain originally queried, wikipedia.org, could be obtained.

One of our customers, a leading CDN provider headquartered in China, reached out to us a while ago, complaining that the A records returned for two of their nameservers were incorrect (old IPs).

While investigating this case, we observed that in a DNS Experience test for the nameservers, the IPs returned by the authoritative nameservers were correct. However, a DNS Direct test against any of the gTLD servers for the domain (a-m.gtld-servers.net) returned the incorrect IPs.

Digs to the domain name using the command: dig “domain name here” @a.root-servers.net returned the same response as Catchpoint’s DNS tests.

Further investigation led us to believe that this was one of those cases where changes to the glue/additional records at the domain registrar’s end were not pushed to the gTLD servers.

Catchpoint DNS Monitors

  • Experience DNS Test: For DNS tests that use the Experience monitor, Catchpoint randomly selects a server from each level of the DNS route and queries it for the domain.
  • Direct DNS Test: Provides the complete query and response from the DNS server specified for the test, along with the time it took to complete the test and any errors received during testing.


What fixed this issue?
Based on our recommendations, our client reached out to the domain registrar and had the glue records updated for the domain. The change was pushed to all the gTLD servers, and the issue was resolved.

This incident emphasizes the importance of monitoring each level and each component of the amazingly vast system we know as DNS. A monitoring strategy focused on DNS is not just recommended but crucial for discovering issues, whether they are under our control or not.

The post Glue Records and Why They are Crucial appeared first on Catchpoint's Blog.


