
The Internet of Things and DNS

The Internet of Things will result in an increasing need for scalable DNS services

JANUARY 8, 2014 02:00 PM EST

When we talk about the impact of BYOD, BYOA, and the Internet of Things, we often focus on the impact on data center architectures. That's because there will be an increasing need for authentication, for access control, for security, and for application delivery as the number of potential endpoints (clients, devices, things) increases. That means scale in the data center.

What we gloss over, what we skip, is that before any of these "things" ever makes a request to access an application, it has to execute a DNS query. Every. Single. Thing.
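To make that concrete, here's a minimal sketch in Python of the lookup every client performs before it can open a connection (the hostname is hypothetical):

```python
import socket

# Hypothetical hostname; any "thing" must resolve its target before connecting.
HOSTNAME = "api.example.com"

# This call triggers the DNS query (typically UDP on port 53) that
# precedes the TCP connection the device is about to make.
for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo(
    HOSTNAME, 443, proto=socket.IPPROTO_TCP
):
    print(sockaddr)  # the resolved (address, port) pair
```

Multiply that by every sensor, thermostat, and app instance phoning home, and the query volume adds up quickly.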

Maybe that's because we assume DNS can handle the load. So far it's done well. You rarely, if ever, hear of disruptions or outages due directly to the execution of DNS. Oh, there have been some issues with misconfiguration of DNS and with exploitation of DNS (hijacking, illicit use in reflection attacks, etc.), but in general there's rarely a report that a DNS service was overwhelmed by traffic and fell over.

That is exactly the problem. We've been successful at scaling DNS.

"Success breeds complacency. Complacency breeds failure. Only the paranoid survive." - Andrew Grove.

In the face of rapidly expanding endpoints (things), it behooves us all to take a second look at DNS and ensure it's ready to meet the challenge.

This is not just about availability. Remember operational axiom #2 - as load increases, performance decreases. That's true for DNS, too. It doesn't get a pass. That's why it's called an axiom, after all, because it's kind of the law, like gravity.

Browsers do a good job of obfuscating the latency incurred by DNS, and native mobile applications never show such gory details, so it's difficult for a user to separate latency caused by an overloaded DNS service from a generally poorly performing application. Not that they care, actually. A slow app is a slow app to an end user. They aren't interested in the gory details; they're interested in speedy applications. Period.
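You can tease the two apart yourself, though. Here's a rough sketch that times the DNS lookup separately from the TCP connect (the hostname is hypothetical, and note that the OS or a local resolver may cache the answer, so vary the names you query for honest numbers):

```python
import socket
import time

HOSTNAME = "api.example.com"  # hypothetical target

# Time the DNS resolution on its own...
start = time.perf_counter()
address = socket.getaddrinfo(HOSTNAME, 443, proto=socket.IPPROTO_TCP)[0][4][0]
dns_ms = (time.perf_counter() - start) * 1000

# ...then the TCP connect to the already-resolved address, bypassing DNS.
start = time.perf_counter()
with socket.create_connection((address, 443), timeout=5):
    connect_ms = (time.perf_counter() - start) * 1000

print(f"DNS: {dns_ms:.1f} ms, TCP connect: {connect_ms:.1f} ms")
```

When the first number starts creeping up relative to the second, DNS is your problem, not the application.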

Interestingly, though, the Internet of Things is made up of more than just users. Devices and applications account for a huge share of its endpoints, forming an "overlay" network of machine-to-machine connections between these devices and "things".

Devices don't care about latency (unless, of course, they're being driven by users, in which case the users care, but the devices surely don't). The thing about DNS, though, is that the latency is generally incurred at initial connection time, and there's no way to differentiate before a connection is made whether it's a device or a real, live person on the other end. Even after a connection is made, UDP offers little to go on: a DNS query carries only a few header fields, and none of them offer insight into what kind of endpoint is making the request.
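For reference, the fixed DNS header (RFC 1035) really is that sparse. A sketch of unpacking it shows everything a server gets to see before the question itself:

```python
import struct

def parse_dns_header(packet: bytes) -> dict:
    """Unpack the fixed 12-byte DNS header defined in RFC 1035."""
    ident, flags, qd, an, ns, ar = struct.unpack("!6H", packet[:12])
    # Nothing below says whether the sender was a browser, a thermostat,
    # or a script: just a transaction ID, flag bits, and record counts.
    return {
        "id": ident,        # random transaction ID
        "flags": flags,     # QR, opcode, RD/RA, response code
        "questions": qd,
        "answers": an,
        "authority": ns,
        "additional": ar,
    }
```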

The imperative, then, is to ensure really fast connections and responses to every single query.

That may mean you need to reevaluate your DNS infrastructure to ensure it's ready to handle the coming flood of "things". Test and verify the maximum queries per second (QPS) your systems can sustain while maintaining what your business defines as acceptable latency. Make sure to plot latency against connections and queries per second to get an idea of the point at which your DNS starts to become part of the performance problem.
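A purpose-built tool such as dnsperf is the right instrument for that job, but even a rough standard-library sketch like the one below illustrates the measurement: hand-built A-record queries fired at your server, with QPS and latency percentiles out the other end. The resolver address and query name here are hypothetical, and the queries are serial, so run several workers in parallel to find the real QPS ceiling.

```python
import socket
import struct
import time

def build_query(name: str, qid: int) -> bytes:
    """Build a minimal DNS A-record query in RFC 1035 wire format."""
    header = struct.pack("!6H", qid, 0x0100, 1, 0, 0, 0)  # RD=1, one question
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!2H", 1, 1)  # QTYPE=A, QCLASS=IN

def measure(server: str, name: str, count: int = 1000) -> None:
    """Fire `count` queries at `server`; report QPS and latency percentiles."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    latencies = []
    start = time.perf_counter()
    for i in range(count):
        t0 = time.perf_counter()
        sock.sendto(build_query(name, i & 0xFFFF), (server, 53))
        sock.recvfrom(512)
        latencies.append((time.perf_counter() - t0) * 1000)
    elapsed = time.perf_counter() - start
    latencies.sort()
    print(f"QPS: {count / elapsed:.0f}")
    print(f"p50: {latencies[count // 2]:.2f} ms   "
          f"p99: {latencies[int(count * 0.99)]:.2f} ms")

# Hypothetical resolver and name; point this only at servers you own.
measure("192.0.2.53", "app.example.com")
```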

As the Internet expands and more devices and users access your applications, it would be a mistake to forget about DNS. We all know the old saying about "assuming" things - and it certainly holds true when you simply assume your DNS can handle the increasing load.

Be paranoid. Test often. CYA(pps).


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
