
2014 Super Bowl Tips to Avoid Ad Site Fails

Tracking the ads for the Super Bowl can be tough as some advertisers don’t indicate whether they are advertising during the game

This year, the Seattle Seahawks dealt Denver one of the worst beatings in recent Super Bowl history; however, the only highlights of the broadcast were the commercials. They ranged from serious and thought-provoking to funny and quirky. Each ad was meant to do one thing: drive eyes to a brand. With much of the audience watching with phones and tablets in hand, every advertiser's site had to be ready for those eyeballs.

Everyone wants to interview the winners and losers after the game, and analysts dissect every drive to understand the key aspects of success and failure:

  1. MVPs and who's to blame
  2. The breakdown on both sides
  3. What to do for next season

I, of course, love football, but I also love watching Super Bowl ads and how they perform. I love looking at who was the fastest, who was the slowest, and understanding why. The Internet is a level playing field on which everyone (with enough money) has the same options as everyone else; so when it comes to game-time strategy, why do sites perform so differently?

MVPs and Who's to Blame

To review our full wrap-up of how the ad websites performed over the course of the game, click here.

Tracking the ads for the Super Bowl can be tough, as some advertisers don't indicate whether they are advertising during the Super Bowl while others promote their ads well in advance. To compensate, our team added tests during the game as the ads aired, but the methodology we used was the same for all.

We tested the ad URLs using real browser agents from end-user locations across the US. The tests ran from the following locations every 10 minutes during the game:

  • CA: Los Angeles - Verizon
  • CA: San Jose - AT&T
  • FL: Miami - Internap
  • IL: Chicago - Level 3
  • MO: St. Louis - Savvis
  • NY: New York - Sprint
  • TX: Dallas - AT&T
  • VA: Reston - Savvis
  • WA: Seattle - Internap

We call this methodology a "9-Box," as it divides the US into East, Midwest, and West, with three locations in each region running north to south. This gives us good coverage across the continental US; we recommend this approach for basic synthetic monitoring.
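As a rough sketch, the 9-Box layout and test cadence can be written down as simple data. The regional grouping below is our reading of the list above; the structure is illustrative, not any monitoring product's actual configuration format:

```python
# Illustrative "9-Box" layout: three US regions, three locations each,
# running north to south, with every location probed every 10 minutes.
NINE_BOX = {
    "West": [
        "WA: Seattle - Internap",
        "CA: San Jose - AT&T",
        "CA: Los Angeles - Verizon",
    ],
    "Midwest": [
        "IL: Chicago - Level 3",
        "MO: St. Louis - Savvis",
        "TX: Dallas - AT&T",
    ],
    "East": [
        "NY: New York - Sprint",
        "VA: Reston - Savvis",
        "FL: Miami - Internap",
    ],
}
TEST_INTERVAL_MINUTES = 10
```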

The browser agents doing the tests behave just like a real user opening a browser and connecting to the page. Each test performs the following actions:

  • Resolving the DNS address(es) for the ad page and for its content, including third parties
  • Establishing the TCP connection(s) to all the domains contributing to the page
  • Downloading the base ad page, reading the HTML, and executing all the JavaScript and CSS
  • Downloading all the images and content requested by the HTML and JavaScript
  • Calculating how much time the server takes to respond to a request (First Byte Time), and then how much time it takes to download all of the content requested by the page

This allows us to understand which company had the fastest response time, which had the slowest, and how each got that way.
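As a minimal sketch of how these same phases can be collected, assuming Python, Selenium, and a local chromedriver are available (this is not the agent we used, and the URL is a placeholder), a scripted real browser can read the W3C Navigation Timing API:

```python
# Minimal sketch: drive a real browser and recover the load phases
# described above from the W3C Navigation Timing API. The URL is a
# hypothetical placeholder, not one of the ads we tested.
from selenium import webdriver

AD_URL = "https://example.com/"  # hypothetical ad landing page

driver = webdriver.Chrome()
try:
    driver.get(AD_URL)  # blocks until the page's load event fires

    # performance.timing exposes millisecond timestamps for each phase
    # of the page load as measured by the browser itself.
    t = driver.execute_script("return window.performance.timing.toJSON();")

    dns_ms        = t["domainLookupEnd"] - t["domainLookupStart"]  # DNS resolution
    tcp_ms        = t["connectEnd"]      - t["connectStart"]       # TCP connection
    first_byte_ms = t["responseStart"]   - t["requestStart"]       # First Byte Time
    full_load_ms  = t["loadEventEnd"]    - t["navigationStart"]    # full page load

    print(f"DNS: {dns_ms} ms | TCP: {tcp_ms} ms | "
          f"First Byte: {first_byte_ms} ms | Full load: {full_load_ms} ms")
finally:
    driver.quit()
```

A single run like this captures one sample from one location; repeating it every 10 minutes from each of the nine locations above reproduces the shape of our test schedule.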

The Breakdown on Both Sides

For additional details on our impressions of the Super Bowl advertisers and the holiday-season retailers, along with practical advice we can all benefit from, click here for the full article.

More Stories By David Jones

David Jones is the Director of Sales Engineering and APM Evangelism for Dynatrace. He has been with Dynatrace for 10 years and has 20 years' experience working with web and mobile technologies, from the first commercial HTML editor to the latest web delivery platforms and architectures. He has worked with scores of Fortune 500 organizations, providing them with the most recent industry best practices for web and mobile application delivery. Prior to Dynatrace, he worked at Gomez (Waltham), S1 Corp (Atlanta), Broadvision (Bay Area), Interleaf/Texcel (Waltham), i4i (Toronto) and SoftQuad (Toronto).


