
11 keystrokes that made my jQuery selector run 10x faster

As an ASP.NET developer working on the client-side, one problem you’ll encounter is how to reference the HTML elements that ASP.NET web controls generate. All too often, you find yourself wasting time trying to reference TextBox1, when the element is actually rendered as ctl00_panel1_wizard1_TextBox1.

Much has been written about this, including a post of my own, so I won’t go into detail about many of the workarounds. Instead, I want to take a closer look at the performance drawbacks of one popular solution: the [attribute$=value] selector.

By specifying id as the attribute in this selector, you can avoid ASP.NET’s ClientID issues completely. No matter what the framework prefixes your rendered elements with, they still “end with” the ID you specify at design time. This makes the “ends with” selector a convenient alternative to injecting a control’s ClientID property via angle-brackets.

However, are we trading performance for this convenience? If so, how much?

When Craig Shoemaker asked that question while interviewing me for an upcoming episode of Polymorphic Podcast, I realized I didn’t know the answer as clearly as I’d like. So, I decided to do a bit of benchmarking.

In this post, I’ll share the results of that benchmarking, and show you one way to significantly improve the performance of this convenient selector.

The test scenario

One difficulty in analyzing selector performance is that nearly every selector performs well on a small test page. Most performance issues aren’t readily apparent until a page grows in complexity and contains many elements. This can easily leave you overly confident in techniques that survive a simple proof of concept, but don’t scale well to practical usage.

Rather than construct a complex demonstration page from scratch to test against, I decided to use an existing page. With over 160 comments at the time of writing (and testing), the Highslide JS .NET project page is an ideal candidate. Its 1,000+ DOM elements are well suited to expose poorly performing selectors.

Enclosing each comment on the page, there’s a div like this one:

<div id="div-comment-35496" class="comment">
  <!-- Comment content here -->
</div>

If you imagine that the “div-“ prefix is “ctl00_”, these IDs are a great substitute for the ClientIDs that ASP.NET controls render within naming containers.

So for purposes of testing, I attempted to select the element div-comment-35496 on that page, assuming that I knew its ID of comment-35496 at design time and that the “div-“ prefix was added at run time. This is identical to the process of selecting the div rendered by an ASP.NET Panel control within a Content Page, for example.

Test methodology

To benchmark each selector, I used the following JavaScript:

var iterations = 100;
var totalTime = 0;
// Repeat the test the specified number of iterations.
for (var i = 0; i < iterations; i++) {
  // Record the starting time, in UTC milliseconds.
  var start = new Date().getTime();
  // Execute the selector. The result does not need
  //  to be used or assigned to determine how long
  //  the selector itself takes to run.
  $('[id$=comment-35496]'); // (replaced with each selector under test)
  // Record the ending time, in UTC milliseconds.
  var end = new Date().getTime();
  // Determine how many milliseconds elapsed and
  //  increment the test's totalTime counter.
  totalTime += (end - start);
}
// Report the average time taken by one iteration.
alert(totalTime / iterations);

This simply executes the selector 100 times, recording how many milliseconds each run takes, and then determines the average.

I had originally recorded and displayed an array of each individual time, but found there was very little variation and stopped tracking each execution. The average is good enough for these purposes.
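The methodology above can also be wrapped in a small reusable harness. This is a sketch of my own, not the original test script; the `benchmark` helper name and the placeholder workload are assumptions for illustration:

```javascript
// Minimal benchmarking helper, assuming the function under test
// is synchronous. Returns the average run time in milliseconds.
function benchmark(fn, iterations) {
  var totalTime = 0;
  for (var i = 0; i < iterations; i++) {
    var start = new Date().getTime();
    fn();
    var end = new Date().getTime();
    totalTime += (end - start);
  }
  return totalTime / iterations;
}

// Example usage, with a placeholder workload in place of a selector:
var avg = benchmark(function () {
  for (var j = 0; j < 1000; j++) { Math.sqrt(j); }
}, 100);
```

In a real page, the function passed in would simply execute the selector being tested.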

For each selector, I ran the test five times and then averaged the results, resulting in an average across 500 separate executions in semi-isolated batches. Then, I repeated the process for each major browser.

CERN probably won’t be flying me out to work on the LHC with this level of rigor, but it’s accurate enough for measuring the relative performance change between different selectors.

The baseline

jQuery’s #id selector leverages the browser’s native getElementById routine. Though slower than calling getElementById directly, this is a very fast way to reference our div as a jQuery object:

$('#div-comment-35496')

As you may expect, this browser-assisted selector is the fastest. In fact, every major browser consistently performed this selection in less than one millisecond in my testing, and most in less than a third of a millisecond.

To safely use this selector in ASP.NET, we have to manually inject the control’s ClientID property. Otherwise, any naming container added, removed, or renamed would break our client-side code.

You’ve probably seen that accomplished like this:

$('#<%= comment-35496.ClientID %>')

It looks messy and requires a bit more effort, but it’s fast.

ASP.NET 4.0’s ClientIDMode property promises to eliminate this inconvenience in the long-term, but we’re stuck with it for now. For that matter, with many projects still using ASP.NET 1.x and 2.0, slow to adopt the latest version, we may be stuck with the problem for quite some time to come.


To avoid the messy work of manually injecting ASP.NET’s rendered ClientIDs, you may have seen the suggestion that this is an easier way:

$('[id$=comment-35496]')

Indeed, this will successfully select the element that we’re after. Since its ID is div-comment-35496, searching for an element whose ID “ends with” comment-35496 works as desired.

Eliminating the angle-brackets is aesthetically pleasing and it’s less up-front work, but what about performance?

jQuery’s selector engine implements “ends with” by performing this test on every element in question, where value is the attribute you’ve specified and check is the string that you’re searching for:

value.substr(value.length - check.length) === check

That’s not so bad if you’re only doing it once, but doing it for every element on the page is a different story. Remember our test page has thousands of elements.
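To make that cost concrete, here’s a sketch of what the per-element test amounts to when applied across a whole page. The element list is simulated with plain id strings, since the exact Sizzle internals aren’t shown here:

```javascript
// Simulates jQuery's "ends with" attribute test applied to every
// element on a page. Each candidate id receives the substring check
// shown above, and the entire list is walked.
function endsWithScan(ids, check) {
  var matches = [];
  for (var i = 0; i < ids.length; i++) {
    var value = ids[i];
    if (value.substr(value.length - check.length) === check) {
      matches.push(value);
    }
  }
  return matches;
}

// A simulated page: one target id among many irrelevant ones.
var ids = ['header', 'div-comment-35496', 'footer', 'sidebar'];
endsWithScan(ids, 'comment-35496'); // → ['div-comment-35496']
```

On the real test page, `ids` would contain over a thousand entries instead of four.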

What’s worse, jQuery has no way to know that only one element should match the pattern. So, it must continue iterating through to the end of the page, even after locating the single element that we’re actually interested in.

How bad is it? Over the course of several hundred executions on the test page, I obtained these average speeds for a single $= selector execution in each browser:

The Chrome numbers are unsurprising, given its speedy V8 JavaScript engine. IE8 and Firefox 3.5 perform admirably too. These newer browsers all executed the selector very nearly as quickly as document.getElementById.

However, IE7 and Firefox 3.0 are substantially slower when executing the “ends with” selector. Remembering the non-trivial difference that Firebug made in this interesting post about JSON parsing speeds, I also tested with and without Firebug enabled in Firefox 3.0 (which turned out to be a significant factor here too).

Even though we’re only talking about milliseconds, the penalty in older browsers is too large to ignore, especially when a large share of users are still on IE6, IE7, or Firefox 3.0 instead of the new generation of faster browsers.

Convenience optimized

Remembering that the #id selector is quick because it leverages the native speed of getElementById, we can help jQuery execute the $= selector more efficiently.

If we modify the selector to descend from an HTML tag before performing the “ends with” search, jQuery can use getElementsByTagName to pre-filter the set of elements to search within. Since getElementsByTagName is a native browser routine, it is much faster than jQuery’s interpreted selector engine.

For example, since we know the element we’re after is a div, we could optimize the previous $= selector like this:

$('div[id$=comment-35496]')

The benefit is substantial:

The optimized selector runs more than twice as quickly in IE7 and Firefox 3.0!
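The pre-filtering idea can be sketched in plain JavaScript. This is an illustration of the concept using plain objects in place of DOM nodes; in a browser, the cheap first pass would be a native getElementsByTagName call:

```javascript
// Only elements that survive the fast tag filter receive the slower
// "ends with" substring test. (The object shape {tag, id} is an
// assumption used to stand in for real DOM elements.)
function endsWithSelector(elements, tag, check) {
  var candidates = [];
  for (var i = 0; i < elements.length; i++) {
    if (elements[i].tag === tag) {      // cheap, native-style filter
      candidates.push(elements[i]);
    }
  }
  var matches = [];
  for (var j = 0; j < candidates.length; j++) {
    var id = candidates[j].id;
    if (id.substr(id.length - check.length) === check) {
      matches.push(id);                 // expensive test, fewer elements
    }
  }
  return matches;
}

var page = [
  { tag: 'span', id: 'title' },
  { tag: 'div',  id: 'div-comment-35496' },
  { tag: 'div',  id: 'nav' }
];
endsWithSelector(page, 'div', 'comment-35496'); // → ['div-comment-35496']
```

The substring test still runs, but against a far smaller set of candidates.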

Along the same lines, jQuery’s Sizzle engine is able to select CSS classes faster than it can perform substring searches within arbitrary attributes. So, the selector can be further optimized by descending from both an HTML tag and a CSS class.

Since the particular div we’re testing against has a CSS class of comment, let’s try this selector:

$('div.comment[id$=comment-35496]')

The results?

IE7 doesn’t improve much, but Firefox 3.0 shows another excellent increase in its performance. The reason that Firefox shines here is that it implements a native getElementsByClassName routine that IE7 doesn’t (though IE8 does).

While slower than the getElementById powered #id selector, these optimizations have given us a 3-10x speed increase while referencing exactly the same element as the original $= selector. That’s a pretty good return on the investment of typing a measly eleven characters (div.comment) in front of the selector!

A mountain out of a molehill?

This post may seem like a lot of effort to spend on a seemingly tiny performance differential. In fact, I almost left this one on the shelf, because it’s often difficult to sell the importance of milliseconds.

But, you know what? Milliseconds count! A performance difference of a few dozen milliseconds is a perceptible delay. A few hundred will alienate your users.

Research has consistently shown a strong correlation between fast sites and higher conversion rates, more user actions per visit, and user satisfaction. Compounded across the thousands or millions of times a particular function in your application will be used, it is absolutely worthwhile to invest a few minutes in order to save a few milliseconds.


I hope you’ve found this information useful. Without hard data, it’s difficult to decide which optimizations are premature and which are worthwhile. Especially since your page is likely to grow more complex over time, I think the data clearly shows that selectors which don’t directly target an ID should always descend from an HTML tag when possible.

Perhaps the most important takeaway is that you must keep in mind how easy it is to write one succinct line of jQuery code that results in a non-trivial loop. The real danger lies in putting a selector like $= within a loop yourself, unaware that what appears to be a simple loop is actually a relatively sluggish nested loop.
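That danger can be made concrete with a counter standing in for the cost of a full-page scan. The `scanPage` helper and its counter are illustrative assumptions, not jQuery internals:

```javascript
// scanPage stands in for an expensive full-page selector such as
// $('[id$=comment-35496]'); the counter records how often it runs.
var scans = 0;
function scanPage() {
  scans++;                    // pretend this walks every DOM element
  return 'div-comment-35496';
}

// Risky: this "simple" loop re-runs the full scan on every pass.
for (var i = 0; i < 100; i++) {
  var hit = scanPage();
}
// scans === 100 here: one full-page walk per iteration.

// Safer: run the expensive lookup once and reuse the result.
scans = 0;
var cached = scanPage();
for (var j = 0; j < 100; j++) {
  var reused = cached;        // no additional scans triggered
}
// scans === 1 here.
```

With jQuery, the same principle means assigning a selector’s result to a variable before the loop and reusing that variable inside it.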

The other lesson here is that if speed is crucial, you should inject ClientIDs and use the #id selector (as in the baseline shown above). Even the most optimized “ends with” selector in this post still runs at least one order of magnitude slower than the direct #id selector.

Instead of simply posting that div.class[id$=id] is faster than [id$=id], I wanted to explain the sequence of events that led me to that determination. Armed with knowledge of how I optimized the “ends with” selector, I hope that you have a few new optimization tricks up your sleeve now.


Originally posted at Encosia. If you're reading this elsewhere, come on over and see the original.


More Stories By Dave Ward

Dave Ward wrote his first computer program in 1981, using good ‘ol Microsoft Color BASIC and cassette tapes for data storage. Over the years since then, he has had the opportunity to work on projects ranging from simple DOS applications to global telecommunications networks spanning multiple platforms.
