Table Of Contents Discussion Resumed
Yesterday I posted about the pros and cons of basing WordPress plugin functionality on JavaScript. Ted has now written a follow-up article responding to mine, from which I get the slight impression he may have mistaken my post for a critique of his plugin, which it definitely wasn’t meant to be. There are also some clear misinterpretations of my arguments in his post. Here I’m taking the time to hopefully correct both.
Foreword
I honestly want to make sure this whole discussion doesn’t escalate, but at the same time there are things I don’t want to leave uncorrected. I may not always choose the right words to achieve both (my own temper sometimes stands in my way, sigh), but I am certainly starting this post with the best of intentions. Let’s see how it works out…
About Motivation (Again)
Ted starts his post with the words “You’ve done something noteworthy when people are willing to tell you publicly that they don’t like it”. Two things:
1. If the perceived message of my post was that I didn’t like his plugin – that is not true. Had it been available a year or so ago, I would have gladly used it, at least until I stumbled over the need for a TOC on multi-page posts.
2. The goal of my prior post was to make a principled comparison of two basic technological choices, PHP and JavaScript, in the context of a WordPress plugin. That I referred to Ted’s plugin was pure coincidence: it simply happened to be the fresh example at hand. It could just as well have been any other plugin that uses JavaScript the same way, had it caught my attention. (I admit, after re-reading my post, that some parts could have allowed for a different impression, however unintended.)
Obviously, stating explicitly (and in bold) what my intention was and what it wasn’t somehow didn’t get through, so Ted went on to misinterpret my arguments in the rest of his post. That’s a pity, but here we go.
Technical Arguments
Of the three technical arguments Ted lists as “mine” and responds to, the first two completely miss what I actually said.
Javascript makes page load slow

“Well, okay, it can, but I use jQuery.ready, which doesn’t. I don’t really care if the TOC is visible the second the page renders, so I’m willing to wait.”
My argument was that JavaScript may make page loading appear slower if there is a lot of JavaScript and/or it is poor code that does not perform well. (Ted’s code isn’t poor, and I gladly repeat that here.) And by page load time I mean the total time until all parts of the page are completely visible to the user – certainly not just the time by which a page skeleton is displayed while anything real is still being pulled in via AJAX or the like. (True, that doesn’t apply to Ted’s code, but it’s the general case I’m elaborating on.)
However, to say that jQuery.ready does not impact page load performance is simply nonsense, and it is very important to me to get this right for my readers: jQuery.ready is no silver bullet against slow page loads. It determines when code is executed, not how well that code performs. In Ted’s case you won’t notice a delay, but there are Web 2.0 JavaScript UI frameworks out there that consume significant amounts of time to construct the UI, and invoking them within jQuery.ready won’t make things any faster. Some users have even complained to me that jQuery itself, though famous for its small footprint, slows down page loading too much for them (which is why I introduced the “No Effects” option in my plugin to suppress the jQuery inclusion).
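To make this concrete, here is a minimal sketch – buildHeavyWidgets is a made-up placeholder, not Ted’s or anyone else’s actual code – showing that jQuery.ready only moves the work, it doesn’t shrink it:

```javascript
// Made-up stand-in for a computation-intensive "Web 2.0" UI framework initializer.
function buildHeavyWidgets() {
    var list = jQuery('<ul></ul>');
    for (var i = 0; i < 20000; i++) {                    // deliberately heavy: 20,000 DOM nodes
        list.append(jQuery('<li></li>').text('widget ' + i));
    }
    jQuery('body').append(list);
}

// jQuery.ready only postpones *when* this runs (after the DOM has been parsed);
// it does not make the loop above any cheaper, so the visitor still pays the full
// execution cost on every page view.
jQuery(document).ready(buildHeavyWidgets);
```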
Generally speaking, JavaScript has not a single but a double impact on page load time, and webmasters have only about 25% influence on it; the footnote at the end of this post explains why.
DOM manipulation is browser specific
“Well, okay, it can, but the DOM is an open W3C standard, and modern browsers do a good enough job that I don’t expect there to be problems. Besides, jQuery sorts out most of the differences that matter.”
Actually, direct DOM manipulation was my argument in favor of JavaScript and jQuery, as should have been clear from the way I wrote it. Precisely because the DOM is an open standard, the DOM structure is represented the same way in all browsers (for the most part), and it is much easier and safer to manipulate the DOM than to manipulate HTML text. Indeed, there are no problems here, so there is absolutely nothing to argue about.
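To illustrate (purely a sketch – the .entry-content selector and class names are assumptions about a typical theme, not actual plugin code), this is roughly how a TOC can be assembled through direct DOM manipulation with jQuery, without ever touching raw HTML text:

```javascript
jQuery(document).ready(function () {
    var toc = jQuery('<ul class="toc"></ul>');         // the TOC container, built as a DOM node
    jQuery('.entry-content h2').each(function (i) {    // assumption: the post body lives in .entry-content
        var heading = jQuery(this);
        heading.attr('id', 'toc-item-' + i);            // give each heading an anchor target
        jQuery('<li></li>')
            .append(jQuery('<a></a>').attr('href', '#toc-item-' + i).text(heading.text()))
            .appendTo(toc);
    });
    jQuery('.entry-content').prepend(toc);              // insert the finished list before the content
});
```

The same few lines behave identically in every browser jQuery supports, which is precisely the point.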
Of Freedoms And Bazaars
Ted proceeds to philosophize about what makes up “Freedom Software” and the “Bazaar Model”, which I guess is his way of responding to my side note about “re-inventing the wheel”. Now, it is true I’m not the type of guy who likes to re-invent the wheel: I always try existing solutions before I start coding my own (even if most of the time I’m absolutely convinced I could implement things better than anyone else – but I guess that’s a syndrome common to many (most?) software developers ;-)).
That said, I don’t dispute anybody’s right to re-invent whatever they want. There are sometimes very good reasons (or “itches”, in Ted’s words) to do so, and learning and trying out new things I consider among the most important.
Conclusion
I understand that people will always defend their own solutions no matter what, but my summary is this nonetheless:
1. Ted’s plugin is great in itself, but its design choices definitely stand in the way of extending it significantly beyond UI enhancements. (He would say he doesn’t need to extend it, and that’s OK too.)
2. My advice to plugin authors still stands: think twice before deciding on a design that moves too much functionality into JavaScript and away from WordPress (tempting as that is, as I know only too well).
Footnote: Why JavaScript Hits Page Load Time Twice

First, it takes time to load the JavaScript code. Fortunately, this usually happens only once: after the code is loaded, the browser retrieves it from its cache and only re-downloads it once the cached copy expires. JavaScript loading time depends on server speed, connection speed, and how powerful the user’s computer is. Of these three, at least server speed is under the webmaster’s control, so there is a chance to somewhat compensate for large JS code being loaded.
Second, JS code takes time to execute, and this happens on every page load. Computation-intensive JavaScript frameworks (Qooxdoo, for example) introduce a notable delay due to execution time alone. Naturally, JS execution time depends only on how powerful the user’s computer (and browser) is – a factor that is entirely out of the webmaster’s control.
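For readers who want to see this cost on their own pages, here is a rough timing probe (just a sketch using standard browser facilities – the actual initialization to be measured would go where the comment indicates):

```javascript
// The script file itself may well come from the browser cache (the "load once" part),
// but everything inside this handler is executed again on every single page view.
jQuery(document).ready(function () {
    var start = new Date().getTime();

    // ... the plugin or theme initialization to be measured would run here ...

    var elapsed = new Date().getTime() - start;
    if (window.console && window.console.log) {        // report the per-view execution cost (Firebug etc.)
        window.console.log('JS initialization took ' + elapsed + ' ms on this page view');
    }
});
```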
Comments

In a more general sense, plug-ins do slow WordPress down, which constrains their utility, since I hazard the opinion that speed is absolutely the most important thing about a website. No one is going to read a website that is slow, and the threshold for “slow” is somewhere under 3-5 seconds, IMHO.
At the same time, plug-ins are preferable to hard-coding in that they are portable across themes. So most users have to prioritize the plug-ins they use, and I hazard to guess that is why we see relatively few tables of contents on sites.
I read an interesting discussion about conditionally loading plug-in code over at Beer Planet, showing a couple of ways to avoid load bloat as well as to ensure that code doesn’t get loaded twice.
I hope it’s germane to your discussion.
Jim, I do value performance, but I wouldn’t say it’s the most important thing about a website. Page loading needs to be fast enough, but not faster. Beyond that, sacrificing flexibility for speed makes no sense.
It is true that WP’s plugin load mechanism imposes some overhead, but alas, many plugins (and themes) suffer from performance problems because of the way they are coded in the first place.
I do strive for the simplest solutions, both with the plugins I implement and with those I choose to use on my blog, so I can probably afford to have a few more plugins 😉