John Smith's Blog

Ramblings (mostly) about technical stuff

gl.enableVertexAttribArray() gotcha

Posted by John Smith on

Another post mainly in the hope that I might save someone else the wasted time and head-scratching I spent in fixing this...

I've continued playing with WebGL, and as well as experimenting with new (to me) functionality, I've started building up a library in parallel to tidy up the repetitious boilerplate that has been largely common to all my experiments to date. Until now, this has been a fairly mundane and trouble-free job, but I managed to cause myself a lot of pain and anguish last night, when some of my library code wasn't completely right.

I had a vertex buffer that contained 4 elements per vertex, a three-element (x,y,z) coordinate, and a single-element greyscale value. On initial run-through, the coordinates were rendered correctly, but the greyscale value was not at all how I expected. Rather than coming out in a shade of grey, my pixels were being rendered as white.
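To make that layout concrete, here's roughly how such an interleaved buffer might be built - a minimal sketch with illustrative names and values, rather than my actual code:

    // Each vertex is 4 floats: (x, y, z) position plus a greyscale value,
    // interleaved in a single buffer - hence the 16-byte stride used below.
    var vertexData = new Float32Array([
        //   x,    y,    z,  grey
          -1.0, -1.0,  0.0,  0.25,
           1.0, -1.0,  0.0,  0.50,
           0.0,  1.0,  0.0,  0.75
    ]);
    var vertexBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.STATIC_DRAW);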

As far as I could tell, the code to push the vertex data through to OpenGL was fine, and not really any different to a number of earlier successful experiments:

    gl.bindBuffer(gl.ARRAY_BUFFER, bottomFace.vertexPositionBuffer);
    gl.vertexAttribPointer(shaderProgram.attributes["aVertexPosition"],
                           3,        // vec3 - (x, y, z)
                           gl.FLOAT,
                           false,
                           (4*4),    // total size of 'vertex-input' aka stride
                           0);       // offset within 'vertex-input'
    gl.vertexAttribPointer(shaderProgram.attributes["aVertexGreyness"],
                           1,        // float
                           gl.FLOAT,
                           false,
                           (4*4),    // total size of 'vertex-input' aka stride
                           12);      // offset within 'vertex-input'
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, bottomFace.vertexIndexBuffer);
    gl.drawElements(gl.TRIANGLES, bottomFace.vertexIndexBuffer.numItems,
                    gl.UNSIGNED_SHORT, 0);

I've had problems with this sort of code before, so started fiddling with the arguments to the second gl.vertexAttribPointer() to see if I could provoke it into doing something that would give some insight into what was going wrong, but it steadfastly refused to render anything differently.

One curious thing: swapping the ordering of the two attribute declarations in the vertex shader caused the attribute index values to flip between 0 and 1, as expected, but this also seemed to be passed through to the shader, causing my pixels to render as either black or white.

Chrome's WebGL inspector didn't show anything unusual, and indicated that my vertex array had the expected values, so I was at a bit of a loss. Eventually I started hacking around with some older working code, to find out where things were going wrong, and stumbled across the cause.

It transpired that when I was initially getting my attribute index values, I wasn't also enabling them as vertex attribute arrays - or rather, this was happening for the (x,y,z) coordinate attribute (thanks to some legacy code that I thought wasn't getting called), but not for the greyscale attribute. Updating my attribute initialization code fixed the problem:

    for (var i=0; i<attrNames.length; i++) {
        attrDict[attrNames[i]] = gl.getAttribLocation(shaderProgram, attrNames[i]);
        gl.enableVertexAttribArray(attrDict[attrNames[i]]); // THIS LINE ADDED
    }

No errors are caused by not calling gl.enableVertexAttribArray(), and I don't currently know of any reason why you wouldn't want an attribute enabled, but without this rather boring line, you get mysterious failures as I unfortunately found out :-(

Hassles with array access in WebGL, and a couple of workarounds

Posted by John Smith on

I've been pottering around a bit more with WebGL for a personal project/experiment, and came across a hurdle that I wasn't expecting, involving arrays. Maybe my Google-fu was lacking, but this doesn't seem to be widely documented online - and I get the distinct impression that WebGL implementations might have changed over time, breaking some older example code - so here's my attempt to help anyone else who comes across the same issues. As I'm far from being an expert in WebGL or OpenGL, it's highly likely that some of the information below could be wrong or sub-optimal - contact me via Twitter if you spot anything that should be corrected.

I want to write my own implementation of Voronoi diagrams as a shader, with JavaScript doing little more than setting up some initial data and feeding it into the shader. Specifically, JavaScript would generate an array of random (x,y) coordinates, which the shader would then process and turn into a Voronoi diagram. I was aware that WebGL supported array types, but I hadn't used them at all in my limited experimentation, so rather than jumping straight in, I thought it would be a good idea to write a simple test script to verify I understood how arrays work.

This turned out to be a wise decision, as it turns out that what would be basic array access in pretty much any other language, is not possible in WebGL. (Or OpenGL ES - I'm not sure where exactly the "blame" lies.) Take the following fragment shader code:

    uniform float uMyArray[16];
    ...
    void main(void) {
        int myIndex = int(mod(floor(gl_FragCoord.y), 16.0));
        float myVal = uMyArray[myIndex];
        ...

The intention is to obtain an arbitrary value from the array - simple enough, right?

However, in both Firefox 12 and Chromium 19, this code will fail at the shader-compilation stage, with the error message '[]': index expression must be constant. Although I found it hard to believe initially, this is more-or-less what it seems - you can't access an element in an array using a regular integer variable as the index.

My uninformed guess is that this is some sort of security/leak prevention mechanism, to stop you reading memory you shouldn't be able to, e.g. by making the index variable negative or bigger than 16 in this case. I haven't seen any tangible confirmation of this though.

Help is at hand though, as it turns out that "constant" isn't quite the same as you might expect from other languages. In particular, a counter in a for loop is considered constant, I guess because it is constant within the scope of the looped clause. (I haven't actually tried to alter the value of the counter within a loop though, so I could well be wrong.)

This led me to my first way of working around this limitation:

    int myIndex = int(mod(floor(gl_FragCoord.y), 16.0));
    float myVal;
    for (int i=0; i<16; i++) {
        if (i==myIndex) {
            myVal = uMyArray[i];
            break;
        }
    }

This seems to be pretty inefficient, conceptually at least. There's a crude working example here, if you want to see more. Note that it's not possible to optimize the for loop - its declaration also suffers from similar constant-expression restrictions.

Whilst Googling for ways to fix this, I also found reference to another limitation, regarding the size of array that a WebGL implementation is guaranteed to support. I don't have a link to hand, but IIRC it was 128 elements - which isn't a problem for this simple test, but could well be a problem for my Voronoi diagram. The page in question suggested that using a texture would be a way to get around this.

As such, I've implemented another variant on this code, using a texture instead of an array. This involves quite a bit more effort, especially on the JavaScript side, but seems like it should be more efficient on the shader side. (NB: I haven't done any profiling, so I could be talking rubbish.)

Some example code can be found here, but the basic principle is to

  1. Create a 2D canvas in JavaScript that is N-pixels wide and 1-pixel deep.
  2. Create an ImageData object, and write your data into the bytes of that object. (This is relatively easy or hard depending on whether you have integer or float values, and how wide those values are.)
  3. Convert the canvas/ImageData into a WebGL texture, and pass it into your shader as a Sampler2D uniform. Note you (probably) won't need any texture coordinates or the usual paraphernalia associated with working with textures.
  4. In your shader, extract the value from the texture by using the texture2D function, with vec2(myIndex/arraySize, 0.0) as the second argument.
  5. As the values in the returned vec4 are floats in the range 0.0 to 1.0, you'll probably want to decode them into whatever the original number format was.

To go into more detail, here are the relevant bits of code from the previously linked example.

Creating our array of data

Firstly, we generate the data into a typed array:

    function createRandomValues(numVals) {
        // For now, keep to 4-byte values to match RGBA values in textures
        var buf = new ArrayBuffer(numVals * 4);
        var buf32 = new Uint32Array(buf);
        for (var i=0; i < numVals; i++) {
            buf32[i] = i * 16;
        }
        return buf32;
    }
    ...
    var randVals = createRandomValues(16);

This is nothing special, and is really only included for completeness, and for reference for anyone unfamiliar with typed arrays. Observant readers will note that the function name createRandomValues is somewhat of a misnomer ;-)

Creating the canvas

The array of data is then fed into a function which creates a canvas/ImageData object big enough to store it:

    function calculatePow2Needed(numBytes) {
        /** Return the length of an n*1 RGBA canvas needed to
         *  store numBytes.  Returned value is a power of two */
        var numPixels = numBytes / 4;
        // Suspect this next check is superfluous as affected values
        // won't be powers of two.
        if (numPixels != Math.floor(numPixels)) {
            numPixels = Math.floor(numPixels+1);
        }
        var powerOfTwo = Math.log(numPixels) * Math.LOG2E; // log2(numPixels)
        if (powerOfTwo != Math.floor(powerOfTwo)) {
            powerOfTwo = Math.floor(powerOfTwo + 1);
        }
        return Math.pow(2, powerOfTwo);
    }

    function createTexture(typedData) {
        /** Create a canvas/context containing a representation of the
         *  data in the supplied TypedArray.  The canvas will be 1 pixel
         *  deep; it will be a sufficiently large power-of-two wide (although
         *  I think this isn't actually needed). */
        var numBytes = typedData.length * typedData.BYTES_PER_ELEMENT;
        var canvasWidth = calculatePow2Needed(numBytes);
        var cv = document.createElement("canvas");
        cv.width = canvasWidth;
        cv.height = 1;
        var c = cv.getContext("2d");
        var img = c.createImageData(cv.width, cv.height);
        var imgd = img.data;
        ...
    }
    ...
    createTexture(randVals);

The above code creates a canvas sized to be a power-of-two wide, as WebGL has restrictions on textures that aren't sized that way. As it happens, those restrictions aren't actually applicable in the context we are using the texture, so this is almost certainly unnecessary.

Storing the array data in the canvas

The middle section of createTexture() is fairly straightforward, although a more real-world use would involve a bit more effort.

    var offset = 0;
    // Nasty hack - this currently only supports uint8 values
    // in a Uint32Array.  Should be easy to extend to larger unsigned
    // ints; floats are a bit more painful.  (Bear in mind that you'll
    // need to write a decoder in your shader.)
    for (offset=0; offset<typedData.length; offset++) {
        imgd[offset*4] = typedData[offset];
        imgd[(offset*4)+1] = 0;
        imgd[(offset*4)+2] = 0;
        imgd[(offset*4)+3] = 0;
    }
    // Fill the rest with zeroes (not strictly necessary, especially
    // as we could probably get away with a non-power-of-two width for
    // this type of shader use).  Note the bound is canvasWidth*4, as
    // imgd holds four bytes per pixel.
    for (offset=typedData.length*4; offset < canvasWidth*4; offset++) {
        imgd[offset] = 0;
    }

I made my life easy by just having 8-bit integer values stored in a 32-bit long, as this maps nicely to the 4 bytes used in an (R,G,B,A) canvas. Storing bigger integer or float values, or packing these 8-bit values, could be done with a few more lines. One potential gotcha is that typed array values are laid out in whatever byte order the host machine's architecture dictates, so simply dumping multi-byte values directly into the ImageData object could have "interesting" effects on other hardware platforms.

Convert the canvas into a texture

createTexture() concludes by converting the canvas/ImageData into a texture:

    // Convert to WebGL texture
    myTexture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, myTexture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
    /* These params let the data through (seemingly) unmolested - via
     * http://www.khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences#Non-Power_of_Two_Texture_Support
     */
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.bindTexture(gl.TEXTURE_2D, null); // 'clear' texture status
    }

The main lines to note are the gl.texParameteri() calls, which tell WebGL not to do any of the usual texture processing such as mipmapping. As we want to use the original fake (R,G,B,A) values unmolested, the last thing we want is for OpenGL to try to be helpful and feed our shader code some modified version of those values.

EDIT 2012/07/10: I found the example parameter code above wasn't enough for another program I was working on. Adding an additional setting for gl.TEXTURE_MAG_FILTER fixed it:

    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

I suspect gl.NEAREST might be generally better than gl.LINEAR for this sort of thing - however I haven't done thorough tests to properly evaluate this.

Extract the data from the texture

Just for completeness, here's the mundane code to pass the texture from JavaScript to the shader:

    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, myTexture);
    gl.uniform1i(gl.getUniformLocation(shaderProgram, "uSampler"), 0);

Of more interest is the shader code to reference the "array element" from our pseudo-texture:

    uniform sampler2D uSampler;
    ...
    void main(void) {
        float myIndex = mod(floor(gl_FragCoord.y), 16.0); // code for a 16x1 texture...
        vec4 fourBytes = texture2D(uSampler, vec2(myIndex/16.0, 0.0));
        ...

This now gives us four floats which we can convert into something more usable. Note that as we are (ab)using the RGBA space, these four values are in the range 0.0 to 1.0.

Decode the data

Now, this is where I cheated somewhat in my test code, as I just used the RGBA value pretty much as-was to paint a pixel:

    gl_FragColor = vec4(fourBytes.r, fourBytes.g, fourBytes.b, 1.0);

To turn it back into an integer, I'd need something along the lines of:

    int myInt8  = int(fourBytes.r * 255.0);
    int myInt16 = (int(fourBytes.r * 255.0) * 256) + int(fourBytes.g * 255.0);

The above code is untested; the reasoning is that each channel arrives in the shader as an 8-bit byte divided by 255.0, so multiplying by 255.0 recovers the byte, and the high byte is then worth a factor of 256. In a similar vein, a more intelligent packing system could have got 4 different 8-bit values into a single texel, pulling out the relevant (R,G,B,A) value as appropriate.
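To sanity-check that arithmetic, here's a hypothetical JavaScript-side round trip for the 16-bit case - not part of the original example, just a sketch of how the packing and the shader's decode should mirror each other:

    // Pack a 16-bit unsigned value into the R and G bytes of a pixel.
    function encode16(value, imgd, byteOffset) {
        imgd[byteOffset]     = (value >> 8) & 0xFF; // high byte -> R
        imgd[byteOffset + 1] = value & 0xFF;        // low byte  -> G
        imgd[byteOffset + 2] = 0;                   // B unused
        imgd[byteOffset + 3] = 0;                   // A unused
    }

    // Mirror of the shader decode: each channel reaches the shader as
    // byte/255, so *255 recovers the byte, and the high byte is worth 256x.
    function decode16(r, g) { // r and g are floats in the range 0.0 to 1.0
        return Math.round(r * 255.0) * 256 + Math.round(g * 255.0);
    }

    decode16(0x12 / 255.0, 0x34 / 255.0) === 0x1234; // => true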

I wouldn't be surprised if some of the details above could be improved, as this seems a very long-winded way of doing something that seems like it should be completely trivial. However, as it stands, it should at least get anyone suffering the same issues as me moving forward.

Now, to get back to the code I originally wanted to write...

Parallax starfield and texture mask effect in WebGL

Posted by John Smith on

I've been pottering around a bit with WebGL lately, really just playing with 2D stuff, and this is probably the most notable thing I've hacked up so far.

Screengrab from a WebGL demo, showing the text Hello World using a starfield effect

It's vaguely inspired by stuff like the Sid Sutton Doctor Who titles from the early '80s and Numb Res (especially the bit with letters forming around 2'30" in) - but it's incredibly crude compared to either of those.

If you care to view source on that page and/or the JavaScript that drives it, it should hopefully be fairly easy to follow, but in summary:

  • There are no true 3D objects or particles; instead, the code sets up 6 triangles that make up 3 rectangles, each of which fills the full WebGL canvas area. These form 3 parallax layers, which make up the starfield(s) - with only minor tweaks, the number of layers could be increased, which would probably improve the believability of the effect a fair bit.
  • The stars are rendered entirely in the fragment shader, using a pseudo-random algorithm based on the X and Y pixel position and the frame number. The increments to the frame number are what give the scrolling effect, and a speed value supplied via the vertex shader is what causes each layer to scroll at a different rate (there's a sketch of the render loop just after this list).
  • The "Hello world" text is rendered into a hidden 2D canvas object and converted to a WebGL texture at startup. The fragment shader reads the texture, and if that particular pixel was set on the texture, increases the probability of a star being shown.
  • Things would probably look better if I added a bit of pseudo-randomness to make the stars twinkle. Unfortunately I was getting a bit bored with the whole thing by the time it came to do this part ;-)
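For anyone curious about the mechanics, the render loop is roughly along these lines - a sketch with illustrative names, not the demo's exact code:

    // Look the uniform up once, outside the loop (see the CPU-usage
    // observation below) - the per-frame work is just a counter bump
    // and a redraw.
    var frameLoc = gl.getUniformLocation(shaderProgram, "uFrameNumber");
    var frameNumber = 0;
    function renderLoop() {
        gl.uniform1f(frameLoc, frameNumber++);
        gl.drawArrays(gl.TRIANGLES, 0, numVertices);
        window.requestAnimationFrame(renderLoop);
    }
    renderLoop();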

Some observations from my ill-informed stumbling around in a technology I don't really understand:

  • Performance seems fine on two of the three machines I've tested it on so far - a dual-boot Linux/Win7 box with an AMD hexacore and nVidia GTS450, and a 2008 white MacBook, are quite happy; a Celeron netbook with integrated graphics is understandably less happy, although it still churns out an acceptable framerate.
  • Curiously, the dual-boot box reports consuming way more CPU when running Firefox or Chromium under Linux than under Windows 7. I'm not quite sure why, as CPU usage should be pretty minimal - all the "clever" stuff should be happening on the graphics card, and all the CPU should be doing each frame is updating a counter and getting the graphics card to re-render based on that updated counter. (Both operating systems have fairly up-to-date nVidia drivers, with the browsers configured to use "proper" OpenGL as opposed to ANGLE or suchlike.) I haven't yet investigated the cause - it could be some quirky difference in the way CPU usage is reported.
  • I reduced Chromium CPU usage from mid-30s to mid-20s (as measured on the Linux box) by moving code out of the main animation loop that didn't need to be there - stuff that defined the geometry, pushed the texture through, etc.
  • I still need to find a way to mentally keep track of the various different coordinate systems in use - vertex shader uses -1.0 to 1.0, fragment shader uses 0.0 to 1.0, plus also remembering the real pixels. (And not to mention that 2D canvas is inverted compared to WebGL canvas!)
  • It feels a bit odd to me that *GL makes it easier to use floating-point rather than integer values. I guess I'm still stuck in an '80s mentality of writing stuff for the 6502 (which didn't really have 16-bit integers, never mind floating point) and stuff like Bresenham's algorithm. (Ironically enough, Bresenham's algorithm was a topic of discussion just last night, in the talk about Raspberry Pi at this month's Hacker News London event.)
  • In a similar vein, I was a tad surprised to find minimal support in shaders for doing bit manipulation stuff like you see in "pure CPU" Satori demos. The same goes for the lack of a proper random number generator, although in the context of this experiment, my controllable pseudo-random numbers were probably a better fit. (I get the impression that this functionality is available in newer versions of regular OpenGL, just not the variant supported by WebGL?)

In praise of help() in Python's REPL

Posted by John Smith on

For various reasons, I'm doing a bit of JavaScript/CoffeeScript work at the moment, which involves use of some functions in the core libraries which I'd not really used in the past. A minor aspect of this involves logarithmic values, and I was a bit surprised and then disappointed that JavaScript's Math.log() isn't as flexible as its Python near-namesake math.log():

    # Python
    >>> math.log(256, 2)   # I want the result for base 2
    8.0

versus

    # CoffeeScript
    coffee> Math.log(256, 2)
    5.545177444479562

Now, it's probably unreasonable of me to expect the JavaScript version of this function to behave exactly the same as the Python version, especially as the (presumably) underlying C function only takes a single value argument. (Although it might have been nice to get a warning about the ignored second argument, rather than silence...)
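That said, replicating the Python behaviour is only a one-liner away, via the change-of-base identity - a trivial helper, not anything built in:

    // log of x in an arbitrary base: log(x) / log(base)
    function logBase(x, base) {
        return Math.log(x) / Math.log(base);
    }
    logBase(256, 2); // => 8 (modulo floating-point precision)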

On the other hand though, it reminded me of how much more civilized Python is compared to JavaScript. When I'm hacking around, I almost always have a spare window open with a running REPL process, that allows me to quickly check and test stuff, and can very easily pull up the docs via the help() function if I need further info. In contrast, to do the same in JavaScript I have to move over to a browser window and search for info on sites like MDN, or resort to my trusty copies of The Definitive Guide, neither of which are anywhere near as convenient.

After a brief bit of Googling and a plea for help on Twitter, I was unable to find any equivalent to this functionality in the JavaScript world - and let's face it, help() is pretty basic stuff when compared to what the likes of IPython and bpython offer the fortunate Python developer.

I'd love to be corrected on this, and be told about some nice CLI-tool for JavaScript that can help me out. (But not some overblown IDE that would require me to radically change my established development environment, I hasten to add!) I'm not expecting this to happen though - Python's help() relies heavily on docstrings, and I'm not aware that anything such as JsDoc is in common usage in the JavaScript community?

Visualization of, and musings on, recent Hacker News threads about liked and disliked languages

Posted by John Smith on

For a while now, I've been itching to find an excuse to do something in SVG again, so when there were a couple of threads last week on Hacker News about people's most liked and most disliked languages, it felt like an ideal opportunity.

You can view a wider, more legible version of the scatter plot via this link. I've used logarithmic scaling, as with a regular linear scale there was just a huge mess in the bottom-left corner.

'Like' votes are measured horizontally, 'dislikes' vertically - so the ideal place to be is low down on the right, and the worst is high up on the left. The results are as captured on 2012/03/28 - I'd taken a copy a couple of days earlier, and there had been some changes in the interim, but only by single-digit percentages.

Some thoughts and observations:

  • The poll this data comes from is somewhat imperfect, as already mentioned in the comments in the thread itself. I should also point out that another poster on that thread also did a similar like vs dislike analysis, but I didn't see that post until I'd already started on this.
  • HN is a very pro-Python place - just compare all the threads related to PyCon 2012 versus the lack of noise after most other conferences - so it's hardly surprising who the "winner" is in such a voter base. I do find it odd though that Python doesn't seem to have such a good showing in other corners of the HN world. e.g. of the (relatively few) HN London events I've been to, I don't recall hearing many (any?) of the speakers using Python for their projects/companies - whereas "losers" such as Java and PHP do get namechecked fairly often.
  • I'm amused that CoffeeScript is liked at exactly the same ratio as JavaScript - 76%.
  • I was tempted to do some sort of colour-coding by language type (interpreted vs compiled), age etc - but at initial glance, I don't see any real trends that might indicate why a certain school/group of languages do well or badly.

Artificial pagination of articles on news sites - thanks, but no thanks

Posted by John Smith on

TL;DR: the bazillionth whinge about content sites with paginated articles, but picking on businessweek.com, as they seem to be doing something particularly iniquitous that I haven't seen before. (Although it could just be that I'm unobservant or behind-the-times, and this is old news.)

I was in a branch of W H Smith yesterday, and had a quick flick through the latest issue of Bloomberg Business Week magazine. As usual, there were a number of articles that looked like they'd be worth reading, so I handed over £3.30 for the dead tree edition and made a mental note to visit businessweek.com later to digest those articles properly.

I got round to visiting the site just now, and it looked like they might have had a minor redesign since I last visited - nothing particularly outrageous though. The first article I checked was this week's cover story about Twitter. Reading down the first page, it was quite a good piece, if not telling me anything I hadn't previously been aware of.

As I scrolled down to the bottom of the page, there was a pagination nav that indicated the article had been split into four pages. As I clicked on "next >" to go to the following page, I was surprised at how quickly the second page appeared - suspiciously quickly, in fact.

Screen grab of a browser showing the bottom of an article at businessweek.com, specifically the page navigation

Being a nosey bugger, I viewed the HTML source of the page, and found that in fact, all of the article content is present in the "first" page, and it isn't even sectionalized in any way.

Screen grab of a browser 'View Source' window, showing that there is article content beyond that shown to the user on a page of a businessweek.com article

It wasn't a big surprise to find that if I turned off JavaScript, my browser would show the whole story on a single page, with no pagination controls visible anywhere. On my 1050x1680 portrait display, the full story runs for 8 screens in Firefox, which doesn't strike me as especially long and in need of breaking up. (NB: I have Ghostery blocking Disqus comments in Firefox; when I viewed the regular paginated site in a browser without Ghostery, I found that just the first of the four pages ran to 6 screens' worth, of which around 50% was the - mostly inane - Disqus user comments.)

Screen grab of the businessweek.com story with JavaScript disabled, now showing all the article content on a single page

I haven't bothered to check the site's JavaScript code, but I assume there's something that measures the height of the <p> elements, and once the height goes above a certain point, starts splitting/hiding them into pages, and inserts the page navigation controls.
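Purely as a guess at what their code might be doing - this is speculation on my part, with made-up names and thresholds, not their actual implementation - the approach would look something like this:

    // Speculative sketch: split an article's paragraphs into "pages" by
    // accumulated height, then hide everything not on the current page.
    var paras = Array.prototype.slice.call(
        document.querySelectorAll(".article-body p")); // selector is a guess
    var pages = [], current = [], height = 0, MAX_PAGE_HEIGHT = 2000;
    paras.forEach(function (p) {
        if (height + p.offsetHeight > MAX_PAGE_HEIGHT && current.length) {
            pages.push(current);
            current = [];
            height = 0;
        }
        current.push(p);
        height += p.offsetHeight;
    });
    if (current.length) { pages.push(current); }

    function showPage(n) { // the injected nav controls would call this
        pages.forEach(function (page, i) {
            page.forEach(function (p) {
                p.style.display = (i === n) ? "" : "none";
            });
        });
    }
    showPage(0);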

Now, pagination of online content is a complicated subject, and I'm no UX guru, conversion wizard or SEO charlatan who can confidently spout chapter-and-verse about what you should or shouldn't do when building a content site. What I do know is that as a user, I don't like having to continually click-scroll-click-scroll-click-scroll to get through an article that could have easily been scrolled through. And I'm pretty sure I'm not alone.

The usual excuse for pagination is that it increases the number of page impressions or ads that can be shown, but I don't think that's valid here:

  • As there's no new page being loaded when I click 'next', a page hit in the traditional sense isn't occurring. I'm sure there'll be some JavaScript analytics code sending something back to the server when I navigate to another page, but surely this could be done by handling onscroll events, similar to how people such as Twitter (ironically enough) implement infinite scrolling.
  • The ad space on a long single page is pretty much the same as on multiple short pages, so the same number of ads could be run. Admittedly, after viewing the story on BusinessWeek a few times, it looks very much like only a small number of ads are being repeated on each sub-page of the story, and having the same ads repeatedly shown on a longer single page would look pretty dumb - but that seems to me to be a failure of their ad sales or syndication systems more than anything else.
  • Most sites that use pagination - Ars Technica is the whipping boy that usually comes to mind - do at least keep up the pretence of having to download new content (whether by a traditional page load, or via Ajax), but this is the first time I've been aware of a site using JavaScript to IMHO actively make things worse for end users.

Disparity in requestAnimationFrame behaviour between Chrome and Firefox

Posted by John Smith on

This is a brief prelude to another post I hope to make in a couple of days or so, once I've solved my problem to my satisfaction. In the meantime, here's a related curio that I hadn't seen documented online before I had to start digging...

requestAnimationFrame is something that's been pushed in the last year or two as a more efficient way of doing animations in JavaScript than the traditional technique of using setInterval. In particular, it aims to avoid having your machine burn CPU on executing animations in a tabbed page that's not currently visible.

At time of writing, only Firefox and Chrome seem to actually support this function, albeit with moz and webkit vendor prefixes. caniuse.com doesn't have too much information about future support in other browsers - it'll appear in IE10, but it's unclear about Safari or Opera. Certainly the Opera Next 12.0 I downloaded yesterday doesn't appear to have it.

Now, for the most part this isn't the end of the world, as there are published shims to implement a workable alternative using setInterval() or setTimeout(). Unfortunately, these will just churn away as normal in a background tab, whereas what I wanted was to see how things behaved differently in a background tab.
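For reference, those shims are generally along these lines (paraphrased from memory, so treat it as a sketch rather than canonical code):

    // Use the native/prefixed implementation where available, otherwise
    // fall back to setTimeout at roughly 60fps - which, as noted above,
    // keeps on firing even in a background tab.
    window.requestAnimFrame = (function () {
        return window.requestAnimationFrame       ||
               window.mozRequestAnimationFrame    ||
               window.webkitRequestAnimationFrame ||
               function (callback) {
                   window.setTimeout(callback, 1000 / 60);
               };
    })();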

It turns out that the two implementations we have so far differ in their behaviour. Chrome comes to a dead stop when a page is in a background tab, which is probably what you'd naively expect to happen. Firefox on the other hand does some gradual throttling - you'll get one frame in the first second of being backgrounded, then another after a further two seconds, then a further four seconds, eight seconds, sixteen seconds, etc.

I knocked up a very rough demo for this, so that you can see for yourself - take a look here, and see what happens when you right-click the link on the page and open it in another tab - the function called via requestAnimationFrame() updates the page title, so you can see how often it gets called from the text in the tab.
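The demo itself boils down to something like this (reconstructed rather than copy-pasted, using the shim pattern above):

    var frameCount = 0;
    function tick() {
        frameCount++;
        // The tab's title text shows how often this callback fires,
        // even when the page itself isn't visible.
        document.title = "Frame " + frameCount;
        window.requestAnimFrame(tick);
    }
    window.requestAnimFrame(tick);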

I'm not completely clear why Mozilla have implemented this the way they have - I've not dug out any official specs, but going by the year-old Chromium issue to add this functionality, I don't expect this behaviour to show up in Chrome/Chromium.

In the next post I'll elaborate on the problem I've been trying to solve - suffice to say, using requestAnimationFrame was a bit of a hacky way of trying to achieve something that I'd have thought should have been extremely straightforward...

Incomplete implementation of getElementsByClassName for SVG in IE9

Posted by John Smith on

Now that IE9 has been officially released, I thought it would be wise to check that this blog looked OK in it - it certainly didn't in older versions, but given the audience I'm writing for, I wasn't especially bothered about fixing things.

For the most part it seems acceptable - it's mainly the cosmetic CSS stuff like transitions and gradients that aren't working properly. However, skimming through my older posts, I noticed a glitch which is more subtle than you might expect...

In this post I have an SVG file with some perfunctory interactivity implemented via JavaScript. Some of the more mundane functionality works - hovering over a segment in the bar chart changes the segment colour to indicate that it is clickable. However, clicking has no effect, whereas this works fine in other modern browsers.

Investigation shows that IE9's implementation of getElementsByClassName isn't all there for SVG. Within an SVG file, you can do document.getElementsByClassName("foo") to get all the matching elements in the entire SVG document, and that works fine, but document.getElementById("foo").getElementsByClassName("bar") to get the matching child elements within an element doesn't work.
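To spell out the distinction - a reconstruction of the test, with hypothetical ids/classes, rather than the exact file:

    // Works in IE9: search the whole SVG document.
    var allFoos = document.getElementsByClassName("foo");

    // Fails in IE9's SVG DOM: search within a single element.
    var parent = document.getElementById("someGroup"); // hypothetical id
    var childBars = parent.getElementsByClassName("bar");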

There's a cut-down test file here. It works fine in Chrome, Safari, Firefox (4+) and Opera, but in IE9, its developer console reports this:

Screengrab of IE9 showing it reporting a JavaScript error on a test SVG file that uses getElementsByClassName

A similar test on an element in an HTML5 document does work fine in IE9, so it would seem to be an issue with their implementation of the SVG DOM. As this only affects SVG, and an uncommon usage of the method - most people seem to use it on the document, not an element - the problem is unlikely to affect a large number of people, but it's certainly a pain in the backside for those of us who like to play with SVG :-(

Experimental stacked bar chart in SVG with JavaScript interactivity

Posted by John Smith on

Stacked bar charts are quite nice for presenting a relatively large amount of data in a compact space, but they have a major failing, in that it's difficult to compare similar values in different columns, unless the value is the bottom-most one in the bars.

As an experiment, I've played with adding some basic interactivity to try to address this. Below is an SVG chart that initially appears to be nothing out of the ordinary.

However, you can click on the individual parts of each bar to align the baselines of the similar parts in the other bars, which then makes comparisons much easier. Clicking on a part twice reverts the alignment to the default state. This is all done via standard JavaScript and DOM event handlers - the main pain was that no browser supports CSS3 transitions in SVG, so I had to knock up some simple animation code. (I'm sure jQuery and similar libraries also facilitate this, but I wanted to have a completely standalone file.) EDIT: On reflection, I think I'm talking rubbish - CSS3 transitions can't be used on shape positions/sizes because those are attributes of the element itself, not a separate styling that's applied to it.
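That animation code is nothing fancy - conceptually something like the following sketch (illustrative, not the chart's exact code), which steps a rect's y attribute towards a target over a fixed number of frames:

    function animateY(rect, targetY, numFrames) {
        var startY = parseFloat(rect.getAttribute("y"));
        var step = (targetY - startY) / numFrames;
        var frame = 0;
        var timer = setInterval(function () {
            frame++;
            rect.setAttribute("y", startY + (step * frame));
            if (frame >= numFrames) {
                clearInterval(timer);
            }
        }, 40); // ~25fps
    }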

This is obviously super-basic; it might be good to do things like adjusting the Y-axis label numbering so that 0 is aligned with the selected element's baseline. However, I'm bored of this now, so I'm posting this up before I forget about it completely :-) The SVG code is currently entirely handcrafted, but now that all the basic concepts are in place, creating a graph from some datasource would be a fairly mundane exercise.

EDIT: I've just realized the SVG file has some console.log()s still in, which caused the animation not to work on Firefox 3.x - should now be fixed.

Animation of newsprint reel change in SVG and JavaScript

Posted by John Smith on

Just been mucking around with SVG and JavaScript, and knocked out this fairly basic animation...

Something that's a tad annoying is that if you include an SVG file via an <img> tag, it doesn't seem to execute any JavaScript that the file might contain - as such, I've had to resort to an <iframe> above. Whilst I can understand the security rationale for this, it's still mildly irritating - animated GIFs work, after all.

Just in case browser support does change, here's the SVG again, but this time pulled in as an <img>...

SVG animation of the reel change process in a newspaper printing plant

Anyway, this is all well and good, but I imagine that >99% of the people who might view this will have no idea of what this is supposed to be. In brief (?), it's a very simplified side-on view of how paper is consumed in a newspaper printing press.

To avoid having to literally "stop the press", each press tower can hold two (or less commonly, three) reels of paper. One of these reels will be "webbed up", and unwinds to feed its paper into the press. Note that these reels can weigh between 500kg and two tonnes, with - very approximately - 20 kilometres of paper on them, which takes around 20 minutes to be consumed when a press is running at full tilt (i.e. the paper is travelling at roughly 60km/h). (Bear in mind that I'm referring to the specifications of the presses and reels at a place I used to work, and other print sites are likely to differ considerably.)

As one reel is about to expire, the other moves into position to take over. There's some clever mechanical/electrical engineering magic involving double-sided sticky tape (no joke) and automated cutters, which I'm not going to mention further, as I don't know enough about it to speak with any degree of accuracy. The finished reel drops down into a bin below, for later removal to a waste area, and a new reel takes its place.

The SVG/JS code is pretty hacky, and could do with a lot of improvement; however this was primarily for me to have a proper play with SVG for the first time, rather than to produce something "professional".

About this blog

This blog (mostly) covers technology and software development.

Note: I've recently ported the content from my old blog hosted on Google App Engine using some custom code I wrote, to a static site built using Pelican. I've put in place various URL manipulation rules in the webserver config to try to support the old URLs, but it's likely that I've missed some (probably meta ones related to pagination or tagging), so apologies for any 404 errors that you get served.

RSS feed for this blog (RSS icon courtesy of www.feedicons.com)

About the author

I'm a software developer who's worked with a variety of platforms and technologies over the past couple of decades, but for the past 7 or so years I've focussed on web development. Whilst I've always nominally been a "full-stack" developer, I feel more attachment to the back-end side of things.

I'm a web developer for a London-based equities exchange. I've worked at organizations such as News Corporation, Google and BATS Global Markets. Projects I've been involved in have been covered in outlets such as The Guardian, The Telegraph, the Financial Times, The Register and TechCrunch.

Twitter | LinkedIn | GitHub | My CV | Mail
