John Smith's Blog

Ramblings (mostly) about technical stuff

Test post to verify migration to App Engine High-Replication Datastore worked OK

Posted by John Smith on

The App Engine console is a paragon of crapness at the best of times, but the functionality to do a migration from the old master/slave datastore is in a class of its own.

Hopefully I'll only have to do this once - at least for this blog. I've still got a bunch of other apps that in theory should be migrated, although a number of them don't actually use the datastore, so fingers crossed I can just leave them as-is.

Fixing slow emacs startup on Linux under VMWare

Posted by John Smith on

TL;DR: try adding the hostname to /etc/hosts

For various reasons I can't be bothered to go into now, on an intermittent basis I end up doing a lot of development on Linux VMs on Windows - mostly Fedora on VMWare. For a while I've noticed that emacs (my editor of choice for the past 20+ years) has been intermittently very slow to start up - 10 seconds plus, I'd guess - taking much longer than programs I'd assume to be far heftier, such as web browsers or GIMP. Although I've got a few modes and other customizations in place, I wasn't aware that my emacs configuration was anything unusual, and it ran fine on "native" hardware that was much slower or had less RAM than my VMs.

Anyway, I finally decided to get off my backside and find out what the problem was, and ideally fix it... Googling for variants of emacs vmware slow startup failed to find anything useful - which is my main motivation for writing this up.

The first thing I tried was some basic profiling of the startup, as per tip #5 found here. This wasn't of any help though: it told me that .emacs was loaded in 0 seconds, which might have been true as far as it went, but was nothing like the real-world time between entering emacs myfile.txt and an editor window popping up.
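If you just want a trustworthy wall-clock figure for comparison, one crude approach (my suggestion, not from the tip linked above) is to time a full start-and-quit cycle from the shell:

    # Time a complete emacs startup; --eval '(kill-emacs)' makes it
    # exit again as soon as initialization has finished
    time emacs myfile.txt --eval '(kill-emacs)'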

Next thing I tried was a tip from an Ubuntu forum, but again this had no noticeable effect. I was about to try compiling all my .el files to be .elcs, but then it struck me that it might be worth trying a different angle of attack...

It's been quite a while since I last used strace, or similar tools such as truss - and certainly I've rarely used it on programs I didn't write the original source code for. However, if it showed the process hanging on a particular system call, that would certainly go a long way towards understanding what was happening.
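I don't have my shell history to hand, but the invocation would have been something along these lines:

    # Run emacs under strace: -f follows any child processes, -tt adds
    # microsecond timestamps (handy for spotting where the time goes),
    # and -o writes the log to a file for inspection afterwards
    strace -f -tt -o /tmp/emacs.strace emacs myfile.txt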

Unfortunately, I don't have the output to cut'n'paste here, but it was immediately apparent that something related to the hostname could be the cause - the process was hanging on poll() calls, and a couple of lines further up in the logging were references to the host name. My initial suspicion was that it was something to do with hostname lookups.

A quick look at /etc/hosts showed that there were only entries for localhost and variations - there were none for the real host name. The VM was configured to acquire a dynamic IP via DHCP, but I decided to be a naughty boy and quickly hack the IP and hostname into /etc/hosts to test the theory. Lo and behold, emacs suddenly popped into life almost immediately!
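For anyone wanting to try the same thing, the hack is just an extra line in /etc/hosts along these lines (the IP and hostname here are made up for illustration):

    # Temporary hack: map the VM's DHCP-assigned address to its hostname
    192.168.111.130   fedora-vm.localdomain fedora-vm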

I can't believe that the VMWare DNS server is so slow as to be the cause of this problem, but now that I have the desired end result, I'm not wasting any more time on it. I tidied up my mess by reconfiguring the OS in the VM to have a static address rather than a dynamic one, and all is now fine.

In retrospect, this static/dynamic difference probably explains why I'd experienced the problem intermittently over time - most of my VMs are just for testing, and don't have much configuration from the default, whereas any VM that I use for "real work" will almost certainly have been given a static IP so that I can more easily access it from the host OS.

gl.enableVertexAttribArray() gotcha

Posted by John Smith on

Another post mainly in the hope that I might save someone else the wasted time and head-scratching I spent in fixing this...

I've continued playing with WebGL, and as well as experimenting with new (to me) functionality, in parallel I've started building up a library to tidy up the repetitious boilerplate that has been largely common to all my experiments to date. Until now, this had been a fairly mundane and trouble-free job, but I managed to cause myself a lot of pain and anguish last night, when some of my library code wasn't completely right.

I had a vertex buffer that contained 4 elements per vertex: a three-element (x,y,z) coordinate, and a single-element greyscale value. On the initial run-through, the coordinates were rendered correctly, but the greyscale value was not at all what I expected. Rather than coming out in a shade of grey, my pixels were being rendered as white.
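To be clear about the layout: each vertex is four consecutive floats, giving a 16-byte stride. A hypothetical sketch of how such a buffer gets filled (not my actual library code):

    // One vertex = 4 floats: x, y, z, greyscale - i.e. a 16-byte stride
    var vertexData = new Float32Array([
    //    x,    y,   z,  grey
       -1.0, -1.0, 0.0, 0.25,
        1.0, -1.0, 0.0, 0.50,
        0.0,  1.0, 0.0, 0.75
    ]);
    gl.bindBuffer(gl.ARRAY_BUFFER, bottomFace.vertexPositionBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.STATIC_DRAW);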

As far as I could tell, the code to push the vertex data through to OpenGL was fine, and not really any different to a number of earlier successful experiments:

    gl.bindBuffer(gl.ARRAY_BUFFER, bottomFace.vertexPositionBuffer);
    gl.vertexAttribPointer(shaderProgram.attributes["aVertexPosition"],
                           3,        // vec3 - (x, y, z)
                           gl.FLOAT,
                           false,
                           (4*4),    // total size of 'vertex-input' aka stride
                           0);       // offset within 'vertex-input'
    gl.vertexAttribPointer(shaderProgram.attributes["aVertexGreyness"],
                           1,        // float
                           gl.FLOAT,
                           false,
                           (4*4),    // total size of 'vertex-input' aka stride
                           12);      // offset within 'vertex-input'
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, bottomFace.vertexIndexBuffer);
    gl.drawElements(gl.TRIANGLES, bottomFace.vertexIndexBuffer.numItems,
                    gl.UNSIGNED_SHORT, 0);

I've had problems with this sort of code before, so started fiddling with the arguments to the second gl.vertexAttribPointer() to see if I could provoke it into doing something that would give some insight into what was going wrong, but it steadfastly refused to render anything differently.

One curious thing was that swapping the order of the two attribute declarations in the vertex shader caused, as expected, the attribute index values to flip between 0 and 1 - but this also seemed to be passed through to the shader, causing my pixels to render as either black or white.

Chrome's WebGL inspector didn't show anything unusual, and indicated that my vertex array had the expected values, so I was at a bit of a loss. Eventually I started hacking around with some older working code, to find out where things were going wrong, and stumbled across the cause.

It transpired that when I was initially getting my attribute index values, I wasn't also enabling them as vertex attribute arrays - or rather, this was happening for the (x,y,z) coordinate attribute (thanks to some legacy code that I thought wasn't getting called), but not for the greyscale attribute. Updating my attribute initialization code fixed the problem:

    for (var i=0; i<attrNames.length; i++) {
        attrDict[attrNames[i]] = gl.getAttribLocation(shaderProgram, attrNames[i]);
        gl.enableVertexAttribArray(attrDict[attrNames[i]]); // THIS LINE ADDED
    }

Not calling gl.enableVertexAttribArray() causes no errors, and I don't currently know of any reason why you wouldn't want an attribute enabled, but without this rather boring line, you get mysterious failures, as I unfortunately found out :-(

nVidia Linux v302 drivers and dual-head/rotated monitor setups

Posted by John Smith on

I recently upgraded to the 302.17 nVidia Linux drivers, which broke my dual-head setup somewhat and required a bit of manual reconfiguration to get things working properly again. Although all the information is available online, I thought it worthwhile writing a quick post in case someone else has a similar issue.

Firstly, a quick overview of my setup. I have a GeForce GTS450, although up until a month ago I was using a GeForce 315 with the same config, so I imagine the particular model of nVidia card is largely irrelevant. Out of this card I have two LCD monitors connected via DVI and HDMI:

  • on the left, an HP2045w rotated 90 degrees, so that the screen area is 1050 pixels across by 1680 pixels deep
  • on the right, a Dell S2209W, with a resolution of 1920x1080
I find this quite a nice configuration: generally Firefox fills the portrait screen, and various terminals or other apps are on the landscape screen on the right. Most webpages still run on 960-pixel grids, so I rarely lose anything in my browsing - other than ads ;-) - and if I need to use Firebug or similar dev tools, there's plenty of space to see both the page content and the tool.

In Linux/X11, the screens are set up separately, so I can switch between workspaces on just one screen, which I find very useful. The only downside is that it's not possible (AFAIK?) to drag a window from one screen to the other, but this is a hardship I've learnt to live with. (NB: When this machine dual-boots into Windows, I have a more standard "one big desktop", but as I rarely use Windows for "real work", I've never bothered to investigate what other options are available.)

Anyway, after upgrading the drivers and letting the nVidia installer update my xorg.conf, I was disappointed to find that my portrait monitor was showing Xfce at a 90 degree angle, and on the "wrong side" of the displays. (I don't normally let the nVidia installer alter my xorg.conf, but as this version of the drivers apparently fixes/improves RandR support, I thought it best to let the installer do its own thing.)

Now, these issues wouldn't be the end of the world, but it seems that nvidia-settings hasn't been upgraded to align with how this latest driver works. As such, it came down to a bit of manual editing of xorg.conf...

Firstly, fixing the relative positioning of the screens was a fairly straightforward edit to the ServerLayout section:

    Section "ServerLayout"
        Identifier     "Layout0"
        Screen      0  "Screen0" 1680 0
        Screen      1  "Screen1" LeftOf "Screen0"
        InputDevice    "Keyboard0" "CoreKeyboard"
        InputDevice    "Mouse0" "CorePointer"
        Option         "Xinerama" "0"
    EndSection

Looking at some older versions of the config, I reckon this broke because the enumerations of the two monitors got swapped around. Not quite sure how/why that happened, or how the card/driver is supposed to work out which is screen 0 and which is screen 1, but as it's a simple fix, it's not worth wasting much thought on.

More troublesome was getting my portrait screen displaying correctly. Firstly I verified that portrait display was still possible, via xrandr from the shell prompt:

    xrandr -d :0.1 -o left

(BTW, one minor thing to be wary of when doing this: with the configuration I had at the time, it created a "dead space" between the two screens, such that it was not possible to move the mouse from one screen to the other.)

With this proven, it was time to work out why my old configuration wasn't working. Previously I'd had this for my portrait screen:

    Section "Screen"
        ...
        Option "RandRRotation" "on"
        Option "Rotate" "CCW"
        ...
    EndSection

A look at the Xorg log in /var/log/ gave an indicator as to why this was no longer working:

    [   135.480] (WW) NVIDIA(0): Option "RandRRotation" is not used

This change in functionality is also mentioned in the release notes, albeit buried a long way down:

Removed the "RandRRotation" X configuration option. This enabled configurability of X screen rotation via RandR 1.1. Its functionality is replaced by the "Rotation" MetaMode attribute and RandR 1.2 rotation support. See the README for details.
I never actually came across that README file, but I managed to Google some online documentation that I imagine is probably identical content-wise. This led me to altering xorg.conf as follows:

    # Screen 1 = HP
    Section "Screen"
        ...
        Option "metamodes" "DFP-0: nvidia-auto-select { Rotation=left } +0+0"
        ...
    EndSection

A quick Ctrl-Alt-Backspace later, and I had my portrait monitor viewable without having to hold my head at a funny angle :-)

Unfortunately, this wasn't the end of the story, as a lot of the text on the portrait display was appearing much smaller than it should have. This appeared to be due to confusion over the DPI and physical size of the monitor screen:

[Screengrab of nvidia-settings window, showing two radically different values for vertical and horizontal DPI, and a width/height that doesn't make sense for a portrait display]

xdpyinfo showed similarly incorrect information for this screen:

    [john@hamburg X11]$ xdpyinfo | more
    ...
    screen #1:
      dimensions:    1050x1680 pixels (430x270 millimeters)
      resolution:    62x158 dots per inch
    ...

Now, I'd previously put a hacked DisplaySize value in the relevant Monitor section, as a means to make a bitmap font appear the way I wanted, but after fiddling with this value, it seems to no longer have any effect.

In the end, I found a reference to a DPI setting in the xorg.conf man page, and adding this to the Screen section fixed things nicely:

    # Screen 1 = HP
    Section "Screen"
        ...
        Option "DPI" "100 x 100"
        ...
    EndSection

And now I have things back working as they were before!

Unfortunately, none of the positive changes I'd hoped for have occurred - I'm still getting nasty tearing, even though V-sync appears to be enabled everywhere it can be. Maybe one day I'll get that resolved...

Hassles with array access in WebGL, and a couple of workarounds

Posted by John Smith on

I've been pottering around a bit more with WebGL for a personal project/experiment, and came across a hurdle that I wasn't expecting, involving arrays. Maybe my Google-fu was lacking, but this doesn't seem to be widely documented online - and I get the distinct impression that WebGL implementations might have changed over time, breaking some older example code - so here's my attempt to help anyone else who comes across the same issues. As I'm far from being an expert in WebGL or OpenGL, it's highly likely that some of the information below could be wrong or sub-optimal - contact me via Twitter if you spot anything that should be corrected.

I wanted to write my own implementation of Voronoi diagrams as a shader, with JavaScript doing little more than setting up some initial data and feeding it into the shader. Specifically, JavaScript would generate an array of random (x,y) coordinates, which the shader would then process and turn into a Voronoi diagram. I was aware that WebGL supported array types, but I hadn't used them at all in my limited experimentation, so rather than jumping straight in, I thought it would be a good idea to write a simple test script to verify that I understood how arrays work.

This turned out to be a wise decision, as what would be basic array access in pretty much any other language is not possible in WebGL. (Or OpenGL ES - I'm not sure where exactly the "blame" lies.) Take the following fragment shader code:

    uniform float uMyArray[16];
    ...
    void main(void) {
        int myIndex = int(mod(floor(gl_FragCoord.y), 16.0));
        float myVal = uMyArray[myIndex];
    ...

The intention is to obtain an arbitrary value from the array - simple enough, right?

However, in both Firefox 12 and Chromium 19, this code will fail at the shader-compilation stage, with the error message '[]': index expression must be constant. Although I found it hard to believe initially, this is more-or-less what it seems - you can't access an element in an array using a regular integer variable as the index.

My uninformed guess is that this is some sort of security/leak prevention mechanism, to stop you reading memory you shouldn't be able to, e.g. by making the index variable negative or bigger than 16 in this case. I haven't seen any tangible confirmation of this though.

Help is at hand though, as it turns out that "constant" isn't quite the same as you might expect from other languages. In particular, a counter in a for loop is considered constant, I guess because it is constant within the scope of the looped clause. (I haven't actually tried to alter the value of the counter within a loop though, so I could well be wrong.)

This led me to my first way of working around this limitation:

    int myIndex = int(mod(floor(gl_FragCoord.y), 16.0));
    float myVal;
    for (int i=0; i<16; i++) {
        if (i==myIndex) {
            myVal = uMyArray[i];
            break;
        }
    }

This seems to be pretty inefficient, conceptually at least. There's a crude working example here, if you want to see more. Note that it's not possible to optimize the for loop - that statement also suffers from similar constant restrictions.

Whilst Googling for ways to fix this, I also found a reference to another limitation, regarding the size of array that a WebGL implementation is guaranteed to support. I don't have a link to hand, but IIRC it was 128 elements - which isn't a problem for this simple test, but could well be a problem for my Voronoi diagram. The page in question suggested that using a texture would be a way to get around this.
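(If you want to know what your own implementation supports, I believe the relevant value can be queried at runtime - as I understand it, each element of a float array uniform occupies one vector slot:)

    // Query how many four-element vectors are available for fragment
    // shader uniforms on this implementation
    var maxVectors = gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS);
    console.log("Max fragment uniform vectors: " + maxVectors);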

As such, I've implemented another variant on this code, using a texture instead of an array. This involves quite a bit more effort, especially on the JavaScript side, but seems like it should be more efficient on the shader side. (NB: I haven't done any profiling, so I could be talking rubbish.)

Some example code can be found here, but the basic principle is to

  1. Create a 2D canvas in JavaScript that is N-pixels wide and 1-pixel deep.
  2. Create an ImageData object, and write your data into the bytes of that object. (This is relatively easy or hard depending on whether you have integer or float values, and how wide those values are.)
  3. Convert the canvas into a WebGL texture, and pass it to your shader as a Sampler2D uniform. Note you (probably) won't need any texture coordinates or the usual paraphernalia associated with working with textures.
  4. In your shader, extract the value from the texture by using the texture2D function, with vec2(myIndex/arraySize, 0.0) as the second argument.
  5. As the values in the returned vec4 are floats in the range (0.0, 1.0), you'll probably want to decode them into whatever the original number format was.

To go into more detail, here are the relevant bits of code from the previously linked example.

Creating our array of data

Firstly, we generate the data into a typed array:

    function createRandomValues(numVals) {
        // For now, keep to 4-byte values to match RGBA values in textures
        var buf = new ArrayBuffer(numVals * 4);
        var buf32 = new Uint32Array(buf);
        for (var i=0; i < numVals; i++) {
            buf32[i] = i * 16;
        }
        return buf32;
    }
    ...
    var randVals = createRandomValues(16);

This is nothing special, and is really only included for completeness, and for reference for anyone unfamiliar with typed arrays. Observant readers will note that the function name createRandomValues is somewhat of a misnomer ;-)

Creating the canvas

The array of data is then fed into a function which creates a canvas/ImageData object big enough to store it:

    function calculatePow2Needed(numBytes) {
        /** Return the length of an n*1 RGBA canvas needed to
         *  store numBytes. Returned value is a power of two */
        var numPixels = numBytes / 4;
        // Suspect this next check is superfluous as affected values
        // won't be powers of two.
        if (numPixels != Math.floor(numPixels)) {
            numPixels = Math.floor(numPixels+1);
        }
        var powerOfTwo = Math.log(numPixels) * Math.LOG2E;
        if (powerOfTwo != Math.floor(powerOfTwo)) {
            powerOfTwo = Math.floor(powerOfTwo + 1);
        }
        return Math.pow(2, powerOfTwo);
    }

    function createTexture(typedData) {
        /** Create a canvas/context containing a representation of the
         *  data in the supplied TypedArray. The canvas will be 1 pixel
         *  deep; it will be a sufficiently large power-of-two wide (although
         *  I think this isn't actually needed). */
        var numBytes = typedData.length * typedData.BYTES_PER_ELEMENT;
        var canvasWidth = calculatePow2Needed(numBytes);
        var cv = document.createElement("canvas");
        cv.width = canvasWidth;
        cv.height = 1;
        var c = cv.getContext("2d");
        var img = c.createImageData(cv.width, cv.height);
        var imgd = img.data;
        ...

    ...
    createTexture(randVals);

The above code creates a canvas sized to be a power-of-two wide, as WebGL has restrictions on textures that aren't sized that way. As it happens, those restrictions don't actually apply in the context we're using the texture in, so this is almost certainly unnecessary.

Storing the array data in the canvas

The middle section of createTexture() is fairly straightforward, although a more real-world use would involve a bit more effort:

    var offset = 0;
    // Nasty hack - this currently only supports uint8 values
    // in a Uint32Array. Should be easy to extend to larger unsigned
    // ints; floats are a bit more painful. (Bear in mind that you'll
    // need to write a decoder in your shader.)
    for (offset=0; offset<typedData.length; offset++) {
        imgd[offset*4] = typedData[offset];
        imgd[(offset*4)+1] = 0;
        imgd[(offset*4)+2] = 0;
        imgd[(offset*4)+3] = 0;
    }
    // Fill the rest with zeroes (not strictly necessary, especially
    // as we could probably get away with a non-power-of-two width for
    // this type of shader use)
    for (offset=typedData.length*4; offset < canvasWidth*4; offset++) {
        imgd[offset] = 0;
    }

I made my life easy by just having 8-bit integer values stored in a 32-bit long, as this maps nicely to the 4 bytes used in an (R,G,B,A) canvas. Storing bigger integer or float values, or packing these 8-bit values, could be done with a few more lines. One potential gotcha is that the typed array values are represented in whatever the host machine's native architecture dictates, so simply dumping byte values directly into the ImageData object could have "interesting" effects on alternative hardware platforms.

Convert the canvas into a texture

createTexture() concludes by converting the canvas/ImageData into a texture:

        // Convert to WebGL texture
        myTexture = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_2D, myTexture);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
        /* These params let the data through (seemingly) unmolested - via
         * http://www.khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences#Non-Power_of_Two_Texture_Support
         */
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
        gl.bindTexture(gl.TEXTURE_2D, null); // 'clear' texture status
    }

The main lines to note are the gl.texParameteri() calls, which tell WebGL not to do any of the usual texture processing such as mipmapping. As we want to use the original fake (R,G,B,A) values unmolested, the last thing we want is for OpenGL to try to be helpful and feed our shader code some modified version of those values.

EDIT 2012/07/10: I found the example parameter code above wasn't enough for another program I was working on. Adding an additional setting for gl.TEXTURE_MAG_FILTER fixed it:

    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

I suspect gl.NEAREST might be generally better than gl.LINEAR for this sort of thing - however, I haven't done thorough tests to properly evaluate this.

Extract the data from the texture

Just for completeness, here's the mundane code to pass the texture from JavaScript to the shader:

    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, myTexture);
    gl.uniform1i(gl.getUniformLocation(shaderProgram, "uSampler"), 0);

Of more interest is the shader code to reference the "array element" from our pseudo-texture:

    uniform sampler2D uSampler;
    ...
    void main(void) {
        float myIndex = mod(floor(gl_FragCoord.y), 16.0); // code for a 16x1 texture...
        vec4 fourBytes = texture2D(uSampler, vec2(myIndex/16.0, 0.0));
    ...

This now gives us four floats which we can convert into something more usable. As we are (ab)using the RGBA space, remember that these four values are in the range 0.0 to 1.0.

Decode the data

Now, this is where I cheated somewhat in my test code, as I just used the RGBA value pretty much as-was to paint a pixel:

    gl_FragColor = vec4(fourBytes.r, fourBytes.g, fourBytes.b, 1.0);

To turn it back into an integer, I'd need something along the lines of:

    int myInt8 = int(fourBytes.r * 255.0);
    int myInt16 = int(fourBytes.r * 65536.0) + int(fourBytes.g * 255.0);

The above code is untested, and I suspect 65536 might be the wrong value to multiply by - it could be (255*255) or (256*255). In a similar vein, a more intelligent packing system could have got 4 different 8-bit values into a single texel, pulling out the relevant (R,G,B,A) value as appropriate.
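For illustration, that packed variant might look something like this in the shader - another untested sketch, assuming 16 byte-sized values packed four-per-texel into a 4x1 texture:

    float texelIndex = floor(myIndex / 4.0);
    // Sample at the texel centre, hence the +0.5
    vec4 texel = texture2D(uSampler, vec2((texelIndex + 0.5) / 4.0, 0.0));
    float selector = mod(myIndex, 4.0);
    float byteVal;
    if (selector < 0.5)      byteVal = texel.r;
    else if (selector < 1.5) byteVal = texel.g;
    else if (selector < 2.5) byteVal = texel.b;
    else                     byteVal = texel.a;
    int myInt8 = int(byteVal * 255.0);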

I wouldn't be surprised if some of the details above could be improved, as this seems a very long-winded way of doing something that seems like it should be completely trivial. However, as it stands, it should at least get anyone suffering the same issues as me moving forward.

Now, to get back to the code I originally wanted to write...

Reinvented the wheel and built my own IP address checker

Posted by John Smith on

I've recently started using a VPN for the first time in years, and was using WhatIsMyIP to sanity-check that I was indeed seeing the net via a different IP than the one provided by my ISP. However, there were a few things I wasn't too happy about:

  • I was concerned that my repeated queries to that site might be detected as abusive.
  • Alternatively, I might be seeing cached results from an earlier query on a different network setup.
  • As someone happiest using the Unix command line, neither switching to a browser window nor using curl and parsing the HTML output was ideal.

So, I spent a few hours knocking up my own variation of this type of service, doubtless the gazillionth implementation clogging up the internet, which you can find here. While it's still pretty basic, there are a couple of features that I haven't noticed in other implementations:

  • A Geo-IP lookup is done, to identify the originating country, region, city and latitude and longitude. This data is obtained via a Google API, so it's probably as accurate as these things get - which isn't very accurate, at least at the lat/long level. (The main motivation for adding this functionality was to help analyse if my VPN can be abused to break region restrictions on sites like Hulu ;-)
  • To make things more convenient for non-browser uses, multiple output formats are supported (HTML, plain text, CSV, XML and JSON), which can be specified either by an old-school format=whatever CGI argument, or a more RESTful way using the HTTP Accept header.

Here are a couple of examples of usage:

    [john@hamburg ~]$ curl -H "Accept: text/plain" "http://report-ip.appspot.com"
    IP Address: x.x.x.x
    Country: GB
    Region: eng
    City: london
    Lat/Long: 51.513330,-0.088947
    Accept: text/plain
    Content-Type: ; charset="utf-8"
    Host: report-ip.appspot.com
    User-Agent: curl/7.21.3 (x86_64-redhat-linux-gnu) libcurl/7.21.3 NSS/3.13.1.0 zlib/1.2.5 libidn/1.19 libssh2/1.2.7

    [john@hamburg ~]$ curl "http://report-ip.appspot.com/?format=json"
    {
      "ipAddress": "x.x.x.x",
      "country": "GB",
      "region": "eng",
      "city": "london",
      "latLong": "51.513330,-0.088947",
      "headers": {
        "Accept": "*/*",
        "Content-Type": "; charset=\"utf-8\"",
        "Host": "report-ip.appspot.com",
        "User-Agent": "curl/7.21.3 (x86_64-redhat-linux-gnu) libcurl/7.21.3 NSS/3.13.1.0 zlib/1.2.5 libidn/1.19 libssh2/1.2.7"
      }
    }

I've created a project on GitHub, so you can see how minimal the underlying Python code is. The README has some notes about what extra stuff I might add in at some point, in the event I can be bothered.

As the live app is just running off an unbilled App Engine instance, it won't take much traffic before hitting the free quota limits. As such, in the unlikely event that someone out there wants to make use of this, you might be better off grabbing the code from the repo and deploying it to your own App Engine instance.

Parallax starfield and texture mask effect in WebGL

Posted by John Smith on

I've been pottering around a bit with WebGL lately, really just playing with 2D stuff, and this is probably the most notable thing I've hacked up so far.

[Screengrab from a WebGL demo, showing the text "Hello World" using a starfield effect]

It's vaguely inspired by stuff like the Sid Sutton Doctor Who titles from the early '80s and Numb Res (especially the bit with letters forming around 2'30" in) - but it's incredibly crude compared to either of those.

If you care to view source on that page and/or the JavaScript that drives it, it should hopefully be fairly easy to follow, but in summary:

  • There are no true 3D objects or particles, instead the code sets up 6 triangles that make up 3 rectangles, each of which fill the full WebGL canvas area. These form 3 parallax layers, which form the starfield(s) - with only minor tweaks, the number of layers could be increased, which would probably improve the believability of the effect a fair bit.
  • The stars are rendered entirely in the fragment shader, using a pseudo-random algorithm based on the X and Y pixel position and the frame number (there's a minimal sketch of the idea just after this list). The increments to the frame number are what give the scrolling effect, and a speed value supplied via the vertex shader is what causes each layer to scroll at a different rate.
  • The "Hello world" text is rendered into a hidden 2D canvas object and converted to a WebGL texture at startup. The fragment shader reads the texture, and if that particular pixel was set on the texture, increases the probability of a star being shown.
  • Things would probably look better if I added a bit of pseudo-randomness to make the stars twinkle. Unfortunately I was getting a bit bored with the whole thing by the point it came to do this part ;-)
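Here's a minimal sketch of the kind of hash-based star rendering I mean - deliberately simplified, and not the demo's actual code (uFrameNumber and uSpeed are illustrative names):

    precision mediump float;

    uniform float uFrameNumber;
    uniform float uSpeed; // in the real thing, per-layer and fed via the vertex shader

    // Cheap, repeatable pseudo-random value for a given position
    float pseudoRandom(vec2 pos) {
        return fract(sin(dot(pos, vec2(12.9898, 78.233))) * 43758.5453);
    }

    void main(void) {
        // Scroll the coordinate space by the frame number, then threshold
        // a hash of the (integer) pixel position to decide star/no-star
        vec2 scrolled = floor(gl_FragCoord.xy + vec2(0.0, uFrameNumber * uSpeed));
        float star = pseudoRandom(scrolled) > 0.998 ? 1.0 : 0.0;
        gl_FragColor = vec4(vec3(star), 1.0);
    }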

Some observations from my ill-informed stumbling around in a technology I don't really understand:

  • Performance seems fine on two of the three machines I've tested it on so far - a dual-boot Linux/Win7 box with an AMD hexacore and nVidia GTS450, and a 2008 white MacBook, are quite happy; a Celeron netbook with integrated graphics is understandably less so, although it still churns out an acceptable framerate.
  • Curiously, the dual-boot box reports consuming way more CPU when running Firefox or Chromium under Linux compared to Windows 7. I'm not quite sure why, as CPU usage should be pretty minimal - all the "clever" stuff should be happening on the graphics card, and all the CPU should be doing each frame is updating a counter and getting the graphics card to re-render based on that updated counter. (Both operating systems have fairly up-to-date nVidia drivers, with the browsers configured to use "proper" OpenGL as opposed to ANGLE or suchlike.) I haven't yet investigated the cause - it could be some quirky difference in the way CPU usage is reported.
  • I reduced Chromium CPU usage from mid-30s to mid-20s (as measured on the Linux box) by moving code out of the main animation loop that didn't need to be there - stuff that defined the geometry, pushed the texture through, etc.
  • I still need to find a way to mentally keep track of the various different coordinate systems in use - vertex shader uses -1.0 to 1.0, fragment shader uses 0.0 to 1.0, plus also remembering the real pixels. (And not to mention that 2D canvas is inverted compared to WebGL canvas!)
  • It feels a bit odd to me that *GL makes it easier to use floating-point rather than integer values. I guess I'm still stuck in an '80s mentality of writing stuff for the 6502 (which didn't really have 16-bit integers, never mind floating point) and stuff like Bresenham's algorithm. (Ironically enough, Bresenham's algorithm was a topic of discussion just last night, in the talk about Raspberry Pi at this month's Hacker News London event.)
  • In a similar vein, I was a tad surprised to find minimal support in shaders for doing bit manipulation stuff like you see in "pure CPU" Satori demos. The same goes for the lack of a proper random number generator, although in the context of this experiment, my controllable pseudo-random numbers were probably a better fit. (I get the impression that this functionality is available in newer versions of regular OpenGL, just not the variant supported by WebGL?)

In praise of help() in Python's REPL

Posted by John Smith on

For various reasons, I'm doing a bit of JavaScript/CoffeeScript work at the moment, which involves using some functions in the core libraries that I'd not really used in the past. A minor aspect of this involves logarithmic values, and I was a bit surprised and then disappointed to find that JavaScript's Math.log() isn't as flexible as its Python near-namesake math.log():

    # Python
    >>> import math
    >>> math.log(256, 2)   # I want the result for base 2
    8.0

versus:

    # CoffeeScript
    coffee> Math.log(256, 2)
    5.545177444479562

Now, it's probably unreasonable of me to expect the JavaScript version of this function to behave exactly the same as the Python version, especially as the (presumably) underlying C function only takes a single value argument. (Although it might have been nice to get a warning about the ignored second argument, rather than silence...)
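For the record, the standard JavaScript workaround is simple enough - divide by the log of the base you want:

    // Math.log() is natural log only, so derive other bases manually
    function logBase(x, base) {
        return Math.log(x) / Math.log(base);
    }
    logBase(256, 2); // => 8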

On the other hand though, it reminded me of how much more civilized Python is compared to JavaScript. When I'm hacking around, I almost always have a spare window open with a running REPL process, that allows me to quickly check and test stuff, and can very easily pull up the docs via the help() function if I need further info. In contrast, to do the same in JavaScript I have to move over to a browser window and search for info on sites like MDN, or resort to my trusty copies of The Definitive Guide, neither of which are anywhere near as convenient.
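For anyone who hasn't used it, the output is along these lines (the exact wording varies between Python versions):

    >>> import math
    >>> help(math.log)
    Help on built-in function log in module math:

    log(...)
        log(x[, base])

        Return the logarithm of x to the given base.
        If the base not specified, returns the natural logarithm (base e) of x.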

After a brief bit of Googling and a plea for help on Twitter, I was unable to find any equivalent to this functionality in the JavaScript world - and let's face it, help() is pretty basic stuff when compared to what the likes of IPython and bpython offer the fortunate Python developer.

I'd love to be corrected on this, and be told about some nice CLI-tool for JavaScript that can help me out. (But not some overblown IDE that would require me to radically change my established development environment, I hasten to add!) I'm not expecting this to happen though - Python's help() relies heavily on docstrings, and I'm not aware that anything such as JsDoc is in common usage in the JavaScript community?

Enhanced version of Python's SimpleHTTPServer that supports HTTP Range

Posted by John Smith on

I've just uploaded a small personal project to GitHub here. It's basically a very crude webserver that allows me to share audio files on my Linux boxes to my iOS devices, using Mobile Safari.

The main reason for noting this is that the code may be of more general interest, because it implements an improved version of Python stdlib's SimpleHTTPServer module, adding basic support for the Range header in HTTP requests - something Mobile Safari needs for some MP3 files.

During early development, I found that some MP3 files would refuse to play in Mobile Safari when served by SimpleHTTPServer. The same files would play fine if served by Apache. Because debugging mobile web browsers is a PITA (caveat: I haven't kept up with the latest-and-greatest in this area), I ended up resorting to Wireshark to see what was going on.

Wireshark indicated that Mobile Safari would request chunks of the MP3 file (initially just the first couple of bytes), but SimpleHTTPServer would always serve the entire file, because it never checked for the existence of the Range header. On certain files, this wouldn't bother Mobile Safari, but on others it would cause the audio player widget to show an unhelpful generic error.
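To illustrate (reconstructed from memory rather than pasted from a real capture), a Range-aware client sends and expects something like this:

    GET /some-song.mp3 HTTP/1.1
    Host: 192.168.0.10:12345
    Range: bytes=0-1

    HTTP/1.1 206 Partial Content
    Content-Type: audio/mpeg
    Content-Range: bytes 0-1/4287918
    Content-Length: 2

SimpleHTTPServer instead replies with a 200 and the whole file, which is what upset Mobile Safari.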

Once I understood what the problem was, I found that I'm not the first person to get caught out by this, and that Apple themselves state that servers need to support Range to keep Mobile Safari happy.

To solve the problem, I wrote a new class, HTTPRangeRequestHandler, that is a direct replacement for SimpleHTTPServer's handler. In my app code proper, I then (try to) pull in my enhanced handler as follows:

    try:
        import HTTPRangeServer
        inherited_server = HTTPRangeServer.HTTPRangeRequestHandler
    except ImportError:
        logging.warning("Unable to import HTTPRangeServer, using stdlib's " +
                        "SimpleHTTPServer")
        import SimpleHTTPServer
        inherited_server = SimpleHTTPServer.SimpleHTTPRequestHandler
    ...
    class MySpecificHandler(inherited_server):
        ...
    def main(port=12345):
        Handler = MySpecificHandler
        httpd = SocketServer.TCPServer(("", port), Handler)

Arguably it might be better for the code to die if HTTPRangeServer cannot be imported, but as the stdlib SimpleHTTPServer is good enough for many browser clients, it doesn't seem too unreasonable to use it as a fallback.
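A quick way to check from the shell whether a server honours ranges (the file name here is hypothetical; the port matches the example above):

    # Ask for the first two bytes only and dump the response headers;
    # a Range-aware server should reply "206 Partial Content"
    curl -s -D - -o /dev/null -H "Range: bytes=0-1" http://localhost:12345/some-song.mp3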

This code is currently OK for most uses, but it doesn't yet support all variations of the Range header as described at the aforementioned W3C spec page. It does, however, support all the request variants I've seen in my - admittedly very cursory - browser testing, and any request that it can't parse will instead get the full file served, which is the same behaviour as SimpleHTTPServer.

The musicsharer application that's built on this class is even rougher, but as it's really just intended for my own personal use, you shouldn't hold your breath waiting for me to tart it up...

What's the best way of including SVGs in a responsive web page?

Posted by John Smith on

TL;DR: <object> seems the best bet - although Safari 5.1 has issues compared to the other browsers. Second choice is having the SVG inline in the HTML, but that has issues in WebKit and IE.

As a diversion from my more usual diet of Python, I've spent a fair bit of time over the past week or so revisiting SVG. My previous experiments have usually been done using fixed sizes, but I've always wanted to do something that fits in better with what these days is called "responsive design" - especially given that these are supposed to be scalable vector graphics.

For an example of the sort of thing I've been aiming for, take a look at the main chart on a Google Finance page. This Flash chart resizes horizontally as you change the size of the browser window. (NB: the chart makes extra data visible as you make the browser window wider, which isn't exactly what I want, but you should get the idea.)

Unless I've been particularly boneheaded about the way I've investigated this, this isn't as straightforward a problem as it first seemed. If you put a regular bitmap image in an HTML page with something like <img src="whatever.png" width="100%" />, the image will scale as you'd hope when the browser window is resized. This isn't necessarily the case with SVG.

As outlined in the W3C docs, there are five ways that you can pull SVGs into a page in a modern browser:

  • <embed>
  • <frame> / <iframe>
  • <object>
  • <img>
  • Inline <svg>
(You can also create an SVG via DOM function calls, but I'm not particularly interested in that approach right now.)

I've built tests for each of these, and run them through the latest versions of the five main browsers. This obviously ignores a lot of issues with older browsers and mobile browsers, many of which don't even support SVG at all, but as it turns out, the "big 5" are enough to worry about on their own :-(

In the following sections, when I refer to a particular browser, the tests were done in the following versions, which (AFAIK) are the current ones as of early April 2012:

  • Firefox 10
  • Opera 11.62
  • Chromium 17 or 19 (I didn't notice any difference between the two)
  • Internet Explorer 9
  • Safari 5.1
Where relevant, issues are illustrated with screengrabs from the Windows version of a browser, but much of my testing was done with Linux versions - except in the case of IE and Safari. I also tested whether JavaScript functionality worked - both within the SVG itself, and from the enclosing document trying to manipulate the SVG.

Just for clarity's sake: all of this messing around is (probably?) only needed if you want an SVG to be scalable within your web page. If you're happy for it to be a fixed size, then you shouldn't have to worry about any of the following stuff.

<embed>

Test link: http://js-test.appspot.com/svg/embedscaling.html

The image gets scaled correctly in Firefox, Chromium and IE. The image does not get scaled correctly in Opera or Safari.

[Screengrab of SVGs using the embed tag in Opera 11.62]
[Screengrab of SVGs using the embed tag in Safari 5.1]

<iframe>

Test link: http://js-test.appspot.com/svg/iframescaling.html

(I've assumed <frame>s behave the same; I couldn't be bothered to write a separate test for them.)

Only Chromium rendered the page completely as desired.

[Screengrab of SVGs using the iframe tag in Chromium 19]

IE and Safari both failed to scale the images properly, but were able to modify the SVGs from the enclosing document's JavaScript.

[Screengrab of SVGs using the iframe tag in IE9]
[Screengrab of SVGs using the iframe tag in Safari 5.1]

Firefox and Opera failed to scale the images, or to modify them via the JavaScript in the enclosing document. I'm not sure if the JS issue is down to some DOM API difference and/or security problem - but as the scaling is broken, I couldn't be bothered to investigate further.

[Screengrab of SVGs using the iframe tag in Firefox 10]
[Screengrab of SVGs using the iframe tag in Opera 11.62]

(All browsers did at least run the JavaScript contained within the SVG files.)

<object>

Test link: http://js-test.appspot.com/svg/objectscaling.html

Only Safari lets the side down, by failing to scale the SVGs. All other browsers work as desired.

[Screengrab of SVGs using the object tag in Safari 5.1]
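For reference, the markup being tested is along these lines (simplified from the test page):

    <object type="image/svg+xml" data="whatever.svg" width="100%"></object>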

<img>

Test link: http://js-test.appspot.com/svg/imgscaling.html

If you have any JavaScript-based interactivity, forget about using <img> tags - the JS in the SVGs won't be run, and the JS in the document doesn't do anything either. WebKit-based browsers also have weird issues with additional padding and squashed images. You might hope that the preserveAspectRatio attribute would be able to solve that, but I was unable to find any value which fixed things.

[Screengrab of SVGs using the img tag in Chromium 19]
[Screengrab of SVGs using the img tag in Safari 5.1]

Inline <svg>

Test link: http://js-test.appspot.com/svg/inlinexhtmlscaling.html

This method is what the BBC uses for the position chart in its football tables, which is the highest profile use of SVG in a mainstream site that I know of.

Scaling works in all browsers, except IE.

[Screengrab of SVGs using inline SVG in IE9]

However, WebKit browsers suffer from excessive vertical padding - it seems that WebKit assumes the height of the image is the same as the browser window, rather than deriving it from the viewBox attribute in the <svg>. A slightly messy fix is to manually alter the height of the SVG elements after the page has loaded - this wasn't incorporated into this particular test, but an example from another test is here, and other people have documented similar workarounds. There are some open bugs on the WebKit tracker that might be related, here and here.

[Screengrab of SVGs using inline SVG in Chromium 19]
[Screengrab of SVGs using inline SVG in Safari 5.1]
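The height-fixing workaround amounts to something like this (a sketch of the approach, not the exact code from the linked test):

    // After load, give each inline SVG an explicit height derived from
    // its rendered width and its viewBox aspect ratio, so WebKit stops
    // assuming a window-sized height
    window.addEventListener("load", function() {
        var svgs = document.getElementsByTagName("svg");
        for (var i = 0; i < svgs.length; i++) {
            var vb = svgs[i].viewBox.baseVal;
            if (!vb || vb.width === 0) { continue; } // no viewBox to work from
            var width = svgs[i].getBoundingClientRect().width;
            svgs[i].style.height = (width * vb.height / vb.width) + "px";
        }
    });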

Whilst doing some earlier tests in this area, I also found a couple of bugs in Opera where

  • An HTML5 page wouldn't properly render, and would never trigger the load event - but an effectively identical XHTML page was fine. Warning: this bug also causes Opera to consume 100% CPU on the processor it is running on. [Screengrab showing bug in Opera in HTML5 page with embedded SVGs]
  • If a page had multiple copies of the same SVG pulled in via the <img> tag, then if the page was reloaded, most of the duplicates would not appear. [Screengrab showing bug in Opera after a page with duplicate SVGs is reloaded]
Both of these have been reported to Opera via the tool built into their browser.

Summary

As can be seen, none of the five methods available is 100% foolproof. It seems to me that the best bet is <object>, as it works fine in all browsers except Safari, without the need for JavaScript hacks. Second place goes to embedding the SVG inline in the HTML, which entails a simple hack for WebKit browsers - but unfortunately is still broken in IE.


About this blog

This blog (mostly) covers technology and software development.

Note: I've recently ported the content from my old blog, which was hosted on Google App Engine, to a static site built using Pelican, using some custom code I wrote. I've put various URL manipulation rules in place in the webserver config to try to support the old URLs, but it's likely that I've missed some (probably meta ones related to pagination or tagging), so apologies for any 404 errors you get served.


About the author

I'm a software developer who's worked with a variety of platforms and technologies over the past couple of decades, but for the past 7 or so years I've focussed on web development. Whilst I've always nominally been a "full-stack" developer, I feel more attachment to the back-end side of things.

I'm a web developer for a London-based equities exchange. I've worked at organizations such as News Corporation, Google and BATS Global Markets. Projects I've been involved in have been covered in outlets such as The Guardian, The Telegraph, the Financial Times, The Register and TechCrunch.
