It's vaguely inspired by stuff like the Sid Sutton Doctor Who titles from the early '80s and Numb Res (especially the bit with letters forming around 2'30" in) - but it's incredibly crude compared to either of those.
- There are no true 3D objects or particles; instead, the code sets up 6 triangles that make up 3 rectangles, each of which fills the full WebGL canvas area. These form 3 parallax layers, which make up the starfield(s) - with only minor tweaks, the number of layers could be increased, which would probably improve the believability of the effect a fair bit. (A rough sketch of the geometry setup follows this list.)
- The stars are rendered entirely in the fragment shader, using a pseudo-random algorithm based on the X and Y pixel position and the frame number. Incrementing the frame number is what gives the scrolling effect, and a speed value supplied via the vertex shader is what causes each layer to scroll at a different rate. (See the shader sketch after this list.)
- The "Hello world" text is rendered into a hidden 2D canvas object and converted to a WebGL texture at startup. The fragment shader reads the texture, and if that particular pixel was set on the texture, increases the probability of a star being shown.
- Things would probably look better if I added a bit of pseudo-randomness to make the stars twinkle. Unfortunately I was getting a bit bored with the whole thing by the time it came to do this part ;-)
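
To make the layer setup concrete, here's a minimal sketch of how the 6 triangles / 3 full-canvas rectangles might be built, with a per-vertex speed attribute feeding the parallax. It assumes `gl` is a WebGL context and `program` an already-compiled program; all the names and speed values are illustrative, not taken from the actual demo source.

```js
// Sketch of the layer geometry: 6 triangles = 3 full-canvas rectangles, with
// a per-vertex speed attribute driving the parallax. Names are illustrative.
const vertexSrc = `
  attribute vec2 a_position;   // clip-space corner of a layer rectangle
  attribute float a_speed;     // scroll rate for this layer
  varying float v_speed;
  void main() {
    v_speed = a_speed;
    gl_Position = vec4(a_position, 0.0, 1.0);
  }
`;

const LAYER_COUNT = 3;            // bumping this up should improve the effect
const positions = [];
const speeds = [];
for (let i = 0; i < LAYER_COUNT; i++) {
  positions.push(
    -1, -1,   1, -1,  -1,  1,     // triangle 1 of the full-canvas rectangle
    -1,  1,   1, -1,   1,  1      // triangle 2
  );
  for (let v = 0; v < 6; v++) speeds.push(0.25 * (i + 1));
}

function bindAttribute(name, data, size) {
  const buf = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buf);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data), gl.STATIC_DRAW);
  const loc = gl.getAttribLocation(program, name);
  gl.enableVertexAttribArray(loc);
  gl.vertexAttribPointer(loc, size, gl.FLOAT, false, 0, 0);
}
bindAttribute('a_position', positions, 2);
bindAttribute('a_speed', speeds, 1);
```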
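And a sketch of a fragment shader in the spirit described above. The `fract(sin(dot(...)))` hash is a common GLSL trick standing in for whatever pseudo-random function the demo actually uses; the uniform/varying names are made up to match the geometry sketch.

```js
// Sketch of the star-rendering fragment shader; drawn with additive blending
// (gl.blendFunc(gl.ONE, gl.ONE)) so the 3 layers combine on screen.
const fragmentSrc = `
  precision mediump float;
  uniform float u_frame;        // frame counter, bumped by the JS loop
  varying float v_speed;        // per-layer scroll rate from the vertex shader

  // Repeatable pseudo-random value in [0, 1) for a given 2D seed.
  float hash(vec2 seed) {
    return fract(sin(dot(seed, vec2(12.9898, 78.233))) * 43758.5453);
  }

  void main() {
    // Offsetting the seed by frame * speed scrolls the layer; flooring keeps
    // each star on the same pixel until the offset crosses a whole pixel.
    vec2 cell = floor(gl_FragCoord.xy + vec2(0.0, u_frame * v_speed));
    float star = step(0.998, hash(cell));   // only rare cells become stars
    gl_FragColor = vec4(vec3(star), 1.0);
  }
`;
```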
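Finally, the text-to-texture step might look something like this; canvas size, font, and names are again illustrative rather than the demo's actual values.

```js
// Sketch of the text-to-texture step: draw "Hello world" on a hidden 2D
// canvas, then upload it as a WebGL texture once at startup.
const textCanvas = document.createElement('canvas');
textCanvas.width = 512;
textCanvas.height = 128;
const ctx = textCanvas.getContext('2d');
ctx.fillStyle = 'black';
ctx.fillRect(0, 0, textCanvas.width, textCanvas.height);
ctx.fillStyle = 'white';
ctx.font = '96px sans-serif';
ctx.fillText('Hello world', 10, 100);

const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
// Flip Y on upload: 2D canvas is top-left origin, WebGL textures bottom-left.
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textCanvas);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
// In the fragment shader, texture2D(u_text, uv).r > 0.5 can then raise the
// star-probability threshold for pixels inside the lettering.
```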
Some observations from my ill-informed stumbling around in a technology I don't really understand:
- Performance seems fine on two of the three machines I've tested it on so far - a dual-boot Linux/Win7 box with an AMD hexacore and an nVidia GTS450, and a 2008 white MacBook, are quite happy; a Celeron netbook with integrated graphics is understandably less so, although it still churns out an acceptable framerate.
- Curiously, the dual-boot box reports consuming way more CPU when running Firefox or Chromium under Linux than under Windows 7. I'm not quite sure why, as CPU usage should be pretty minimal - all the "clever" stuff should be happening on the graphics card, and all the CPU should be doing each frame is updating a counter and getting the graphics card to re-render based on that updated counter. (Both operating systems have fairly up-to-date nVidia drivers, with the browsers configured to use "proper" OpenGL as opposed to ANGLE or suchlike.) I haven't yet investigated the cause - it could be some quirky difference in the way CPU usage is reported.
- I reduced Chromium's CPU usage from the mid-30s to the mid-20s (percent, as measured on the Linux box) by moving code out of the main animation loop that didn't need to be there - stuff that defined the geometry, pushed the texture through, etc. (A sketch of the slimmed-down loop follows this list.)
- I still need to find a way to mentally keep track of the various coordinate systems in use - vertex shader positions run from -1.0 to 1.0, texture coordinates in the fragment shader run from 0.0 to 1.0, and on top of that there are the real pixels. (Not to mention that the 2D canvas's Y axis is inverted compared to WebGL's!) A small conversion cheat-sheet follows this list.
- It feels a bit odd to me that *GL makes it easier to use floating-point rather than integer values. I guess I'm still stuck in an '80s mentality of writing stuff for the 6502 (which didn't really have 16-bit integers, never mind floating point) and stuff like Bresenham's algorithm. (Ironically enough, Bresenham's algorithm was a topic of discussion just last night, in the talk about Raspberry Pi at this month's Hacker News London event.)
- In a similar vein, I was a tad surprised to find minimal support in shaders for doing bit manipulation stuff like you see in "pure CPU" Satori demos (a mod()-based workaround is sketched below). The same goes for the lack of a proper random number generator, although in the context of this experiment, my controllable pseudo-random numbers were probably a better fit. (I get the impression that this functionality is available in newer versions of regular OpenGL, just not in the variant supported by WebGL?)
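
For what it's worth, the restructuring from the CPU-usage point above boils down to something like this - names matching the earlier sketches, and again illustrative rather than the demo's actual code:

```js
// Everything that doesn't change per frame (geometry buffers, texture upload,
// uniform lookups) happens once at startup; the animation loop only bumps a
// counter and redraws.
const u_frame = gl.getUniformLocation(program, 'u_frame'); // looked up once
let frame = 0;

function render() {
  frame++;
  gl.uniform1f(u_frame, frame);              // the only per-frame state change
  gl.drawArrays(gl.TRIANGLES, 0, 18);        // 3 layers x 6 vertices
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
```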
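The coordinate-system cheat-sheet I keep needing, written out as a couple of hypothetical helper functions (my own summary, not code from the demo):

```js
// Coordinate systems in play:
//   clip space (vertex shader):  x, y in [-1, 1], Y points up
//   texture space (sampling):    u, v in [0, 1],  V points up
//   2D canvas pixels:            x, y in [0, w/h], Y points DOWN
function clipToTexture(x, y) {
  return [(x + 1) / 2, (y + 1) / 2];         // [-1, 1] -> [0, 1]
}
function pixelToClip(px, py, width, height) {
  // 2D-canvas pixel to WebGL clip space, flipping the inverted Y axis.
  return [(px / width) * 2 - 1, 1 - (py / height) * 2];
}
```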
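And the mod()-based workaround mentioned above - a sketch of the standard trick for faking bitwise operations on floats in WebGL's GLSL (ES 1.00), not something this demo itself does:

```js
const bitTricksGlsl = `
  // Emulates (int(x) >> bit) & 1 for small non-negative integers held in floats.
  float getBit(float x, float bit) {
    return mod(floor(x / exp2(bit)), 2.0);
  }
  // Emulates x ^ y one bit at a time (here just the low 8 bits).
  float xor8(float x, float y) {
    float result = 0.0;
    for (int bit = 0; bit < 8; bit++) {
      float b = exp2(float(bit));
      result += b * mod(getBit(x, float(bit)) + getBit(y, float(bit)), 2.0);
    }
    return result;
  }
`;
```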