John Smith's Blog

Ramblings (mostly) about technical stuff

Converting old App Engine code to Python 2.7/Django 1.2/webapp2

Posted by John Smith on

I'm borrowing the code for this blog for another project I'm working on, and it seemed to make sense to take the opportunity to bring it up to speed with the latest-and-greatest in the world of App Engine, which is:

  • Python 2.7 (the main benefit for me; I don't like having to dick around with 2.6 or 2.5 installations)
  • multithreading (not really needed for the negligible traffic I get, but worth having, especially given that the new billing scheme seems to assume you'll have this enabled if you don't want to be ripped off)
  • webapp2 (which seems to be the recommended serving mechanism if you're not going to a "proper" Django infrastructure)
  • Django 1.2 templating (I'd used this on a work project a few months ago, but the blog was still using 0.96)

Of course, having so many changed elements in the mix in a single hit is a recipe for disaster; with things breaking left, right and centre, working out the cause of each failure was a bit needle-in-a-haystackish. It didn't help that the Py2.7 docs on the official site are still very sketchy, so I ended up digging through the library code quite a bit to suss out what was happening.

As far as I can tell, I've now got everything fixed and working - although this site is still running the old code, as the Python 2.7 runtime has a dependency on the High Replication (HR) datastore, and this app is still using Master/Slave.

I ended up writing a mini-app in order to develop and test the fixes without all the cruft from my blog code, which I'll see about uploading to my GitHub account at some point. In the meantime, here are my notes about the stuff I changed. I'm sure there are things which are sub-optimal or incomplete, but hopefully they might save someone else some time...

app.yaml

  • Change runtime from python to python27
  • Add threadsafe: true
  • Add a libraries section:

        libraries:
        - name: django
          version: "1.2"
  • Change handler script references from foo.py to foo.app
  • Only scripts in the top-level directory work as handlers, so if you have any in subdirectories, they'll need to be moved, and the script reference changed accordingly:

        - url: /whatever
          # This doesn't work ...
          # script: lib/some_library/handler.app
          # ... this does work
          script: handler.app
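Putting all of those changes together, here's a minimal sketch of how the resulting app.yaml might look - the application name and handler module are illustrative, not taken from my actual app:

    application: myblog
    version: 1
    runtime: python27
    api_version: 1
    threadsafe: true

    libraries:
    - name: django
      version: "1.2"

    handlers:
    - url: /.*
      script: handler.app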

Templates

  • In Django 1.2 escaping is enabled by default. If you need HTML to be passed through unmolested, use something like:

        {% autoescape off %}
        {{ myHTMLString }}
        {% endautoescape %}
  • If you're using {% extends %}, paths are referenced relative to the template base directory, not to that file. Here's a table showing examples of the old and new values:

        File                   Old {% extends %} value   New {% extends %} value
        base.html              N/A                       N/A
        admin/adminbase.html   "../base.html"            "base.html"
        admin/index.html       "adminbase.html"          "admin/adminbase.html"
  • If you have custom tags or filters, you need to {% load %} them in the template, rather than using webapp.template.register_template_library() in your main Python code.
    e.g.
    Old code (in your Python file):

        webapp.template.register_template_library('django_custom_tags')

    New code (in your template):

        {% load django_custom_tags %}

    (There's more that has to be done in this area; see below.)

Custom tag/filter code

  • Previously you could just have these in a standalone .py file which would be pulled in via webapp.template.register_template_library(). Instead, you'll now have to create a Django app to hold them:
    1. In a Django settings.py file, add the new app to INSTALLED_APPS, e.g.:

           INSTALLED_APPS = ('customtags',)

       Note the trailing comma - a one-element tuple needs it, as ('customtags') is just a parenthesized string.
    2. Create an app directory structure along the following lines:

           customtags/
           customtags/__init__.py
           customtags/templatetags/
           customtags/templatetags/__init__.py
           customtags/templatetags/django_custom_tags.py

       Both the __init__.py files can be zero-length. Replace customtags and django_custom_tags with whatever you want - the former is what should be referenced in INSTALLED_APPS, the latter is what you {% load %} in your templates.
    3. In your file(s) in the templatetags/ directory, you need to change the way the new tags/filters are registered at the top of the file.

       Old code:

           from google.appengine.ext.webapp import template
           register = template.create_template_register()

       New code:

           from django.template import Library
           register = Library()

       The register.tag() and register.filter() calls will then work the same as previously.
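To make that concrete, here's a minimal sketch of what a file like customtags/templatetags/django_custom_tags.py might contain - the filter itself is just an illustrative example, not one of my actual tags:

    # customtags/templatetags/django_custom_tags.py
    from django.template import Library

    register = Library()

    @register.filter
    def truncate_chars(value, length):
        # Illustrative filter: trim a string to at most 'length' characters
        length = int(length)
        if len(value) <= length:
            return value
        return value[:length] + "..."

which you'd then use in a template as {{ myString|truncate_chars:20 }}, after a {% load django_custom_tags %}.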

Handlers

  • Change from google.appengine.ext import webapp to import webapp2 and change your RequestHandler classes and WSGIApplication accordingly
  • If your WSGIApplication ran from within a main() function, move it out (there's a sketch of a complete minimal module after this list).
    e.g.
    Old code:

        def main():
            application = webapp.WSGIApplication(...)
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()

    New code:

        app = webapp2.WSGIApplication(...)

    Note in the new code:
    1. The lack of a run() call
    2. That the WSGIApplication must be called app - if it isn't, you'll get an error like:

        ERROR    2012-01-29 22:17:37,607 wsgi.py:170]
        Traceback (most recent call last):
          File "/proj/3rdparty/appengine/google_appengine_161/google/appengine/runtime/wsgi.py", line 168, in Handle
            handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
          File "/proj/3rdparty/appengine/google_appengine_161/google/appengine/runtime/wsgi.py", line 220, in _LoadHandler
            raise ImportError('%s has no attribute %s' % (handler, name))
        ImportError: has no attribute app
  • Any 'global' changes you might make at the main level won't be applied across every invocation of the RequestHandlers - I'm thinking of things like setting a different logging level, or setting the DJANGO_SETTINGS_MODULE. These have to be done within the methods of your handlers instead. As this is obviously painful to do for every handler, you might consider using custom handler classes to shoulder the burden - see below.
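For reference, a complete minimal handler module under the new scheme might look something like this - a sketch, with illustrative route and handler names:

    import webapp2

    class MainHandler(webapp2.RequestHandler):
        def get(self):
            # webapp2's response object works much like webapp's
            self.response.out.write('Hello from webapp2')

    # Must be named 'app' to match the 'script: handler.app' reference in app.yaml
    app = webapp2.WSGIApplication([('/', MainHandler)])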

Rendering Django templates

The imports and calls to render a template from a file need changing.
Old code:

    from google.appengine.ext.webapp import template
    ...
    rendered_content = template.render(template_path, {...})

New code:

    from django.template.loaders.filesystem import Loader
    from django.template.loader import render_to_string
    ...
    rendered_content = render_to_string(template_file, {...})

As render_to_string() doesn't explicitly get told where your templates live, you need to do this in settings.py:

    import os
    PROJECT_ROOT = os.path.dirname(__file__)
    TEMPLATE_DIRS = (os.path.join(PROJECT_ROOT, "templates"),)

Custom request handlers

As previously mentioned, where before you could set up global environment state once, this now has to be done in each handler. As that's painful, a nicer solution is to create a special class that sets it all up, and then have your handlers inherit from that rather than from webapp2.RequestHandler.

Here's a handler that's more talkative in the logs, and which also sets up the DJANGO_SETTINGS_MODULE environment variable:

    import logging
    import os
    import time

    import webapp2

    class LoggingHandler(webapp2.RequestHandler):
        def __init__(self, request, response):
            self.initialize(request, response)
            logging.getLogger().setLevel(logging.DEBUG)
            self.init_time = time.time()
            os.environ["DJANGO_SETTINGS_MODULE"] = "settings"

        def __del__(self):
            logging.debug("Handler for %s took %.2f seconds" %
                          (self.request.url, time.time() - self.init_time))

A couple of things to note:

  1. the webapp2.RequestHandler constructor takes request and response parameters, whereas webapp.RequestHandler just took a single self parameter
  2. Use the .initialize() method to set up the object before doing your custom stuff, rather than __init__(self)
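Individual handlers can then just subclass LoggingHandler and forget about the environment setup. Here's a sketch - the handler name and template are illustrative, not from my actual code:

    from django.template.loader import render_to_string

    class AboutHandler(LoggingHandler):
        def get(self):
            # DJANGO_SETTINGS_MODULE was already set in LoggingHandler.__init__,
            # so render_to_string() can locate templates via TEMPLATE_DIRS
            self.response.out.write(render_to_string("about.html", {}))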

Twitter feed on this blog broken

Posted by John Smith on

A lame update in just about every respect...

I've belatedly noticed that the tweet panel on the right hasn't been updated for a week or so. When I investigated using App Engine's local development server though, the latest tweets were pulled in fine.

The logs on the live server indicated that Twitter was returning an HTTP 400 'Bad Request' status, leading me to suspect that maybe something within the Google infrastructure was mangling the request in some way. Only by dumping the HTTP headers returned from Twitter did I find that my requests were actually being refused due to a rate-limit being breached - which means the 400 status was a complete red herring; something like 509 'Bandwidth Limit Exceeded' would have described the true cause of the problem far more accurately.
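For anyone debugging something similar, here's a rough sketch of the header-dumping approach that exposed the real problem, assuming the feed is pulled server-side via App Engine's urlfetch (the function and URL handling are illustrative):

    import logging

    from google.appengine.api import urlfetch

    def fetch_tweets(url):
        result = urlfetch.fetch(url)
        logging.debug("Status: %d", result.status_code)
        # The rate-limiting details turned out to be in the response
        # headers, not in the (misleading) status code
        for name, value in result.headers.items():
            logging.debug("%s: %s", name, value)
        return result.content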

Digging up the Twitter API docs, it seems that unauthenticated GET requests are rate-limited by IP address. Given that thousands of App Engine apps will effectively all share the same IP, it's hardly surprising that the limit of 150 requests (per hour?) has already been reached every time I make a request. I've been caught out by this before with other APIs, but this is the first time I've been aware of any such problems with Twitter. I guess I'll have to upgrade my code to use OAuth or something - assuming I can be bothered.

In this supposed era of cloud computing, rate-limiting by IP seems a bit of a crummy thing to do. It'll probably be reasonable if/when IPv6 becomes the default addressing method, but when just about everyone is still using IPv4, it's a colossal pain in the backside.

The saga of getting Fedora 14 running on a Dell Mini 10 netbook - part 6 of 6

Posted by John Smith on

It's over two months since I first started this series of posts, and to be honest, anything that hasn't already been written up, I've largely forgotten about. Not to mention, Fedora 15 has since come out, so anything specific to version 14 is already out-of-date. However, after writing up so much already, it seems only right to try to come to some sort of conclusion.

First off - although it didn't come into play until later in the process - Fedora 14 initially shipped with a broken version of the pyxf86config package. This caused a failure to set the display refresh rate correctly when I got the proper drivers installed, so before doing anything, yum update that to a fixed version.

This machine uses a "Poulsboro" (aka Poulsborough aka Poulsbo) chipset, which seems notoriously ill-supported on Linux. However, there are (non-free) packages available for this chipset; I ended up installing the following:

  • kmod-psb
  • kmod-psb-PAE
  • libdrm-poulsbo
  • livna-config-display
  • psb-firmware
  • xorg-x11-drv-psb
  • xpsb-glx
The first two are "metapackages", with the actual packages being stuff with horrendous names like "kmod-psb-2.6.35.13-91.fc14.i686-4.41.1-14.fc14.11.i686".

When all this is in place, Fedora switches to the native resolution of 1366x768, and the machine is pretty much as usable as you might reasonably hope. Because of the way I installed the wifi drivers, every time I've upgraded to a new kernel, the wifi has broken. So far I've just kept with the older kernels - by selecting them via the relevant grub boot option - but I assume it's just a case of recompiling the modules under the new kernel, or hopefully replacing them with a package which will automatically update itself in line with the kernel.

Anyway, hope all this is/was of some use to someone out there...

The saga of getting Fedora 14 running on a Dell Mini 10 netbook - part 5 of several

Posted by John Smith on

To recap from the previous post in this series, I now had a machine with Fedora 14 installed and running, but with no networking, and a generic graphics driver that failed to detect the netbook's native resolution.

I assumed that getting wired Ethernet working would be easier than wifi, but this turned out not to be the case, or so it seemed at the time. The manual building of a driver as outlined below never worked, and at the time it seemed there was nothing shipping with Fedora that supported the Ethernet networking. However, as I write this post (nearly a month after the original install), I find that wired networking is indeed running happily. Not sure what happened - maybe the Poulsbo drivers (to be documented) also cover this aspect of the device?

The upshot is that much of the stuff below may well be irrelevant; but without knowing precisely why things weren't working before but are now, I'm inclined to keep it all here.

The wired Ethernet chipset reports as a "Realtek RTL8102E/RTL8103E PCI-Express" card, and I downloaded the RTL8101-1.020.00 driver from realtek.com. I couldn't locate a driver that was explicitly marked as being for 8102E or 8103E.

(By the way, am I alone in finding mobo/gfx card manufacturer sites horrendous from a user-experience perspective? They all seem to send you round in circles and make it difficult to know whether you've found what you're looking for...)

Building from source obviously requires a number of tools and libraries; according to my bash history, I ended up installing the following from the RPMs on the DVD:

  • kernel-devel-2.6.35.6-45.fc13.i686.rpm
  • kernel-headers-2.6.35.6-45.fc14.i686.rpm
  • gcc-4.5.1-4.fc14.i686.rpm
  • binutils-2.20.51.0.7-5.fc14.i686.rpm
  • cloog-ppl-0.15.7-2.i686.rpm
  • cpp-4.5.1-4.fc14.i686.rpm
  • libmpc-0.8.1-fc13.i686.rpm
  • glibc-devel-2.12.90-17.i686.rpm

Despite all this, Ethernet networking resolutely failed to work at the time, so I elected to try the wifi instead.

Having never really bothered with Linux wifi since failing to get it working on an HP laptop around 2004/2005, I'm pretty ignorant of how it works. As far as I can tell, there are some open libraries that interface with the closed drivers that a vendor supplies?

In Windows XP, the wifi card is reported as a "Dell Wireless 1397 WLAN mini-card", which appears to just be a rebranding. Fedora's Network Manager states that it is a "Broadcom Corporation BCM4312 802.11b/g LP-PHY", and this Linux Forums thread indicates that it can be made to work with Fedora. With no operational networking whatsoever, it wasn't possible simply to add RPM Fusion to the list of repos, and instead I'd have to download the RPMs on another machine and transfer them via a USB drive.

Unfortunately, my initial Googling came up with inconsistent results. This FC14 RPM Fusion page listed a Broadcom driver, but with 'fc13' in the package name.

Some kmod-wl RPMs also wouldn't install without a wl-kmod-common package, which wasn't listed on that page.

Eventually I found that there's a "meta-package" broadcom-wl here that provides 3 sub-packages:

  • broadcom-wl
  • config(broadcom-wl)
  • wl-kmod-common

Anyway, at this point I was pretty fed up and gave up on this angle of attack. (Sound familiar?) I found some binary drivers on the Broadcom site, and using the instructions in their README I was able to build the driver and get it working - albeit at the second attempt.

I hacked in the following lines to /etc/rc.local:

    modprobe lib80211
    insmod /root/wifi/broadcom/wl.ko

This probably isn't the most elegant solution, but it got wifi networking operating seamlessly from boot, which would then allow me to tackle the graphics driver...

Subversion deletion annoyance

Posted by John Smith on

A brief detour from the Fedora/Dell posts...

Maybe there's something about me that's cursed, but it seems that the deletion functionality in Subversion is pretty broken. Pretty much every time I delete a directory, the repo gets out-of-sync, and I have to spend 10 minutes working out how to resolve something that should surely have worked properly first time.

Here's an example. I had a directory called dell in my personal notes archive, containing a single file that made more sense being in a different directory in the repo. This was moved with no issues:

    [john@sofia personal]$ svn mv dell/vodafone.txt dellmini10/
    A         dellmini10/vodafone.txt
    D         dell/vodafone.txt
    [john@sofia personal]$ svn commit -m "moved some dell files around"
    Deleting       dell/vodafone.txt
    Adding         dellmini10/vodafone.txt
    Transmitting file data ...
    Committed revision 38.

Note that I wasn't even attempting to do anything particularly adventurous, such as deleting the directory in the same commit.

I then tried to get rid of this now-empty directory:

    [john@sofia personal]$ ls dell
    [john@sofia personal]$ svn rmdir dell
    Unknown command: 'rmdir'
    Type 'svn help' for usage.
    [john@sofia personal]$ svn del dell
    D         dell
    [john@sofia personal]$ svn commit -m "deleted empty dell directory from repo"
    Deleting       dell
    svn: Commit failed (details follow):
    svn: Directory '/dell' is out of date

WTF!?! (I hasten to add that no-one else is using this repo.)

There then followed lots of random flailing around to try to coax it into working:

    [john@sofia personal]$ svn up
    C    dell
    At revision 38.
    Summary of conflicts:
      Tree conflicts: 1
    [john@sofia personal]$ svn up dell
    At revision 38.
    [john@sofia personal]$ svn commit -m "deleted empty dell directory from repo"
    svn: Commit failed (details follow):
    svn: Aborting commit: '/data/personal/dell' remains in conflict
    [john@sofia personal]$ svn stat
    ...
    D     C dell
          >   local delete, incoming edit upon update
    ...
    [john@sofia personal]$ svn resolve --accept mine dell
    svn: 'mine' is not a valid --accept value
    [john@sofia personal]$ svn help resolve
    ...
    [john@sofia personal]$ svn resolve --accept mine-full dell
    svn: warning: Tree conflicts can only be resolved to 'working' state; 'dell' not resolved
    [john@sofia personal]$ svn resolve --accept working dell
    Resolved conflicted state of 'dell'
    [john@sofia personal]$ svn status
    ...
    D       dell
    ...
    [john@sofia personal]$ svn up
    At revision 38.
    [john@sofia personal]$ svn commit -m "deleted empty dell dir"
    Deleting       dell
    Committed revision 39.

I guess the svn resolve --accept working {dir} did the trick, but I fail to understand why it was ever necessary. Guess I should really move to git or Mercurial...

The saga of getting Fedora 14 running on a Dell Mini 10 netbook - part 4 of several

Posted by John Smith on

Netbooks obviously have a bit of a flaw when it comes to installing Linux distros - namely their lack of an optical drive. Many distros now offer easy means of making a USB key, but Fedora's process is a bit of a faff-around.

In particular, I was rather annoyed to find that it wouldn't actually fit on the 4GB USB drive I had earmarked for it - as well as the ~3.5GB ISO image, there's a separate boot image that you have to install, which pushes it over the edge. Fortunately, I did have some 16GB drives kicking around, and after clearing out some old files, I had enough space for Fedora 14.

Unfortunately, even though I was able to boot from the Fedora'd USB drive on a regular PC, I was unable to convince the netbook to boot from it, despite fiddling around with BIOS settings and the like. (Not that that was necessary for Ubuntu.) I did contemplate trying a network install, but ultimately decided to buy my way out of the problem, and acquired a cheap(-ish) external USB DVD drive. As I've got 3 netbooks, hopefully it might get some long-term use, but I can't actually recall the last time I used optical media on a PC other than for installing operating systems or burning backups...

I then had a minor mis-step - which was nothing to do with Fedora per se - in that I tried to boot from an x86_64 DVD, but the Atom CPU in the netbook seemed to only want to work with 32-bit binaries. Given that I have a Mini-ITX Atom motherboard that quite happily runs a 64-bit OS, I'd naively assumed that all Atom chips were 64-bit, but evidently not.

Luckily I'd already got a 32-bit DVD to hand from some time back, so I was able to get Fedora installed with no further problems. Unlike Ubuntu, it defaulted to having a separate /boot filesystem, so I can be sure that I can easily get rid of Fedora should I ever choose to. The boot installer recognized the Windows XP drive fine, but it seems that - for now at least - the Dell backup stuff is still out of reach :-(

On booting Fedora from the hard-drive, I wasn't in any way surprised to find that it failed to use the proper graphics drivers - defaulting to a non-native resolution - and was also lacking in any sort of working networking drivers. Evidently the USB drives were going to come in useful after all...

The saga of getting Fedora 14 running on a Dell Mini 10 netbook - part 3 of several

Posted by John Smith on

I pretty much stick to Fedora/Red Hat/CentOS for my primary Linux machines, and just use VMs when experimenting with other distros. This isn't due to any intrinsic brilliance in the Red Hat way of doing things; it's more a case that I'm familiar with how those distros are organized, and can quickly get them configured in a way that I want.

However, whilst Fedora's puritanical approach of not including binary/non-free drivers is to be admired, it's not necessarily ideal when it comes to getting a working OS on a machine which might have some 'unusual' chipsets - and many laptops and netbooks fall into this category. As such, I thought it worth installing Ubuntu, due to its more 'pragmatic' approach - I'd previously installed Ubuntu on an Asus EeePC and had encountered no problems.

Trial-running Ubuntu from the USB drive was basically fine, so I took the plunge and installed it on the netbook's hard drive, repartitioning the existing Windows XP installation. N.B. at this point in time, I didn't realize that Dell had set things up with separate boot and backup partitions, as documented in the previous post.

The hard drive install was fine, other than the expected caveats of not running in the native resolution, and not having fully functional networking - if memory serves, wired Ethernet worked, but not wifi. However Ubuntu popped up an alert asking if I wanted to download the binary drivers for these, and soon all was running happily.

That release of Ubuntu though was around 5 months old, and so it seemed prudent to upgrade the installed packages to their latest versions - big mistake. On reboot, the login prompt (gdm I guess) wouldn't respond to keyboard or mouse. Attaching an external USB keyboard did get it to respond a bit more, but I was unable to get it to allow me to enter a username or password. Trying the Ctrl-Alt-function keys to get a non-X11 prompt wasn't any use either.

Ubuntu had created some 'safe-mode' style booting options in the GRUB menu, but these were no good either - booting would halt with some error message I've forgotten, long before getting anywhere near a login prompt. After faffing around for a while, and failing to find anything useful on Google, I decided to reinstall. However, the Ubuntu installer seemed to be a bit confused, and only offered me the choice of repartitioning the already-repartitioned WinXP disc, not of overwriting the borked Ubuntu install.

At this point I decided to give up on Ubuntu, and try Fedora instead. Before doing that though, I had the forethought to check exactly how the disk was partitioned. (If I'd done this before starting on this whole process, it would probably have saved a lot of time and stress...)

The hard drive was split up as follows (this was the first time I'd realized there had been more than just a single Windows filesystem originally):

  1. The tiny Dell boot partition
  2. The main Windows XP partition
  3. The Ubuntu partition I wanted to get rid of
  4. The Dell restore partition
What I'd like to have done prior to attempting to install Fedora was to restore the machine back to its factory state, but this wasn't possible - the Ubuntu installer had rewritten the MBR, circumventing the Dell boot partition that would have given me this option. In retrospect, I should have backed up the MBR with dd before installing Ubuntu, which would now have allowed me to restore it via a live distro running from a USB drive.

Worse, Ubuntu's installer not creating a separate /boot filesystem meant that the whole boot process - even for Windows XP - was dependent on this Ubuntu filesystem I wanted to get rid of. (Fedora does create a separate /boot by default, so it's easy to get rid of the main Fedora installation with no impact on the prior OSes.) Possibly I could have restored an MBR from a different machine, but given the non-vanilla Dell configuration, I had doubts that this would be problem-free.

In the end, I decided that the only real option was to go ahead with installing Fedora, and hope that it wouldn't make any more of a mess than was there already...

The saga of getting Fedora 14 running on a Dell Mini 10 netbook - part 2 of several

Posted by John Smith on

Continuing on from the introductory post in this series...

This post is really only of relevance if you intend to retain the original OS and dual-boot - if you're happy to obliterate all traces of Windows, then you can happily skip this one.

I'm not really up on current best-practice for PC vendors, but traditionally I'd expect a new machine to ship with the hard drive fully formatted with a C: drive, whether FAT32 or NTFS. However, this Dell netbook appears to have the disk split into three partitions, as follows:

  1. A pretty tiny 'Dell Utility' filesystem
  2. The actual Windows NTFS, taking up the bulk of the disk
  3. What purports to be a CP/M (!) filesystem, containing (presumably) an image of Windows XP that can be reinstalled. As I haven't used it, I've no idea if this is a vanilla install, or if it has the specific drivers for this machine pre-baked in
Here's how fdisk -l reports these filesystems; note that this was run after I'd created a Linux partition, hence the end of the NTFS filesystem not matching the start of the CP/M one.

    /dev/sda1               1           5       40131   de  Dell Utility
    /dev/sda2   *           6       11643    93478796+   7  HPFS/NTFS
    /dev/sda3           18184       19457    10233405   db  CP/M / CTOS / ...

The machine isn't currently powered up, so I can't check the exact figures, but I'd made a backup of these files, and the first partition has just 9.7MB of files while the backup partition has 4.1GB - I'd assume the partitions themselves are only slightly bigger. (By the way, both Ubuntu 10 and Fedora 14 seemed quite happy to mount these filesystems, despite their slightly obscure formatting.)

My guess is that the 'Dell Utility' filesystem is like a Linux /boot filesystem with grub or lilo on it. It would seem to look for some key being pressed at boot time - F8, judging by various pages on the net - and if so, runs some restore process using the images on the CP/M filesystem. (Note that F8 itself doesn't seem to be picked up by the BIOS, which only mentions and responds to F2 and F12.) Unfortunately, all this is a guess, because neither the Ubuntu 10 nor Fedora 14 installers seem to recognize the true nature of these filesystems! :-(

What happens is that the installers detect the existing XP install, and when they configure the bootloader, give the choice between booting Linux or Windows - and by Windows, I mean the OS on the NTFS filesystem, not the Dell Utility stub. As such, while XP boots up happily, it seems that I've now lost the ability to restore the system to the factory default using the Dell tools. Whilst I don't have any plans to do this, it would be nice to have the option.

In retrospect, what I should have done was to first back up the MBR using a live distro, which in theory would allow me to undo any "damage" inflicted by a Linux installer - or a regular reinstallation of Windows for that matter. As I've never actually restored an MBR, I don't know if the various pages on the net are accurate, but the commands seem to be:

    # create a backup file called sda-mbr.bin in /tmp
    dd if=/dev/sdX of=/tmp/sda-mbr.bin bs=512 count=1

and

    # restore the backup file
    dd if=sda-mbr.bin of=/dev/sdX bs=1 count=64 skip=446 seek=446

(Note that the restore command as given only copies back the 64-byte partition table - bytes 446-509 of the 512-byte MBR - not the boot code itself.)

At some point I'll do some experiments with the grub configuration to see if it can be coerced into booting either of these extra filesystems - but for the most part, I've given up on any expectation that they ever be usable again.

As a postscript, this information may not be applicable to newer Dell netbooks. Some page I read - this one perhaps? - mentioned that when the restore stuff is running during the boot process, it briefly flashes up some white text on a blue background. I do recall seeing something like this on this netbook, but it doesn't do it now, which makes sense given the mangling of the original boot sequence. However, my 11" Celeron/Win7 Dell doesn't flash this up at all, so whether they're doing something different now, I don't know. I'll probably do some digging with the aid of a live distro on it, but right now I'm still more interested in getting this 10" machine fully up to speed.

In the next exciting installment, I'll bitch about Ubuntu...

The saga of getting Fedora 14 running on a Dell Mini 10 netbook - part 1 of several

Posted by John Smith on

Amongst my ever growing, never diminishing, pile of hardware are a couple of 1-2 year old Dell netbooks - an Atom based 10" with Win XP Home, and a Celeron 11" with Windows 7. I never had any special plans for either of these machines, they were mainly bought because they were amazingly cheap - the 10" was £209 in Autumn 2009, the 11" £229 in Summer 2010, both of which were around £50-100 less than the going rate for comparable hardware at the time.

Since getting the 11" machine, the 10" has had pretty minimal use. Even with a supposedly "low-end" OS like XP Home, the crummy CPU coupled with 1GB of RAM means it's a pretty chuggy experience - probably not helped by the mountains of pre-installed crap that Dell shoved on it. (The Win 7 machine was much, much cleaner in this regard.)

Now, I do have an Atom based desktop machine running Fedora 11, and that's perfectly usable for the most part - albeit with a slightly better-spec Atom processor, but the same 1GB of RAM. As such, it seemed to make sense to look at getting Linux onto the unused netbook, which might give it a bit more of a regular workout. Both of the netbooks have had a few distros installed within VMware, which unsurprisingly don't run that great performance-wise, but I'm hopeful that Linux running natively should be a decent experience.

The last time I installed Linux natively on a laptop was around 2004/5, and whilst I was happy with how it ran, I never got the wifi working. In all honesty, I don't think I actually spent much - if any - effort trying to fix it; I was quite content to use a wired connection. However, the experience made me wary of expecting too much by way of Linux compatibility on laptop hardware; which is why I've generally stuck to running distros within VMs, and letting the underlying pre-installed OS worry about the hardware.

Also, I'm loath to ever get rid of the originally installed OS on any machines I buy, so I run them as dual-boot. I don't think I'd ever had much of an issue with this before now, but that was before I encountered the mysteries of Dell's restore functionality...

All this blather is leading up to a series of posts I'm planning, documenting the trials and tribulations of doing what in theory should be a fairly straightforward task - getting a modern, well-established and widely used Linux distro (Fedora 14) running on what would seem to be fairly mundane, well-understood and supported mass-market hardware (a Dell netbook). Unfortunately, this isn't the case :-( Most or all of the areas I'll cover in this series are documented on the net, but it's all rather disjointed, so hopefully I can collate it all here for any other lost souls who set out on this path.

To be continued...

Incomplete implementation of getElementsByClassName for SVG in IE9

Posted by John Smith on

Now that IE9 has been officially released, I thought it would be wise to check that this blog looked OK in it - it certainly didn't in older versions, but given the audience I'm writing for, I wasn't especially bothered about fixing things.

For the most part it seems acceptable - it's mainly the cosmetic CSS stuff like transitions and gradients that aren't working properly. However, skimming through my older posts, I noticed a glitch which is more subtle than you might expect...

In this post I have an SVG file with some perfunctory interactivity implemented via JavaScript. Some of the more mundane functionality works - hovering over a segment in the bar chart changes the segment colour to indicate that it is clickable. However, clicking has no effect, whereas this works fine in other modern browsers.

Investigation shows that IE9's implementation of getElementsByClassName isn't all there for SVG. Within an SVG file, you can do document.getElementsByClassName("foo") to get all the matching elements in the entire SVG document, and that works fine, but document.getElementById("foo").getElementsByClassName("bar") to get the matching child elements within an element doesn't work.

There's a cut-down test file here. It works fine in Chrome, Safari, Firefox (4+) and Opera, but in IE9, its developer console reports this:

    [Screengrab: IE9 reporting a JavaScript error on a test SVG file that uses getElementsByClassName]

A similar test against an element in an HTML5 document does work fine in IE9, so it would seem to be an issue with their implementation of the SVG DOM. As this only affects SVG, and an uncommon usage of the method - it looks like most people will use it against the document, not an element - this problem is unlikely to affect a large number of people, but it's certainly a pain in the backside for those of us who like to play with SVG :-(


About this blog

This blog (mostly) covers technology and software development.

Note: I've recently ported the content from my old blog hosted on Google App Engine using some custom code I wrote, to a static site built using Pelican. I've put in place various URL manipulation rules in the webserver config to try to support the old URLs, but it's likely that I've missed some (probably meta ones related to pagination or tagging), so apologies for any 404 errors that you get served.

RSS feed for this blog (RSS icon courtesy of www.feedicons.com)

About the author

I'm a software developer who's worked with a variety of platforms and technologies over the past couple of decades, but for the past 7 or so years I've focussed on web development. Whilst I've always nominally been a "full-stack" developer, I feel more attachment to the back-end side of things.

I'm a web developer for a London-based equities exchange. I've worked at organizations such as News Corporation, Google and BATS Global Markets. Projects I've been involved in have been covered in outlets such as The Guardian, The Telegraph, the Financial Times, The Register and TechCrunch.

Twitter | LinkedIn | GitHub | My CV | Mail
