John Smith's Blog

Ramblings (mostly) about technical stuff

Reinvented the wheel and built my own IP address checker

Posted by John Smith on

I've recently started using a VPN for the first time in years, and was using WhatIsMyIP to sanity check that I was indeed seeing the net via a different IP than that provided by my ISP. However, there were a few things I wasn't too happy about:

  • I was concerned that my repeated queries to that site might be detected as abusive.
  • Alternatively, I might be seeing cached results from an earlier query on a different network setup.
  • As someone happiest using the Unix command line, I found neither switching to a browser window nor using curl and parsing the HTML output ideal.

So, I spent a few hours knocking up my own variation of this type of service, doubtless the gazillionth implementation clogging up the internet, which you can find here. While it's still pretty basic, there are a couple of features that I haven't noticed in other implementations:

  • A Geo-IP lookup is done to identify the originating country, region, city and latitude/longitude. This data is obtained via a Google API, so it's probably as accurate as these things get - which isn't saying much, at least at the lat/long level. (The main motivation for adding this functionality was to help analyse whether my VPN can be abused to break region restrictions on sites like Hulu ;-)
  • To make things more convenient for non-browser use, multiple output formats are supported (HTML, plain text, CSV, XML and JSON), which can be specified either via an old-school format=whatever CGI argument or, more RESTfully, via the HTTP Accept header.

Here are a couple of examples of usage:

    [john@hamburg ~]$ curl -H "Accept: text/plain" "http://report-ip.appspot.com"
    IP Address: x.x.x.x
    Country: GB
    Region: eng
    City: london
    Lat/Long: 51.513330,-0.088947
    Accept: text/plain
    Content-Type: ; charset="utf-8"
    Host: report-ip.appspot.com
    User-Agent: curl/7.21.3 (x86_64-redhat-linux-gnu) libcurl/7.21.3 NSS/3.13.1.0 zlib/1.2.5 libidn/1.19 libssh2/1.2.7

    [john@hamburg ~]$ curl "http://report-ip.appspot.com/?format=json"
    {
      "ipAddress": "x.x.x.x",
      "country": "GB",
      "region": "eng",
      "city": "london",
      "latLong": "51.513330,-0.088947",
      "headers": {
        "Accept": "*/*",
        "Content-Type": "; charset=\"utf-8\"",
        "Host": "report-ip.appspot.com",
        "User-Agent": "curl/7.21.3 (x86_64-redhat-linux-gnu) libcurl/7.21.3 NSS/3.13.1.0 zlib/1.2.5 libidn/1.19 libssh2/1.2.7"
      }
    }
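For the curious, the server-side format selection doesn't need much code. The sketch below is not the actual report-ip code: the handler name is made up, I'm assuming the webapp2 framework that App Engine's Python runtime bundles, and the X-AppEngine-* request headers are just one way App Engine exposes geo data (which may or may not be the "Google API" mentioned above). Only the plain-text and JSON outputs are shown, for brevity.

    # Hypothetical sketch of the format negotiation, not the actual report-ip code.
    # Assumes webapp2 on App Engine; the X-AppEngine-* headers are one way
    # App Engine exposes per-request geo data.
    import json
    import webapp2

    class ReportHandler(webapp2.RequestHandler):
        def get(self):
            data = {
                "ipAddress": self.request.remote_addr,
                "country": self.request.headers.get("X-AppEngine-Country", ""),
                "region": self.request.headers.get("X-AppEngine-Region", ""),
                "city": self.request.headers.get("X-AppEngine-City", ""),
                "latLong": self.request.headers.get("X-AppEngine-CityLatLong", ""),
            }

            # An explicit ?format= argument wins; otherwise sniff the Accept header.
            fmt = self.request.get("format", "").lower()
            if not fmt:
                accept = self.request.headers.get("Accept", "")
                if "application/json" in accept:
                    fmt = "json"
                elif "text/plain" in accept:
                    fmt = "text"
                else:
                    fmt = "html"

            if fmt == "json":
                self.response.headers["Content-Type"] = "application/json"
                self.response.write(json.dumps(data))
            elif fmt in ("text", "plain"):
                self.response.headers["Content-Type"] = "text/plain"
                self.response.write(
                    "IP Address: %(ipAddress)s\nCountry: %(country)s\n"
                    "Region: %(region)s\nCity: %(city)s\n"
                    "Lat/Long: %(latLong)s\n" % data)
            else:
                self.response.headers["Content-Type"] = "text/html"
                self.response.write("<pre>%s</pre>" % json.dumps(data, indent=2))

    app = webapp2.WSGIApplication([("/", ReportHandler)])

The CSV and XML variants would just be two more branches of the same if/elif chain.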

I've created a project on GitHub, so you can see how minimal the underlying Python code is. The README has some notes about what extra stuff I might add in at some point, in the event I can be bothered.

As the live app is just running off an unbilled App Engine instance, it won't take much traffic before hitting the free quota limits. As such, in the unlikely event that someone out there wants to make use of this, you might be better off grabbing the code from the repo and deploying it to your own App Engine instance.

Enhanced version of Python's SimpleHTTPServer that supports HTTP Range

Posted by John Smith on

I've just uploaded a small personal project to GitHub here. It's basically a very crude webserver that lets me share audio files on my Linux boxes with my iOS devices, using Mobile Safari.

The main reason for noting this is that the code may be of more general interest: it includes an enhanced version of the Python stdlib's SimpleHTTPServer module that adds basic support for the Range header in HTTP requests, which Mobile Safari needs in order to play some MP3 files.

During early development, I found that some MP3 files would refuse to play in Mobile Safari when served by SimpleHTTPServer. The same files would play fine if served by Apache. Because debugging mobile web browsers is a PITA (caveat: I haven't kept up with the latest-and-greatest in this area), I ended up resorting to Wireshark to see what was going on.

Wireshark indicated that Mobile Safari would request chunks of the MP3 file (initially just the first couple of bytes), but SimpleHTTPServer would always serve the entire file, because it never checked for the existence of the Range header. On certain files, this wouldn't bother Mobile Safari, but on others it would cause the audio player widget to show an unhelpful generic error.

Once I understood what the problem was, I found that I'm not the first person to get caught out by this, and that Apple themselves state that servers need to support Range to keep Mobile Safari happy.
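For anyone who just wants the gist of the technique, the sketch below shows the basic idea: honour a simple "bytes=start-end" range and reply with a 206 Partial Content response. It is not the actual HTTPRangeServer code - the class name is invented and it deliberately handles only the simplest Range form - but it illustrates what the stock handler is missing.

    # Illustrative sketch only, not the real HTTPRangeServer code.
    # Handles a single "bytes=start-end" range; anything else falls back
    # to SimpleHTTPServer's usual serve-the-whole-file behaviour.
    import os
    import re
    import SimpleHTTPServer

    class RangeSketchHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):

        RANGE_RE = re.compile(r"bytes=(\d+)-(\d*)$")

        def do_GET(self):
            match = self.RANGE_RE.match(self.headers.getheader("Range", ""))
            if not match:
                # No Range header (or a form this sketch doesn't parse):
                # behave exactly like the stock handler.
                return SimpleHTTPServer.SimpleHTTPRequestHandler.do_GET(self)

            path = self.translate_path(self.path)
            if not os.path.isfile(path):
                return self.send_error(404, "File not found")

            file_size = os.path.getsize(path)
            start = int(match.group(1))
            end = int(match.group(2)) if match.group(2) else file_size - 1
            end = min(end, file_size - 1)
            if start > end:
                return self.send_error(416, "Requested range not satisfiable")

            # Send just the requested slice, with the headers that tell the
            # client which part of the file it is getting.
            self.send_response(206)
            self.send_header("Content-Type", self.guess_type(path))
            self.send_header("Accept-Ranges", "bytes")
            self.send_header("Content-Range",
                             "bytes %d-%d/%d" % (start, end, file_size))
            self.send_header("Content-Length", str(end - start + 1))
            self.end_headers()

            with open(path, "rb") as f:
                f.seek(start)
                self.wfile.write(f.read(end - start + 1))

Note that the stock handler actually splits its work between do_GET() and send_head(); the sketch flattens that into one method for brevity, but the 206 status and Content-Range header are the parts Mobile Safari cares about.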

To solve the problem, I wrote a new class, HTTPRangeRequestHandler, which is a drop-in replacement for SimpleHTTPServer's SimpleHTTPRequestHandler. In my app code proper, I then (try to) pull in my enhanced handler as follows:

    try:
        import HTTPRangeServer
        inherited_server = HTTPRangeServer.HTTPRangeRequestHandler
    except ImportError:
        logging.warning("Unable to import HTTPRangeServer, using stdlib's " +
                        "SimpleHTTPServer")
        import SimpleHTTPServer
        inherited_server = SimpleHTTPServer.SimpleHTTPRequestHandler

    ...

    class MySpecificHandler(inherited_server):
        ...

    def main(port=12345):
        Handler = EnhancedRequestHandler
        httpd = SocketServer.TCPServer(("", port), Handler)

Arguably it might be better for the code to die if HTTPRangeServer cannot be imported, but as the stdlib SimpleHTTPServer is good enough for many browser clients, it doesn't seem too unreasonable to use it as a fallback.

This code is currently OK for most uses, but it doesn't support all the variations of the Range header described in the aforementioned W3C spec page. It does, however, support all the request variants I've seen in my - admittedly very cursory - browser testing, and any requests that it can't parse will instead get the full file served, which is the same behaviour as SimpleHTTPServer.

The musicsharer application that's built on this class is even rougher, but as it's really just intended for my own personal use, you shouldn't hold your breath waiting for me to tart it up...

About this blog

This blog (mostly) covers technology and software development.

Note: I've recently ported the content from my old blog, which was hosted on Google App Engine and ran on some custom code I wrote, to a static site built with Pelican. I've put various URL rewrite rules in place in the webserver config to try to support the old URLs, but it's likely that I've missed some (probably meta ones related to pagination or tagging), so apologies for any 404 errors you get served.

RSS feed for this blog (RSS icon courtesy of www.feedicons.com)

About the author

I'm a software developer who's worked with a variety of platforms and technologies over the past couple of decades, but for the past 7 or so years I've focussed on web development. Whilst I've always nominally been a "full-stack" developer, I feel more attachment to the back-end side of things.

I'm a web developer for a London-based equities exchange. I've worked at organizations such as News Corporation, Google and BATS Global Markets. Projects I've been involved in have been covered in outlets such as The Guardian, The Telegraph, the Financial Times, The Register and TechCrunch.

Twitter | LinkedIn | GitHub | My CV | Mail


Other sites I've built or been involved with

Work

Most of these have changed quite a bit since my involvement in them...

Personal/fun/experimentation