  • Simple Java Value Objects

    It’s a really simple thing, but I’ve been using this little “pattern” for defining value objects for years, and it has served me well.

    While there’s nothing particularly special about this style, I still see a significant amount of Java code needlessly following the JavaBeans style when using these objects as Beans in the strict sense is not actually desired, intended, or required. That makes code needlessly verbose, and leaves objects implemented as Beans open to abuse, since their internal state is left open to mutation.

    This pattern works well over traditional JavaBeans because:

    • it’s immutable - invaluable for concurrent or multi-threaded applications, where you don’t want other code changing values as it pleases
    • it’s neat - due to being immutable, there’s no need for superfluous “setters”, and since the fields are final and can safely be public, there’s no need for “getters” either, so the code is dead simple and easy to work with
    • it’s portable - these objects are trivial to serialise using Java Serialisation (or any of the preferable drop-in replacements), almost any serialisation library will be able to handle them, and Jackson can deserialise them without any additional code
    • due to all of the above, they’re also ideal for use as messages in event-driven systems

    Here’s an example of a simple object implemented in this style:

    import java.beans.ConstructorProperties;
    import java.io.Serializable;
    
    public class User implements Serializable {
      private static final long serialVersionUID = 1L;
    
      public final String email;
      public final String name;
      public final Address address;
    
      // @ConstructorProperties names the constructor arguments, which is what
      // allows Jackson to deserialise these objects without setters
      @ConstructorProperties({ "email", "name", "address" })
      public User(String email, String name, Address address) {
        this.email = email;
        this.name = name;
        this.address = address;
      }
    }
    

    This object is now serialisable and deserialisable via Java serialisation or better alternatives such as FST (just leave off Serializable if you don’t need that), as well as JSON serialisation libraries such as Jackson or GSON.
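
    For example, with Jackson 2.7 or newer on the classpath (which recognises @ConstructorProperties out of the box), a round-trip needs nothing more than a plain ObjectMapper - a minimal sketch, with illustrative values:

    import com.fasterxml.jackson.databind.ObjectMapper;
    
    public class UserJsonExample {
      public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
    
        // serialisation reads the public final fields directly - no getters required
        String json = mapper.writeValueAsString(new User("me@example.com", "Me", null));
    
        // deserialisation matches JSON properties to constructor arguments via
        // @ConstructorProperties - no setters or default constructor required
        User user = mapper.readValue(json, User.class);
        System.out.println(user.email + " / " + user.name);
      }
    }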

  • Unreal Archive

    Over the past several months, I’ve been working on a project to provide a place to catalogue and preserve the vast amounts of user-created content the Unreal and Unreal Tournament community has been creating over the past 20+ years.

    This has resulted in the Unreal Archive.

    While it may seem a silly cause to invest so much time (and money) in, this stuff directly influenced my life and the lives of thousands of others. I would certainly not be in the profession I’m in, driving my car, or living in my house, if not for the direct influence of working on Unreal Tournament maps, mods, and community and personal websites.

    This stuff made many of us who we are today, and a lot of it has already been lost to time. The internet may never forget, but it certainly misplaces things in ways that mean they can’t be found again.

    A lot of content is in fact mirrored in various places on the internet, but it can be hard to download, as people generally don’t appreciate you mirroring hundreds of gigabytes off their shared hosting.

    Thus, the Unreal Archive is an initiative to gather up, index, and catalogue as much Unreal, UT99 and UT2004 content as possible. So far, we have maps, map packs, voices, skins, mutators and player models, as well as support for things such as patches, updates and drivers, plus a (currently very empty) section for written documents, with the intent of providing guides, tutorials, manuals, and other related documented knowledge which also tends to get lost and forgotten.

    The tech stack and some of the decisions involved may seem odd, but in keeping with the theme of longevity, preservation, and the general ease of losing things on the internet, these are some of my motivations:

    • statically generated content - the website is generated as a collection of plain HTML pages. this ensures there’s no dependence on any sort of back-end service beyond the simplest of HTTP servers. specific pains have been taken to ensure it works well with file:// local resources as well, so it doesn’t even need to be hosted!
    • written in Java - largely because I know it well enough to do this, but also because it’s not going anywhere soon, so the indexing and site generation capabilities will remain in action for a long time.
    • data stored as YAML files - a dead simple format that’s also easily human-readable. in 30 years, when all the YAML parsers have died, these files will be easy to write new parsers for, should that ever be needed.
    • the “database” is Git - easy to distribute amongst many people, and since this is primarily an archive, the data does not change rapidly enough to require anything more real-time.
    • the entire project is “licensed” under UNLICENSE, with the intent of it being as absolutely open as possible, for as long as possible.

    As I’m collecting a lot of the data for the archive directly from the pieces of content themselves, a large part of implementing this also involved figuring out the Unreal Package data formats. Thankfully there are still several references for this hanging around, and many people have made their research on the topic public.

    I’ve released a separate Unreal Package Library (Java) which some people may find useful. I’m using it to read map information such as authors, player counts and titles; to export images such as screenshots and player portraits; and to parse Unreal’s INT and UPL metadata files (more-or-less glorified INI files).
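
    Conceptually, usage looks something like the sketch below - note that these class and method names are hypothetical stand-ins rather than the library’s actual API, which is documented in the project’s README:

    import java.nio.file.Paths;
    
    public class MapInfoExample {
      public static void main(String[] args) throws Exception {
        // hypothetical API: open a map package file
        UnrealPackage pkg = UnrealPackage.open(Paths.get("DM-Deck16][.unr"));
    
        // hypothetical API: read the map's LevelInfo metadata
        LevelInfo level = pkg.levelInfo();
        System.out.println(level.title() + " by " + level.author());
    
        // hypothetical API: export the level screenshot as a PNG image
        level.screenshot().ifPresent(shot -> shot.savePng(Paths.get("shot.png")));
      }
    }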

    All the code for the project is up on GitHub, as is the content database.

  • Dumping PlayStation discs on Linux

    I recently wanted to play the original set of Metal Gear Solid games, but since I’m now lacking any PlayStation hardware, I thought I’d try out emulation.

    I’m quite a fan of RetroArch, which has a suitably good core for PS1 gameplay, providing some pretty good upscaling and other nice options and optimisations, rendering via Vulkan.

    PS2 emulation still really only seems to be available via PCSX2, though it seems pretty solid, despite being 32-bit and still running an OpenGL renderer.

    PS3 emulation via RPCS3, at the time of writing, still has a way to go for most games - in my case, missing Metal Gear Solid IV support in particular is a bit of a downer.

    Anyway, so to use any of these systems, you’re going to need to either stick physical discs into ye olde optical drives, or dump those discs to files for easy switching and managing.

    Note: In both cases below, remember to unmount the discs first. Multi-disc games will also need some manual script rejigging.

    Both of the following scripts should be invoked using:

    $ ./dump-psx.sh GameName
    

    Here’s a simple Bash script for dumping/ripping your PlayStation 1 games (download). This requires installation of package cdrdao.

    #!/bin/bash
    
    # everything is written into a directory named after the game,
    # e.g. ./dump-psx.sh GameName -> GameName/GameName.bin
    GAMEFILE="$1/$1"
    mkdir "$1"
    
    # raw read of the disc into a .bin data file plus a .toc table of contents
    cdrdao read-cd --read-raw --datafile "$GAMEFILE.bin" --device /dev/cdrom --driver generic-mmc-raw "$GAMEFILE.toc"
    
    # convert the TOC to the more widely supported CUE sheet format
    toc2cue "$GAMEFILE.toc" "$GAMEFILE.cue"
    

    And here’s the process for dumping/ripping PlayStation 2 games (download). This requires installation of package genisoimage.

    #!/bin/bash
    
    # everything is written into a directory named after the game
    GAMEFILE="$1/$1"
    mkdir "$1"
    
    # pull the logical block size and volume size (in blocks) from the disc's
    # ISO9660 headers, so dd copies exactly the right amount of data
    BLK_SIZE=$(isoinfo -d -i /dev/cdrom | grep -i -E 'block size' | sed 's/[^0-9]*//')
    VOL_SIZE=$(isoinfo -d -i /dev/cdrom | grep -i -E 'volume size' | sed 's/[^0-9]*//')
    
    dd if=/dev/cdrom of="$GAMEFILE.iso" bs=$BLK_SIZE count=$VOL_SIZE
    
  • UTStatsDB

    UTStatsDB is a player and match statistics system for Unreal Tournament 99, 2003, 2004 and 3, which parses match logs generated by each game (sometimes requiring additional server-side mutators), and makes stats for each game available through a website.

    The stats are also aggregated by player, map and server, allowing you to browse and analyse quite a number of in-depth stats for each.

    The project was developed and maintained by Patrick Contreras and Paul Gallier between 2002 and around 2009, when the original project seems to have been abandoned, some time after the release of UT3. (addendum: by some coincidence, after 9 years of inactivity, the original author did create a release a few days after my revival/release) Locating downloads (the download page is/was not working) or the source (their SCM system seems to require auth, or is simply gone) was quite troublesome.

    Thankfully it was released under GPL v2, so I’ve taken it upon myself to become the project’s curator (addendum: since the original author also made a new release, I may now need to look into a rename or major version bump). I’ve since released two new versions, 3.08 and 3.09, which focus firstly on getting PHP support up to scratch so it runs without issue on PHP 7+, and secondly on implementing PHP’s PDO database abstraction layer for DB access, rather than using each of the supported DB drivers (MySQL, MSSQL, SQLite) directly.

    In addition to many other bug fixes, I’ve thus far revised the presentation significantly, provided Docker support, and improved the performance of several SQL operations by implementing caching and better queries.

    UTStatsDB can be found on GitHub, where the latest release can also be downloaded.

    A live example of UTStatsDB in action can be found at the UnrealZA stats site.

  • New Car

    Said a teary farewell to some old things, got a new thing. Very happy with the new thing.

  • Running Unreal Tournament 99 on Linux (part 2)

    After installing and running Unreal Tournament (UT99) using Wine, I thought it would be good to try running a Linux-native version of the game.

    Thankfully, this is reasonably easy, and seems very stable - more so than running under Wine.

    Before you start, you’ll need the original Unreal Tournament game files, most easily obtainable by installing the Steam version using Wine.

    Continue reading...

  • Running Unreal Tournament 99 on Linux (part 1)

    With all the talk of Unreal Tournament 4 possibly being cancelled one of these days, due to Epic’s runaway success with Fortnite, I’ve decided there’s really no reason to not be playing UT99.

    Thus, we set about trying to run it on modern hardware, with a modern Linux installation.

    As much as this is about setting things up on Linux, it’s also partially my own attempt at some knowledge preservation, as a lot of this stuff ends up being forgotten or lost over time (it’s been almost 20 years! a lot of the old sites and things you expect to find this info on simply do not exist anymore :()

    This is part one of two, and will focus on installing and running the game using Wine.

    Continue reading...

  • Publishing Server Status to StatsD with no additional software

    I recently wanted to set up a couple of rough monitoring services to keep track of simple server status, load, disk etc. While there are options available like Munin which can do this by installing agents on the machines to be monitored, I wanted something a little simpler and more portable.

    I’m quite fond of the StatsD + Graphite + Grafana stack, which is quite easy to run thanks to Kamon’s grafana_graphite Docker image, and I realised you can actually quite simply write counters, gauges and timers to StatsD using nothing but the standard Linux tools nc and cron.

    For example, every minute on each server being monitored, a simple cron job is executed which uses nc to write a bunch of information to my StatsD service:

    #!/bin/bash
    
    HOST=$(hostname)
    
    STAT_HOST="statsd-host"
    STAT_PORT=8215
    
    # load average
    echo "load.$HOST.avg:`cat /proc/loadavg | cut -d ' ' -f 1 | awk '{print $1*100}'`|g" | nc -w 1 -u $STAT_HOST $STAT_PORT
    
    # memory
    echo "memory.$HOST.perc.free:`free | grep Mem | awk '{print $3/$2 * 100.0}'`|g" | nc -w 1 -u $STAT_HOST $STAT_PORT
    echo "memory.$HOST.bytes.total:`free -b | grep Mem | awk '{print $2}'`|g" | nc -w 1 -u $STAT_HOST $STAT_PORT
    echo "memory.$HOST.bytes.used:`free -b | grep Mem | awk '{print $3}'`|g" | nc -w 1 -u  $STAT_HOST $STAT_PORT
    
    # disk
    echo "disk.$HOST.kbytes.total:`df -k --output=size / | grep -v [a-z]`|g" | nc -w 1 -u $STAT_HOST $STAT_PORT
    echo "disk.$HOST.kbytes.used:`df -k --output=used / | grep -v [a-z]`|g" | nc -w 1 -u $STAT_HOST $STAT_PORT
    echo "disk.$HOST.kbytes.avail:`df -k --output=avail / | grep -v [a-z]`|g" | nc -w 1 -u $STAT_HOST $STAT_PORT
    
    # mail queues
    for i in maildrop hold incoming active deferred bounce; do echo "postfix.$HOST.queues.${i}:`find /var/spool/postfix/${i} -type f | wc -l`|c"; done | nc -w 1 -u $STAT_HOST $STAT_PORT
    

    It’s perhaps a bit inefficient in places, but it gets the job done fairly well. One-minute resolution may be a bit rough, but it’s sufficient for most of these data points, which don’t change too dramatically over time.

    Some other more specific variations include HTTP accesses, ping times, etc. Pretty much any parameter you can parse down to a single number can be published as a counter, gauge or timer to StatsD, and then neatly graphed over time.
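
    Publishing a ping time, for example, follows exactly the same pattern (a hypothetical variation - the target host and metric name are illustrative, with STAT_HOST and STAT_PORT as in the script above):

    #!/bin/bash
    
    HOST=$(hostname)
    
    STAT_HOST="statsd-host"
    STAT_PORT=8215
    
    # average round-trip time in milliseconds, published as a StatsD timer
    RTT=$(ping -c 3 -q example.com | awk -F '/' '/^rtt|^round-trip/ {print $5}')
    echo "ping.$HOST.example:${RTT}|ms" | nc -w 1 -u $STAT_HOST $STAT_PORT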

  • Aurial, HTML5 Subsonic Music Player

    I have finally decided to release version 1.0 of Aurial, my implementation of a music player/client for the Subsonic music server.

    I started this around two years ago, some time after switching my primary desktop from Windows to Linux, where I really missed foobar2000, which had been my primary music player until then. Unfortunately I have an irrational aversion to using Wine to run Windows applications, and none of the native music players on Linux felt good to me. As I already ran a Subsonic music server, I thought I’d just make use of that.

    The existing browser-based clients for Subsonic were either too basic, or the state of their code and some implementation features made me uncomfortable. I just wanted a nice music player that allowed me to browse my collection similar to how I did in foobar2000 (using Subsonic’s ID3 tag based APIs, rather than the directory-based browsing offered by other clients), perhaps manage playlists, make ephemeral queues, and importantly, scrobble played tracks.

    Podcasts, videos, and other things some clients support don’t interest me at all, and are a bit out of scope for a foobar2000-like client, I believe.

    Aurial lets me build a music player that works the way I prefer to browse, manage and play music (which, admittedly, is quite heavily influenced by my prior foobar2000 configuration and usage habits).

    This was my first attempt at a React application, and it started off simply enough, with JSX transpiling and everything happening directly in the browser. At some point Babel was no longer available for in-browser use, which led to my adoption of Webpack (and eventually Webpack 2) for producing builds.

    This also led to things like needing some sort of CI, and I’ve recently begun producing builds via TravisCI, automating the build and its deployment to GitHub Pages, which I think is pretty neat.

    I also got to play with HTML5’s <audio/> a bit, as the player library I was using previously had some reliance on Flash, and was occasionally tricky to coax into using HTML instead. The result is an infinitely smaller and less complex audio playback implementation (it’s amazing how much easier life is when you ignore legacy support).

    Anyway, altogether it’s been fun, and as I’m using it constantly, it’s always evolving bit by bit. Hopefully someone else finds it useful too.

  • Client-Side Processing of Images with JavaScript Before Uploading

    The title’s quite silly unfortunately, but I was recently doing some experimentation with uploading images to CouchDB directly from a browser. I needed to scale the images before storage, and since I was talking directly to the CouchDB service, with no in-between API services or server-side scripts, I needed a way to achieve this purely on the client.

    Thanks to modern APIs available in browsers, combined with a Canvas, it’s actually reasonably simple to process a user-selected image prior to uploading it to the server without the need for any third-party libraries or scripts.
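
    The gist of it looks something like this (a minimal sketch - the element ID, target size and upload step are illustrative):

    // read the user-selected file into an image element
    const file = document.querySelector('#file-input').files[0];
    const img = new Image();
    
    img.onload = () => {
      // draw the image onto a canvas at a reduced size
      const width = 800;
      const height = img.height * (width / img.width);
      const canvas = document.createElement('canvas');
      canvas.width = width;
      canvas.height = height;
      canvas.getContext('2d').drawImage(img, 0, 0, width, height);
    
      // extract the scaled result as a Blob, ready for upload
      canvas.toBlob((blob) => {
        // PUT the blob to the server (a CouchDB attachment, in my case)
      }, 'image/jpeg', 0.9);
    };
    
    img.src = URL.createObjectURL(file);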

    Continue reading...