
ShrimpWorks

// why am I so n00b?

TL;DR: There’s now an MQTT client implementation written in UnrealScript.

I’ve been doing a bit of stuff in UnrealScript recently, and reacquainting myself with it.

Something I’ve always been aware of, but have never really looked at in much detail, is that it has an actual TCP client you can extend to implement whatever remote communications protocol you’d like.

For whatever reason, MQTT popped up as my candidate to play with, the thought being that you’d be able to publish in-game events to topics and build interesting things on top of them (the first thing that came to mind was a match stats collection service which doesn’t rely on the traditional process of log scraping), in addition to allowing in-game functionality to respond to incoming events by way of topic subscriptions. And being something targeted at supporting very simple IoT devices, the protocol should be fairly easy to work with.

Thus, we jump into the comprehensive but sometimes strangely documented MQTT version 5.0 protocol documentation to find out how it works. It is indeed fairly straightforward.

Now to find out how the Unreal Engine 1 TcpLink class works. Keeping in mind this was implemented in the late 90s, data was smaller, data structures were generally less complex, and not everything was networked.

Firstly, opening a connection is a bit of a process.

  1. Request resolution of a hostname with Resolve(hostname).
  2. An event, Resolved(ipAddr), will fire with the resolved IP address (integer representation).
  3. Then you manually bind the client’s ephemeral port with a simple BindPort call - this immediately returns a bound port number.
  4. If your port was bound, you can call Open(ipAddr).
  5. An event, Opened(), will fire when the connection is established, and you may now send and receive data.

So slightly more manual than a higher level implementation in most modern languages, but when you consider the engine is single-threaded, it’s quite a reasonable process to get around blocking on network I/O.

Sending data is fairly simple, via the SendBinary(count, bytes[255]) function. If you have more than 255 bytes of data to send, it’s a simple matter of re-filling the 255 byte array and repeating until you’re done.

Initially, I tried to use the ReceivedBinary(count, bytes[255]) event for processing inbound data, but due to a known engine bug, this only serves up garbage data, so we’re left relying on ReadBinary(count, bytes[255]), which, similar to sending, you can call multiple times on a re-usable buffer until the function returns 0 bytes read.

To make working with data using these processes a bit easier, I implemented a ByteBuffer class, modelled exactly after Java NIO’s ByteBuffer. I feel allocating a re-usable fixed size buffer array which can be compact()ed, followed by a series of put(bytes[255]), and an eventual flip() to allow reading is both performant and simple to reason about.
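
For reference, this is roughly what that cycle looks like with the plain Java NIO ByteBuffer the class is modelled on (illustrative Java only - the buffer size and byte values here are made up, and this isn’t the UnrealScript code itself):

import java.nio.ByteBuffer;

public class ByteBufferExample {
	public static void main(String[] args) {
		// a fixed-size, re-usable buffer
		ByteBuffer buffer = ByteBuffer.allocate(1024);

		// writing: a series of put() calls fills the buffer...
		buffer.put(new byte[] { 0x10, 0x0C }); // e.g. a two-byte packet header
		buffer.put("payload".getBytes());

		// ...and flip() switches it from writing to reading
		buffer.flip();
		byte packetType = buffer.get();
		byte remainingLength = buffer.get();

		// compact() discards what has already been read, keeps the unread
		// remainder, and leaves the buffer ready for the next put()
		buffer.compact();
	}
}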

Implementing this ByteBuffer class also gave me a better understanding of the Java ByteBuffer in the process; even though I’ve been using it for years, building one helps to reinforce understanding of some of the implementation details.

So, using this process of connecting, filling buffers, parsing them according to the specification, sending responses and so on gives us a nice suite of functionality within the client itself. We also want to support custom subscribers which allow other code and mods to receive events from MQTT subscriptions.

UnrealScript of course does not have the concept of interfaces, but does support inheritance, so by extending MQTTSubscriber, custom code can do what it needs to via the receivedMessage(topic, message) event implemented on subclasses of that class.

UnrealScript also provides a very neat child/owner relationship between spawned Actors, and so we’re making use of this to attach subscribers to the MQTT Client. Two standard events the MQTTClient makes use of for this are GainedChild(child) and LostChild(child), which notify the client when a subscriber has been spawned as a child of the client. On gaining a child, the client can automatically establish a subscription for the subscriber’s topic, so it can start receiving those messages. Similarly, when it loses a child, the client can automatically clean up any related topic subscriptions.

This process allows neat life-cycle management of both the subscriber classes themselves, as well as the actual server-side topic subscription, by leveraging built-in language/system functionality.

Overall, I’m happy with the end result, both in the final utility of the implementation and in its usability for users of the classes involved. It was also pretty educational and enlightening to see how this old single-threaded engine deals with network connectivity, and the process of building the ByteBuffer helped reinforce my understanding of Java’s implementation as well.

New Car

22 Mar 2022

New car from a few months ago. First time without a Civic or a Type R badge. First car not bought from Honda Westrand.

It even has plastic wheel covers over steelies.

Frequently, while implementing HTTP API clients or other HTTP clients, you want to be able to test your client implementation against an actual HTTP service, which helps validate that your headers are set correctly, the body is serialised appropriately, and responses are parsed as expected.

This can be done through the use of various additional libraries and mocking frameworks, however I’d argue that for most use cases, something that can simply validate and respond to an HTTP request is more than enough.

For such cases, the example below achieves just that. I find this much quicker and easier to set up, requiring no additional dependencies or learning of a new DSL, and test setup, execution and teardown are at least 3-4 times faster for the same test suite.

import java.io.IOException;
import java.math.BigDecimal;
import java.net.InetSocketAddress;
import java.util.Map;
import java.util.concurrent.Executors;

import com.sun.net.httpserver.HttpServer;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

public class MyAPIClientTest {
	private static final int PORT = 56897;

	private HttpServer server;

	@BeforeEach
	public void before() throws IOException {
		this.server = HttpServer.create(new InetSocketAddress("127.0.0.1", PORT), 0);
		this.server.setExecutor(Executors.newSingleThreadExecutor());
		this.server.start();
	}

	@AfterEach
	public void after() {
		this.server.stop(0);
	}

	@Test
	public void shouldGetBalance() {
		// test setup - define expectations, set up expected response
		this.server.createContext("/za/pb/v1/accounts/172878438321553632224/balance", e -> {
			try {
				assertEquals("GET", e.getRequestMethod());
				assertEquals("Bearer Ms9OsZkyrhBZd5yQJgfEtiDy4t2c", e.getRequestHeaders().getFirst("Authorization"));

				// JSON.toBytes() is a simple wrapper around a Jackson writeValueAsBytes() call
				byte[] result = JSON.toBytes(Map.of(
						"data", Map.of(
								"accountId", "172878438321553632224",
								"currentBalance", BigDecimal.valueOf(28857.76),
								"availableBalance", BigDecimal.valueOf(98857.76),
								"currency", "ZAR"
						),
						"links", Map.of("self", "/za/pb/v1/accounts/172878438321553632224/balance"),
						"meta", Map.of("totalPages", 1)
				));
				e.sendResponseHeaders(200, result.length);
				e.getResponseBody().write(result);
			} finally {
				e.close();
			}
		});
    
		// run test using my client against the API
		MyClient client = new MyClient("127.0.0.1", PORT);
		Balance balance = client.getBalance("172878438321553632224");
		// validate response client gathered, etc...
	}
}

As you can see, we’ve simply set up an expectation based on URL and method, validated that the Authorization header was provided as expected, and then constructed a suitable response in the format the upstream API should be providing.

In place of the constructed API response I’ve used here, one could also easily place the contents of real or documented example API responses directly into the response, for your client to consume.

Seems like it’s only been 4 years since the last time I did this. Really feels a lot longer.

Anyway, I was becoming rather annoyed with trying to keep Ruby gems and things updated and working with Jekyll, with plugins breaking and so on, and having recently built a few other websites using Hugo, I was quite keen on switching.

Hopefully everything’s carried over properly with all the right URLs and things. I’ve also tried to improve some things, and will be expanding the Projects section which I feel doesn’t represent everything that goes on.

As for the style: I just wanted to do something a little more fun and retro looking.

My first website (“Shrimp’s Maps”, the precursor to “ShrimpWorks” and the first website I ever made, which I used for sharing Unreal Tournament levels in early 2000), featured a silly “tech” look with blue gridlines and a cool angular header graphic thing, and all the text was monospace Courier New. Unfortunately it’s completely lost in time.

Once I upgraded to Wordpress in ~2005 (after using a system called Geeklog for a while following Shrimp’s Maps), I adopted the content-with-sidebar style, which I switched away from at some point in favour of menus; so, for more throwback fun, I’ve replicated that old layout style as well.

It’s perhaps a little busy, but with everything going so “clean” and stripped down these days, a bit of busy-ness is not the end of the world :).

Here’s how we looked in 2005! Thanks archive.org!

Here’s how we looked in 2014!

And here’s what we’ve just come from.

There’s a strong tendency to want to run everything in Docker these days, especially if you’re trying to run something as an always-on service, since passing --restart=always to your run invocation or Docker Compose configuration ensures that running containers start back up after reboots or failures, and seems to involve a little less “black magic” than actually configuring software to run as services directly on a host.

The downside to this approach is that running a service in a container leads to significantly longer startup times, more memory and CPU overhead, and lost logs, and in my opinion offers a false sense of security and isolation, since most images are still configured to run as root, and more often than not large swathes of the host filesystem are mounted as volumes to achieve simple tasks.

There’s also a belief that your software will magically run anywhere - but if you’re writing Java (or any JVM language) code, that’s one of Java’s biggest selling points - it already has its own VM your code is running in, on most platforms!

Therefore, let’s see how easy it actually is to configure our software to run as a standard system service, providing us with the ability to run it as a separate restricted user, complete with standard logging configuration, and giving us control via standard service myservice start|status|restart|stop commands.

Continue Reading ...

It’s a really simple thing, but I’ve been using this simple “pattern” for defining value objects for years, and it has served me well.

While there’s nothing particularly special about this style, I still see a significant amount of Java code needlessly following the JavaBeans style, when using these objects as Beans in the strict sense is not actually desired, intended, or required; it simply makes code more verbose, and leaves objects implemented as Beans open to abuse by leaving their internal state open for mutation.

This pattern works well over traditional JavaBeans because:

  • it’s immutable - invaluable for concurrent or multi-threaded applications where you don’t want to give applications the ability to change values as they please
  • it’s neat - due to being immutable, there’s no need for superfluous “setters”, and if there are no setters, there’s no need for “getters”, so the code is dead simple and easy to work with
  • it’s portable - these objects are trivial to serialise using either Java Serialisation (or any of the preferable drop-in replacements), almost any serialisation library will be able to serialise them, and Jackson can deserialise them without any additional code
  • due to all the above, they’re also ideal for use as messages in event-driven systems

Here’s an example of a simple object implemented in this style:

import java.beans.ConstructorProperties;
import java.io.Serializable;

public class User implements Serializable {
  private static final long serialVersionUID = 1L;

  public final String email;
  public final String name;
  public final Address address;

  @ConstructorProperties({ "email", "name", "address" })
  public User(String email, String name, Address address) {
    this.name = name;
    this.email = email;
    this.address = address;  
  }
}

This object is now serialisable and deserialisable via Java serialisation or better alternatives such as FST (just leave off Serializable if you don’t need that), as well as JSON serialisation libraries such as Jackson or GSON.
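
As a quick illustration of the Jackson case (my own example, not from the original post - the values are made up, and it assumes a reasonably recent Jackson version, which honours @ConstructorProperties when binding constructor arguments):

import com.fasterxml.jackson.databind.ObjectMapper;

public class UserJsonExample {
  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();

    // serialisation picks up the public final fields directly, no getters needed
    User user = new User("jane@example.com", "Jane", null);
    String json = mapper.writeValueAsString(user);

    // deserialisation binds JSON properties to constructor arguments via the
    // @ConstructorProperties annotation already on the class, no setters needed
    User parsed = mapper.readValue(json, User.class);
    System.out.println(parsed.email);
  }
}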

Unreal Archive

Over the past several months, I’ve been working on a project to provide a place to catalogue and preserve the vast amounts of user-created content the Unreal and Unreal Tournament community has been creating over the past 20+ years.

This has resulted in the Unreal Archive.

While it may seem a silly cause to invest so much time (and money) into, this stuff directly influenced the lives of myself and thousands of others. I would certainly not be in the profession I’m in, driving my car, living in my house, if not for the direct influence of working on Unreal Tournament maps, mods and community, and personal websites.

This stuff made many of us who we are today, and a lot of it has already been lost in time. The internet may not ever forget, but it certainly misplaces things in ways they can’t be found again.

A lot of content is in fact mirrored in various places on the internet, but it can be hard to download, as people generally don’t appreciate you mirroring 100s of gigabytes off their shared hosting.

Thus, the Unreal Archive is an initiative to gather up, index, and catalogue as much Unreal, UT99 and UT2004 content as possible. So far, we have maps, map packs, voices, skins, mutators and player models, as well as support for things such as patches, updates and drivers, plus a (currently very empty) section for written documents, with the intent of providing guides, tutorials, manuals, and other related documented knowledge which also seems to get lost and forgotten.

The tech stack and some of the decisions involved may seem odd, but in keeping with the theme of longevity, preservation, and the general ease of losing things on the internet, these are some of my motivations:

  • statically generated content - the website is generated as a collection of plain HTML pages. this ensures no dependence on having to host a website with any dependency on any sort of back-end service beyond the simplest of HTTP servers. specific pains have been taken to ensure it works well with file:// local resources as well, so it doesn’t even need to be hosted!
  • written in Java - largely because I know it well enough to do this, but also because it’s not going anywhere soon, so the indexing and site generation capabilities will remain in action for a long time.
  • data stored as YAML files - a dead simple format that’s also easily human-readable. in 30 years when all the YAML parsers have died, if someone looks at these files, they’ll be easy to write new parsers for, if that’s ever needed.
  • the “database” is Git - easy to distribute amongst many people, and since this is primarily an archive, the data does not change rapidly enough to require anything more real-time.
  • the entire project is “licensed” under UNLICENSE, with the intent of it being as absolutely open as possible, for as long as possible.

As I’m collecting a lot of the data for the archive directly from the pieces of content themselves, a large part of implementing this also involved figuring out the Unreal Package data formats. Thankfully there are still several references for this hanging around, and many people have made their research on the topic public.

I’ve released a separate Unreal Package Library (Java) which some people may find useful. I’m using it to read map information, such as authors, player counts, titles, etc, export images such as screenshots and player portraits, as well as for parsing Unreal’s INT and UPL metadata files (more-or-less glorified INI files).

All the code for the project is up on GitHub, as is the content database.

UTStatsDB is a player and match statistics system for Unreal Tournament 99, 2003, 2004 and 3, which parses match logs generated by each game (sometimes requiring additional server-side mutators), and makes stats for each game available through a website.

The stats are also aggregated by player, map and server, allowing you to browse and analyse quite a number of in-depth stats for each.

The project was developed and maintained by Patrick Contreras and Paul Gallier between 2002 and around 2009, after which the original project seems to have been abandoned, some time after the release of UT3. (addendum: by some coincidence, after 9 years of inactivity, the original author did create a release a few days after my revival/release) Locating downloads (the download page is/was not working) or the source (their SCM system seems to require auth or is simply gone) was quite troublesome.

Thankfully it was released under GPL v2, so I’ve taken it upon myself to be this project’s curator (addendum: since the original author also made a new release, I may now need to look into a rename or major version bump), and have since released two new versions, 3.08 and 3.09 which focus firstly on getting PHP support up to scratch so it runs without issue on PHP 7+, as well as implementing PHP’s PDO database abstraction layer for DB access, rather than using each of the supported DB drivers (MySQL, MSSQL, SQLite) directly.

In addition to many other bug fixes and issues, I’ve thus far revised the presentation significantly, provided Docker support, improved performance of several SQL operations by implementing caching and better queries, etc.

UTStatsDB can be found on GitHub, where the latest release can also be downloaded.

A live example of UTStatsDB in action can be found at the UnrealZA stats site.

Said a teary farewell to some old things, got a new thing. Very happy with the new thing.

With all the talk of Unreal Tournament 4 possibly being cancelled one of these days, due to Epic’s runaway success with Fortnite, I’ve decided there’s really no reason to not be playing UT99.

Thus, we set about trying to run it on modern hardware, with a modern Linux installation.

As much as this is about setting things up on Linux, it’s also partially my own attempt at some knowledge preservation, as a lot of this stuff ends up being forgotten or lost over time (it’s been almost 20 years! a lot of the old sites and things you expect to find this info on simply do not exist anymore :()

This is part one of two, and will focus on installing and running the game using Wine.

Continue Reading ...

I recently wanted to set up a couple of rough monitoring services to keep track of simple server status, load, disk etc. While there are options available like Munin which can do this by installing agents on the machines to be monitored, I wanted something a little simpler and more portable.

I’m quite fond of the StatsD + Graphite + Grafana stack, which is quite easy to run thanks to Kamon’s grafana_graphite Docker image, and I realised you can actually quite simply write counters, gauges and timers to StatsD using nothing but the standard Linux tools nc and cron.

For example, every minute on each server being monitored, a simple cron job is executed which uses nc to write a bunch of information to my StatsD service:

#!/bin/bash

HOST=$(hostname)

STAT_HOST="statsd-host"
STAT_PORT=8215

# load average
echo "load.$HOST.avg:`cat /proc/loadavg | cut -d ' ' -f 1 | awk '{print $1*100}'`|g" | nc -w 1 -u $STAT_HOST $STAT_PORT

# memory
echo "memory.$HOST.perc.free:`free | grep Mem | awk '{print $4/$2 * 100.0}'`|g" | nc -w 1 -u $STAT_HOST $STAT_PORT
echo "memory.$HOST.bytes.total:`free -b | grep Mem | awk '{print $2}'`|g" | nc -w 1 -u $STAT_HOST $STAT_PORT
echo "memory.$HOST.bytes.used:`free -b | grep Mem | awk '{print $3}'`|g" | nc -w 1 -u $STAT_HOST $STAT_PORT

# disk
echo "disk.$HOST.kbytes.total:`df -k --output=size / | grep -v [a-z]`|g" | nc -w 1 -u $STAT_HOST $STAT_PORT
echo "disk.$HOST.kbytes.used:`df -k --output=used / | grep -v [a-z]`|g" | nc -w 1 -u $STAT_HOST $STAT_PORT
echo "disk.$HOST.kbytes.avail:`df -k --output=avail / | grep -v [a-z]`|g" | nc -w 1 -u $STAT_HOST $STAT_PORT

# mail queues
for i in maildrop hold incoming active deferred bounce; do echo "postfix.$HOST.queues.${i}:`find /var/spool/postfix/${i} -type f | wc -l`|c"; done | nc -w 1 -u $STAT_HOST $STAT_PORT

It’s perhaps a bit inefficient in places, but gets the job done fairly well. One minute resolution may be a bit rough, but it’s sufficient for most of these data points which don’t change too dramatically over time.

Some other more specific variations include HTTP accesses, ping times, etc. Pretty much any parameter you can parse down to a single number can be published as a counter, gauge or timer to StatsD, and then neatly graphed over time.

I have finally decided to release version 1.0 of Aurial, my implementation of a music player/client for the Subsonic music server.

I started this around two years ago, some time after switching my primary desktop from Windows to Linux, and I really missed foobar2000 - it has been my primary music player ever since. Unfortunately I have an irrational aversion to using Wine to run Windows applications, and none of the native music players on Linux felt good to me. As I already ran a Subsonic music server, I thought I’d just make use of that.

The existing browser-based clients for Subsonic were either too basic, or the state of their code and some implementation features made me uncomfortable. I just wanted a nice music player that allowed me to browse my collection similar to how I did in foobar2000 (using Subsonic’s ID3 tag based APIs, rather than the directory-based browsing offered by other clients), perhaps manage playlists, make ephemeral queues, and importantly, scrobble played tracks.

Podcasts, videos, and other things some clients support don’t interest me at all, and are a bit out of scope of a foobar2000-like client I believe.

Aurial allows me to build a music player the way I prefer to browse, manage and play music (which, admittedly, is quite heavily influenced by my prior foobar2000 configuration and usage habits).


This was my first attempt at a React application, and it started off simply enough, with JSX transpiling and stuff happening directly in the browser. At some point Babel was no longer available for use in the browser, which led to my adoption of Webpack (and eventually Webpack 2) for producing builds.

This also led to things like needing some sort of CI, and I’ve recently begun producing builds via TravisCI which automates building the application, and deploying it to GitHub Pages, which I think is pretty neat.

I also got to play with HTML5’s <audio/> a bit, as the player library I was using previously had some reliance on Flash, and was occasionally tricky to coax into using HTML5 audio rather than Flash. The result is an infinitely smaller and less complex audio playback implementation (it’s amazing how much easier life is when you ignore legacy support).

Anyway, altogether it’s been fun, and as I’m using it constantly, it’s always evolving bit by bit. Hopefully someone else finds it useful too.

The title’s quite silly unfortunately, but I was recently doing some experimentation with uploading images to CouchDB directly from a browser. I needed to scale the images before storage, and since I was talking directly to the CouchDB service without any kind of in-between API services or server-side scripts, I needed a way to achieve this purely on the client.

Thanks to modern APIs available in browsers, combined with a Canvas, it’s actually reasonably simple to process a user-selected image prior to uploading it to the server without the need for any third-party libraries or scripts.

Continue Reading ...

This is a small follow-on on from the Kodi on Debian Sid guide I did earlier this year to get lirc (IR remote support) working once more, following an upgrade to version 0.9.4, which changes how the lirc services and configuration work (shakes fist at systemd).

After upgrading and following all the instructions in /usr/share/doc/lirc/README.Debian.gz, I was left with the problem of Kodi not responding to any remote input at all.

Firstly, I had to re-source my remote’s configuration (mceusb) from the lirc git repository. Place the *.lircd.conf file from there into /etc/lirc/lircd.conf.d/ and remove/rename other .lircd.conf files already in that directory.

Now, running irw and pressing some buttons on your remote should show you the button pressed and the configuration used.

Next up, Kodi fails to connect to the IR device. There are two trivial but non-obvious solutions:

Firstly, without changing any of the default configuration generated by the migration process outlined in the lirc README file, simply change your Kodi startup command as follows:

$ kodi --lircdev /var/run/lirc/lircd

Alternatively, you may change the lirc configuration, to put the device file back where Kodi expects it:

# in /etc/lirc/lirc_options.conf:
output = /dev/lircd

The end result should be you happily continuing with your life.

I recently spent some time in Australia, specifically Sydney and Melbourne, and took a bunch of photos from a few parks and interesting places in Sydney (unfortunately I was pretty ill and didn’t get out very far in Melbourne).

I really enjoyed the number of parks and amount of greenery around the city centres.

All the Sydney images are up here.


Today’s solar eclipse, as viewed from Johannesburg, South Africa, through the lens of a Canon SX50 at 50x optical zoom, through some eclipse viewing eyeware from the 90s.

I recently went through the process of reinstalling the media PC connected to my TV, which I use to run Kodi for movies and TV, and Steam in Big Picture mode, which allows me to stream Windows-only games from my desktop to the couch.

I thought it would be useful to describe my setup and the process to achieve it, in case anyone else is interested in creating their own custom Kodi/Debian/Steam builds.

Continue Reading ...

After almost exactly two years since the last release of Out of Eve, here is version 3.0.

As may be noted from the release note, the main goal of this release is to catch everything up with the current state of EVE, its API, and the static data dump.

Along the way some new stuff was also added and improved, like the new menu system which allows access to all your characters, so there’s no need to switch between them and then view detail pages, and the introduction of memcached caching, which stores and retrieves entities loaded from the static database dump, reducing page load times and database accesses (a single page load may otherwise result in hundreds of individual MySQL queries).

I’m rather pleased with this release, and it seems a lot more solid than most before.

I’ve also got the public Out of Eve website back up, now featuring HTTPS courtesy of Letsencrypt, at last.

It seems surprisingly difficult to find a simple lightbox implementation which doesn’t rely on jQuery. I wanted something simple for this site, but did not want to have to re-do any HTML, so came up with a basic JavaScript and CSS solution.

This also turned out to be a useful lesson in “modern” jQuery-less DOM manipulation. I found 10 Tips for Writing JavaScript without jQuery quite useful in this regard.

For the Lightbox/pop-up itself, the Pure CSS Lightbox by Gregory Schier served as an excellent reference and starting point.

Continue Reading ...

More a curiosity than an actual useful project, I just had an idea I wanted to try out, and this is the result.

This Java application (or library, if you want to include it in your own project) simply takes a source image, a couple of optional parameters, and outputs a new image with a halftone-like effect.

Briefly, it works by stepping through the pixels of the source image at an interval defined by the specified dot size, sampling the brightness of each pixel, and drawing a circle onto the destination image, scaled according to the source pixel’s brightness.

For reference, take a look at the java.awt Graphics2D, Image and BufferedImage classes. It’s really nice to have a whole bunch of image processing and drawing capabilities available within the standard library, rather than needing to rely on external things (as I recently discovered to be the case with Ruby - pretty much all image processing is done via an ImageMagick dependency).
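
To make that description concrete, here’s a rough sketch of the loop using only the standard library (my own illustrative code, not the actual image-halftone implementation - it assumes a simple black-on-white output and ignores the optional parameters):

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class HalftoneSketch {
  public static BufferedImage apply(BufferedImage src, int dotSize) {
    BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
    Graphics2D g = out.createGraphics();
    g.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);

    // white background, black dots
    g.setColor(Color.WHITE);
    g.fillRect(0, 0, out.getWidth(), out.getHeight());
    g.setColor(Color.BLACK);

    // step through the source image at dotSize intervals
    for (int y = 0; y < src.getHeight(); y += dotSize) {
      for (int x = 0; x < src.getWidth(); x += dotSize) {
        Color c = new Color(src.getRGB(x, y));
        // brightness of the sampled pixel, in the range 0..1
        float brightness = Color.RGBtoHSB(c.getRed(), c.getGreen(), c.getBlue(), null)[2];
        // darker pixels produce larger dots
        int diameter = Math.round(dotSize * (1f - brightness));
        if (diameter > 0) {
          g.fillOval(x + (dotSize - diameter) / 2, y + (dotSize - diameter) / 2, diameter, diameter);
        }
      }
    }

    g.dispose();
    return out;
  }
}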

The source, documentation and a download are available from the image-halftone GitHub project page.

Website update

10 Mar 2016

Yup. It looks different. For the first time in over 11 years, this website is not being built dynamically by PHP scripts.

I’ve jumped on the static site generation bandwagon, and after taking a look at several options (primarily Hugo and Nikola), decided to settle on Jekyll. At the end of the day the easiest to install and get started with won, basically (which I found amusing for a Ruby project, for some reason).

There are a couple of reasons for wanting to change. Primarily, it seems every second day I read about some new Wordpress exploit wiping out sites left right and centre. It’s just another point of admin to update things all the time, which I could do without. Additionally, I’m tired of bots smashing into the admin login page all day. While it doesn’t really impact me all that much, it’s just something that bothers me.

The myriad plugins “required” to manage comment spam, the aforementioned login attempts, galleries, links, etc. all provide their own potential security issues, and all need to be regularly updated (assuming their authors didn’t abandon them years ago).

Finally, I wanted to do some custom design (yes, I’m not fantastic at it), but the thought of building mixed HTML and PHP templates for Wordpress horrifies me.

For the conversion, I used the Jekyll Wordpress migration process, which resulted in a bit of a mess, followed by conversion from HTML to Markdown using Pandoc, which did an excellent job. Over several days I had to clean up and reformat most pages, rebuild the galleries, redesign everything, etc., but I feel the result is worth it.

The full source code (plugins, config, assets, posts, etc) is available on GitHub if anyone wants to steal anything.

Now that we have dependency management with Ivy working, along with everything else covered before, we have almost everything required to start building real projects with Ant.

Another thing any real project should have is unit tests. Thankfully, using the scaffolding already put in place in earlier parts of this series, integrating a JUnit testing task into our existing build script is really straightforward.

Continue Reading ...

So far, we’ve covered the basics of creating a re-distributable .jar package suitable for use as a library, and building a Jar file which can be run by a user or server process.

A major part of any non-trivial application these days is the inclusion and re-use of 3rd party libraries which implement functionality your applications require. When a project starts, it’s probably easy enough to manually drop the odd jar library into a lib directory and forget about it, but when maintaining a large application which depends on many libraries, which in turn depend on additional libraries for their own functionality, it can quickly turn into a nightmare to manage.

To solve this problem, many dependency management tools have been introduced, most notably Apache Maven. Maven, however, is much more than just a dependency management tool, and is actually intended to manage your entire project structure. I believe the combination of Ant and Ivy provides far more flexibility, extensibility and control over your build and dependency management processes.

So, let’s integrate Apache Ivy into our Ant script as we left it in part 2.

Continue Reading ...

In part 1, we went over the basics of using Ant to create a redistributable .jar file, suitable for use as a library in other projects. A lot of the time however, you’re probably going to want to be building things which can actually be run as regular Java applications.

Once again, the code for this tutorial is available in GitHub. More usefully, you may want to see the diff between the part 1 script and the new one.

Here’s a quick explanation of what we’ve done to achieve an executable jar file:

Continue Reading ...

Apache Ant is a general-purpose build tool, primarily used for the building of Java applications, but it is flexible enough to be used for various tasks.

In the Java world at least, Ant seems to be largely passed over for the immediate convenience and IDE support of Maven, however long term, I believe a good set of Ant scripts offer far more flexibility and room for tweaking your build processes. The downside is that there’s a lot of stuff you need to learn and figure out and build by hand.

In this series of tutorials, I’ll try to document the process of learning I’ve gone through building and maintaining Ant build files, from the most basic of “just compile my stuff” steps to automatic generation of JavaDoc output, dependency management using Ant’s companion, Ivy, unit testing using JUnit, and integrating with some additional tools I’ve been using, such as Checkstyle and FindBugs.

For part 1 of this tutorial, I’ve created a simple Hello World library. It doesn’t have a main executable itself; the goal is to produce a .jar file we can include in other projects, to start our Ant script off fairly simply.

The source for this project can be found in GitHub. Here’s the breakdown of everything going on in this project:

Continue Reading ...

I’ve been meaning to do some posts on setting up a Java build process using Apache’s Ant and Ivy, but never really got that far.

I’m a fan of allowing build dependencies (beyond the actual Ant binary itself) to download automagically as part of the build, rather than requiring the developer to download and install a bunch of different tools and then orchestrating them via Ant. Essentially you should be able to install Ant, grab the code of something you want to build, and execute it.

To this end I have spent many hours trying to get the FindBugs static analysis tool and its requirements downloaded as Ivy dependencies, as is possible with most tools, but gave up due to some rather weird and seemingly hard-coded dependency paths and file names within the FindBugs project.

So instead I just have it downloading using an Ant “get” task, which feels a bit brute-force, but sometimes you need to compromise. Here’s my solution, presented as an all-in-one Ant target:

Continue Reading ...

Here’s a thing I’ve been wanting for a while now, and have been unable to find something to suit my needs (well, more wants than needs, I guess). I end up generating a lot of text/documentation for various things (both at home and at work), normally spread around a little - project descriptions and introductions in READMEs, APIs and design plans in wikis, sometimes random files, etc - and wanted the ability to consolidate these into collections that could be nicely presented, either publicly or for team reference.

My preferred requirements, which were not met by existing solutions such as Sphinx, Read the Docs, Beautiful docs and Daux.io, are:

  • No need to pre/post-process the input documents as a separate “compile” or parsing step
    • Should use existing plain Markdown documents as input and format output at runtime only
  • Along with the above, the documents should be “live” - if I change the source file, I don’t want to “recompile” my documentation pages, they should reflect changes by default
  • Not a hosted solution
    • Particularly, something anyone can drop on a private server (work environment) or whatever they want to do with it
  • No server-side requirements beyond simple HTTP file serving
  • I may be out of the JavaScript development scene, but what’s up with requiring users to use a dozen different build systems and dependency management frameworks to use your JavaScript app these days?
    • Seriously, the attraction used to be that you could simply drop a couple of HTML, CSS and JS files in your www-root and magic came out. Get off my lawn!

My solution is Markdocs - a simple HTML and JavaScript application for organising individual Markdown documents as a documentation collection.


See the README on the Markdocs GitHub page for usage instructions. Basically, you define the documents to include via a simple JSON file, which is loaded at runtime. The required documents are then loaded using jQuery, parsed at runtime with Marked right in the user’s browser, and a table of contents and the documents themselves are generated and presented using a simple Semantic UI interface.

At present it’s perfectly usable, but there are still a couple of things I want to improve and add, including suitable inter-document linking (while not enforcing any magic link syntax - your stand-alone documents should still work as stand-alone documents) and the ability to provide links to the individual source documents as well as an “Edit” link (for example, letting you define a link to the editable document on GitHub).

Will update as it progresses.

I’ve become fond of using nginx on my development machines, rather than a full Apache.

There are no explicit options built-in which allow something along the same lines as Apache’s userdir, however it’s easy enough to tweak the default configuration to support that behaviour without the need for external modules.

I also do some PHP dabbling from time to time, so need to enable that as well.

Install the required bits:

$ sudo aptitude install nginx php5-fpm

Configure nginx (the below is my customised and cleaned out server definition):

/etc/nginx/sites-available/default

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html index.php;

    server_name _;

    # PHP support in user directories
    location ~ ^/~(.+?)(/.*\.php)$ {
        alias /home/$1/public_html;
        autoindex on;

        include snippets/fastcgi-php.conf;

        try_files $2 =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }

    # PHP support in document root
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }

    # User directories in /home/user/public_html/
    # are accessed via http://host/~user/
    location ~ ^/~(.+?)(/.*)?$ {
        alias /home/$1/public_html$2;
        autoindex on;
    }
}

I also had to make a change to /etc/nginx/snippets/fastcgi-php.conf, to comment out the following line:

#try_files $fastcgi_script_name =404;

After restarting the nginx service (also make sure the php5-fpm service is running), you will be able to serve HTML and PHP files from your ~/public_html directory.

Recently, I’ve made the switch from KDE being my preferred Linux desktop environment/window manager, to i3, a tiling window manager, for both my work and private development environments (my home desktop is still Windows 7, since I do still game enough for it to become painful to dual-boot - so I do most of my development within a VM these days).

I really like its absolutely minimal approach - essentially it does nothing itself; it provides a simple window manager, and near limitless configurability. This has proven an excellent learning experience for me, since it’s forced me to get a lot closer to system components usually “hidden” behind sliders and widgets in KDE or Gnome, as well as a host of alternatives to applications those environments provide by default. It’s also resulted in a much cleaner and faster system, containing only the applications and services I actually want.

We recently installed fresh new desktop machines at work, so I thought I’d share some of my setup, in case it’s of some value to anyone else (and my own future reference!). The following steps assume you know how to operate a basic Debian system. I’m not going to go too deep into any usage details for i3 either, since there’s an excellent user guide and comprehensive FAQ system which should answer any questions you may have.

I’d also advocate using “aptitude” as an alternative to “apt-get” for all package installations, updates and removals.

The Basics

I always start off with a Debian “netinst”. Post-install, this provides an incredibly basic bare-bones OS with a few system utilities (during the installation, de-select the pre-configured “Desktop”, “Web Server”, “Mail Server”, etc. options, just keep the “Standard System Utilities”).

First thing to do after installing is to install sudo and add your user to the sudoers group, to avoid having to be root to get things done. Now’s also a good time to install vim.

I also like seeing Aptitude’s “visual preview” of changes when doing package management, so to avoid having to call $ aptitude --visual-preview install ... on every invocation, we can edit root’s aptitude config:

/root/.aptitude/config:

Aptitude::CmdLine::Visual-Preview "true";

Upgrade to Unstable/Sid

Perhaps a bit reckless, but I’ve honestly never experienced any crippling issues running Debian Unstable (“sid”). You’ll only need to modify /etc/apt/sources.list and replace references to “wheezy” or “testing” with “unstable” or “sid”, and disable the updates and security repositories, leaving you only the main deb and deb-src repositories (I’ve enabled non-free and contrib, since I want to install FlashPlayer and nVidia drivers later):

/etc/apt/sources.list:

deb http://cdn.debian.net/debian/ unstable main non-free contrib
deb-src http://cdn.debian.net/debian/ unstable main non-free contrib

After saving the above changes, execute the following:

$ sudo aptitude update
$ sudo aptitude dist-upgrade

The dist-upgrade step will upgrade all installed packages to whatever’s newest in unstable.

Desktop Install

With the base system as up-to-date as it can be, it’s time to install the desktop environment.

$ sudo aptitude install xorg lightdm i3-wm i3status suckless-tools

After installation, I’d reboot and ensure a nice graphical login prompt appears. After login, you’ll be asked some initial i3 setup questions (which are easy to change later) and land in the default i3 workspace. Press Mod+Enter (Mod being whatever you selected in the aforementioned setup questions - likely the “windows” key, or Alt) to open a new terminal window. It’s probably xterm, which is sort of OK, but I switched to lxterminal - it’s nice and lightweight but still has a fair number of configuration and convenience features (like URL detection - useful for IRC).

If you install another terminal, and opening more terminals results in more xterms rather than your installed terminal, do the following to set your preferred option:

$ sudo update-alternatives --config x-terminal-emulator

Desktop Tweaks

Before digging too deep into installing additional software, it’s a good time to configure some additional options to make life a bit more pleasant.

Look and Feel

In order to make sure your eyes are not offended by the default GTK theme which you may end up seeing a lot of, set up the GTK theme and icon theme:

~/.gtkrc-2.0:

include "/usr/share/themes/Adwaita/gtk-2.0/gtkrc"
gtk-icon-theme-name="Adwaita"

In addition, I found it a lot cleaner and space-maximising to disable i3’s window titles and thin its borders down, by adding the following to ~/.i3/config:

new_window 1pixel
new_float normal

py3status

Install python-pip via Aptitude, and then $ sudo pip install py3status. I use py3status since it provides some nice additional modules, is more flexible, and is fully compatible with the default i3status configuration. It’s also a good time to check out the i3status configuration documentation and do some tweaks, since a couple of the default entries here are likely not too useful.

Wallpaper

Randomised (or fixed if preferred) wallpapers can easily be achieved by installing feh (which makes for a good i3-friendly picture viewer in general) then adding the following to ~/.i3/config:

exec --no-startup-id feh --recursive --randomize --bg-fill ~/Pictures/wallpaper/

Incidentally, the imgur wallpaper gallery is a good place to find some wallpapers.

File Management

Sometimes a GUI file manager can be useful, and for this, a nice light-weight alternative to the bigger desktops’ Nautilus and Dolphin is PCManFM, installed as pcmanfm.

A nice companion application for (compressed) archive management is xarchiver. You may need to install additional tools (such as zip, unzip, unrar-free, etc, depending on the files you commonly work with).

Conclusion

The entire setup to this point should not have taken more than 1-2 hours, depending on download speeds (really, most time is spent just waiting for downloads…), so you can get this kind of environment running with minimal effort and downtime.

I haven’t included anything about multimedia, custom key bindings, lock screens, or others here, but there are loads of other resources around which can fill you in on those and the myriad ways you can configure your i3 environment.

Your next step, if you’re new to i3, should probably be to take a read through the i3 user guide, which is impressively comprehensive.