ShrimpWorks

// why am I so n00b?

TL;DR: There’s now an MQTT client implementation written in UnrealScript

I’ve been doing a bit of stuff in UnrealScript recently, and reacquainting myself with it.

Something I’ve always been aware of, but have never really looked at in much detail, is that it has an actual TCP client you can extend to implement whatever remote communications protocol you’d like.

For whatever reason, MQTT popped up as my candidate to play with, the thought being that you’d be able to publish in-game events to topics and build interesting things with them (the first thing that came to mind was a match stats collection service which doesn’t rely on the traditional process of log scraping), in addition to allowing in-game functionality to respond to incoming events by way of topic subscriptions. And being targeted at supporting very simple IoT devices, the protocol should be fairly easy to work with.

Thus, we jump into the comprehensive, if sometimes strangely organised, MQTT version 5.0 protocol documentation to find out how it works. It is indeed fairly straightforward.
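
As a small taste of the protocol: every MQTT control packet starts with a fixed header containing a packet type byte, followed by the packet’s “remaining length” encoded as a variable-byte integer (up to 4 bytes, 7 bits of value each, with the high bit flagging a continuation byte). A minimal sketch of decoding that field in UnrealScript (function and parameter names here are illustrative, not the exact ones from my implementation):

    // Decode MQTT's "remaining length" variable-byte integer, starting at
    // Data[Pos]; Pos is advanced past the consumed bytes.
    function int DecodeRemainingLength(byte Data[255], out int Pos)
    {
        local int Value, Multiplier;
        local byte Encoded;

        Multiplier = 1;
        do
        {
            Encoded = Data[Pos];
            Pos++;
            Value += (Encoded & 0x7F) * Multiplier;  // low 7 bits carry the value
            Multiplier *= 128;
        } until ((Encoded & 0x80) == 0);             // high bit clear: last byte

        return Value;
    }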

Now to find out how the Unreal Engine 1 TcpLink class works. Keep in mind this was implemented in the late 90s, when data was smaller, data structures were generally less complex, and not everything was networked.

Firstly, opening a connection is a bit of a process (see the sketch after this list):

  1. Request resolution of a hostname with Resolve(hostname)
  2. An event, Resolved(ipAddr), will fire with the resolved IP address (in integer representation)
  3. Then, manually bind the client’s ephemeral port with a simple BindPort call, which immediately returns the bound port number
  4. If your port was bound, you can call Open(ipAddr)
  5. An event, Opened(), will fire when the connection is established, and you may now send and receive data
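
Put together, a minimal connection sketch looks something like this (assuming a client class extending TcpLink, with hostname and port configured elsewhere; error handling mostly trimmed):

    class MQTTClient extends TcpLink;

    var string TargetHost;
    var int TargetPort;

    function Connect()
    {
        LinkMode = MODE_Binary;      // we exchange raw bytes, not lines of text
        ReceiveMode = RMODE_Manual;  // poll with ReadBinary (see below)
        Resolve(TargetHost);         // 1. request asynchronous DNS resolution
    }

    event Resolved(IpAddr Addr)
    {
        // 2. resolution completed; fill in the target port before connecting
        Addr.Port = TargetPort;
        if (BindPort() == 0)         // 3. bind a local ephemeral port
        {
            Log("Failed to bind a local port");
            return;
        }
        Open(Addr);                  // 4. begin establishing the connection
    }

    event Opened()
    {
        // 5. connected; it is now safe to send the MQTT CONNECT packet
    }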

So, slightly more manual than the higher-level implementations found in most modern languages, but when you consider that the engine is single-threaded, it’s quite a reasonable way to work around blocking on network I/O.

Sending data is fairly simple, via the SendBinary(count, bytes[255]) function. If you have more than 255 bytes of data to send, it’s a simple matter of re-filling the 255-byte array and sending again, until you’re done.
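
For illustration, a hypothetical helper that pushes a larger payload through in 255-byte chunks might look like this (the 1024-byte source array is arbitrary):

    function SendPayload(byte Payload[1024], int Length)
    {
        local byte Chunk[255];
        local int Offset, i, Count;

        for (Offset = 0; Offset < Length; Offset += Count)
        {
            Count = Min(255, Length - Offset);   // up to 255 bytes per send
            for (i = 0; i < Count; i++)
            {
                Chunk[i] = Payload[Offset + i];  // re-fill the send buffer
            }
            SendBinary(Count, Chunk);
        }
    }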

Initially, I tried to use the ReceivedBinary(count, bytes[255]) event for processing inbound data, but due to a known engine bug this only serves up garbage, so we’re left relying on ReadBinary(count, bytes[255]), which, similar to sending, you can call repeatedly with a re-usable buffer until it returns 0 bytes read.
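
In manual receive mode, that drain loop ends up looking something like this sketch (Tick-driven here; ProcessBytes is a hypothetical stand-in for whatever consumes the data):

    event Tick(float DeltaTime)
    {
        local byte In[255];
        local int ReadCount;

        if (LinkState == STATE_Connected)
        {
            do
            {
                ReadCount = ReadBinary(255, In);  // 0 means nothing left to read
                if (ReadCount > 0)
                {
                    ProcessBytes(In, ReadCount);  // hypothetical: buffer/parse the bytes
                }
            } until (ReadCount <= 0);
        }
    }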

To make working with data via these processes a bit easier, I implemented a ByteBuffer class, modelled exactly after Java NIO’s ByteBuffer. Allocating a re-usable fixed-size buffer array which can be compact()ed, followed by a series of put(bytes[255]) calls and an eventual flip() to allow reading, is both performant and simple to reason about.
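
With that in place, the receive path reads much like its Java counterpart would. A sketch of the cycle (method names mirror Java NIO; the exact UnrealScript signatures are assumed here):

    var ByteBuffer buffer;  // re-usable receive buffer, allocated once

    function PumpInput()
    {
        local byte In[255];
        local int ReadCount;

        buffer.compact();                // reclaim space, preserving unparsed bytes
        ReadCount = ReadBinary(255, In);
        while (ReadCount > 0)
        {
            buffer.put(In, ReadCount);   // append each chunk as it arrives
            ReadCount = ReadBinary(255, In);
        }
        buffer.flip();                   // switch the buffer from writing to reading
        // ... parse complete MQTT packets; partial ones wait for the next pump ...
    }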

Implementing this ByteBuffer class also gave me a better understanding of the Java ByteBuffer in the process; even though I’ve been using it for years, re-implementing it helped reinforce my understanding of some of the implementation details.

So, using this process of connecting, filling buffers, parsing them according to the specification, sending responses and so on gives us a nice suite of functionality within the client itself. We also want to support custom subscribers which allow other code and mods to receive events from MQTT subscriptions.

UnrealScript of course does not have the concept of interfaces, but it does support inheritance, so by extending MQTTSubscriber, custom code can do whatever it needs to by implementing receivedMessage(topic, message) in subclasses of that class.
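
A minimal subscriber might look like this (the receivedMessage signature and the topic property are assumptions about the published classes, for illustration only):

    class MatchStatsSubscriber extends MQTTSubscriber;

    function receivedMessage(string topic, string message)
    {
        // react to a message published to our subscribed topic
        Log("[" $ topic $ "] " $ message);
    }

    // (hypothetical) the topic this subscriber listens on, set via defaults
    defaultproperties
    {
        topic="ut/match/events"
    }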

UnrealScript also provides a very neat child/owner relationship between spawned Actors, and so we’re making use of this to attach subscribers to the MQTT client. Two standard events the MQTTClient makes use of for this are GainedChild(child) and LostChild(child), which notify the client when a subscriber is spawned as its child, or destroyed. On gaining a child, the client can automatically establish a subscription for the subscriber’s topic, so it can start receiving those messages. Similarly, when it loses a child, the client can automatically clean up any related topic subscriptions.
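
Those hooks end up looking roughly like this on the client (Subscribe/Unsubscribe and the topic property are stand-ins for whatever the real implementation names them):

    event GainedChild(Actor Other)
    {
        Super.GainedChild(Other);
        if (MQTTSubscriber(Other) != None)
            Subscribe(MQTTSubscriber(Other).topic);    // hypothetical: sends SUBSCRIBE
    }

    event LostChild(Actor Other)
    {
        Super.LostChild(Other);
        if (MQTTSubscriber(Other) != None)
            Unsubscribe(MQTTSubscriber(Other).topic);  // hypothetical: sends UNSUBSCRIBE
    }

Attaching a subscriber is then just a matter of spawning it with the client as its owner, along the lines of Spawn(class'MatchStatsSubscriber', client).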

This process allows neat life-cycle management of both the subscriber classes themselves, as well as the actual server-side topic subscription, by leveraging built-in language/system functionality.

Overall, I’m happy with the end result, both in the final utility of the implementation and in its usability for users of the classes involved. It was also pretty educational and enlightening to see how this old single-threaded engine deals with network connectivity, and the process of building the ByteBuffer helped reinforce my understanding of Java’s implementation as well.

Unreal Archive

Over the past several months, I’ve been working on a project to provide a place to catalogue and preserve the vast amounts of user-created content the Unreal and Unreal Tournament community has been creating over the past 20+ years.

This has resulted in the Unreal Archive.

While it may seem a silly cause to invest so much time (and money) in, this stuff directly influenced the lives of myself and thousands of others. I would certainly not be in the profession I’m in, driving my car, or living in my house, if not for the direct influence of working on Unreal Tournament maps, mods and community, and personal websites.

This stuff made many of us who we are today, and a lot of it has already been lost to time. The internet may not ever forget, but it certainly misplaces things in ways that mean they can’t be found again.

A lot of content is in fact mirrored in various places on the internet, but it can be hard to download, as people generally don’t appreciate you mirroring hundreds of gigabytes off their shared hosting.

Thus, the Unreal Archive is an initiative to gather up, index, and catalogue as much Unreal, UT99 and UT2004 content as possible. So far, we have maps, map packs, voices, skins, mutators, and player models, as well as support for things such as patches, updates and drivers, plus a (currently very empty) section for written documents, with the intent of providing guides, tutorials, manuals, and other documented knowledge which also seems to get lost and forgotten.

The tech stack and some of the decisions involved may seem odd, but in keeping with the theme of longevity, preservation, and the general ease of losing things on the internet, these are some of my motivations:

  • statically generated content - the website is generated as a collection of plain HTML pages. this ensures no dependence on hosting a website with any sort of back-end service beyond the simplest of HTTP servers. particular care has been taken to ensure it works well with file:// local resources as well, so it doesn’t even need to be hosted!
  • written in Java - largely because I know it well enough to do this, but also because it’s not going anywhere soon, so the indexing and site generation capabilities will remain in action for a long time.
  • data stored as YAML files - a dead simple format that’s also easily human-readable. in 30 years, when all the YAML parsers have died, if someone looks at these files, they’ll be easy to write new parsers for, if that’s ever needed.
  • the “database” is Git - easy to distribute amongst many people, and since this is primarily an archive, the data does not change rapidly enough to require anything more real-time.
  • the entire project is “licensed” under UNLICENSE, with the intent of it being as absolutely open as possible, for as long as possible.

As I’m collecting a lot of the data for the archive directly from the pieces of content themselves, a large part of implementing this also involved figuring out the Unreal Package data formats. Thankfully there are still several references for this hanging around, and many people have made their research on the topic public.

I’ve released a separate Unreal Package Library (Java) which some people may find useful. I’m using it to read map information such as authors, player counts and titles, to export images such as screenshots and player portraits, and to parse Unreal’s INT and UPL metadata files (more-or-less glorified INI files).

All the code for the project is up on GitHub, as is the content database.

UTStatsDB is a player and match statistics system for Unreal Tournament 99, 2003, 2004 and 3, which parses match logs generated by each game (sometimes requiring additional server-side mutators), and makes stats for each game available through a website.

The stats are also aggregated by player, map and server, allowing you to browse and analyse quite a number of in-depth stats for each.

The project was developed and maintained by Patrick Contreras and Paul Gallier between 2002 and around 2009, at which point the original project seems to have been abandoned, some time after the release of UT3 (addendum: by some coincidence, after 9 years of inactivity, the original author did create a release a few days after my revival/release). Locating downloads (the download page is/was not working) or the source (their SCM system seems to require auth, or is simply gone) was quite troublesome.

Thankfully it was released under GPL v2, so I’ve taken it upon myself to become this project’s curator (addendum: since the original author also made a new release, I may now need to look into a rename or major version bump). I have since released two new versions, 3.08 and 3.09, which focus firstly on getting PHP support up to scratch so it runs without issue on PHP 7+, and secondly on implementing PHP’s PDO database abstraction layer for DB access, rather than using each of the supported DB drivers (MySQL, MSSQL, SQLite) directly.

In addition to fixing many other bugs and issues, I’ve thus far revised the presentation significantly, provided Docker support, and improved the performance of several SQL operations by implementing caching and better queries.

UTStatsDB can be found on GitHub, where the latest release can also be downloaded.

A live example of UTStatsDB in action can be found at the UnrealZA stats site.

With all the talk of Unreal Tournament 4 possibly being cancelled one of these days due to Epic’s runaway success with Fortnite, I’ve decided there’s really no reason not to be playing UT99.

Thus, we set about trying to run it on modern hardware, with a modern Linux installation.

As much as this is about setting things up on Linux, it’s also partially my own attempt at some knowledge preservation, as a lot of this stuff ends up being forgotten or lost over time (it’s been almost 20 years! a lot of the old sites and things you’d expect to find this info on simply don’t exist anymore :()

This is part one of two, and will focus on installing and running the game using Wine.

Continue Reading ...

I recently went through the process of reinstalling the media PC connected to my TV, which I use to run Kodi for movies and TV, and Steam in Big Picture mode, which allows me to stream Windows-only games from my desktop to the couch.

I thought it would be useful to describe my setup and the process to achieve it, in case anyone else is interested in creating their own custom Kodi/Debian/Steam builds.

Continue Reading ...

Wow. I hadn’t been paying much attention to this, as it’s been a while since anything with the “Tribes” name on it was worth following. This really does look pretty awesome though! It should be out this year as well.

I really have to use this thing more often than once or twice a year :D. I’m just going to do a few posts on a couple of subjects just for personal lols.

Lots of Team Fortress 2 has been played. I thought since it went free, there’d be plenty of noobs to beat up. Sadly all the kids playing games these days are not noobs ;). I’m also really enjoying playing an online FPS on PC once more - something I haven’t done to any degree since UT3 fizzled (did it ever really get going at all, though?).

The more I play however, the more I long for something like Unreal to return.

TF2 is great fun, but it all feels extremely luck- and spam-based. Pretty much every class, with the exception of the Spy and Sniper, can be played by running headlong into the fight with your finger glued to the trigger and hoping for the best.

As much as I noobed my way through UT for years, I find myself missing the “precision” elements of that sort of game: carefully timing armour and pickup spawns, shock-comboing someone halfway across the map, the intensity of a close CTF match. Sure, there are still plenty of Flak monkeys and plenty of rocket spam, but TF2 just feels pretty flat compared to all that.

They just don’t make them the way they used to :). Quake Live doesn’t count because it’s Quake and I suck at it. Looking forward to trying Rage though, and I hope the multiplayer (on PC) is not too full of novelties like vehicles and crazy weapons. I could really do with some old-fashioned DM!

Hah. Only 3 months late.

Out of Eve has been fully updated to Quantum Rise spec, with the promised journal feeds, API key security, and a number of other tweaks. OutofEve.com has been updated to the latest available version, and the source is available for download.

Please leave any feedback in the comments of this post. I’ll set up a proper OOE page on this site at some point, with download links and more detailed information.

As mentioned previously, I just wanted to outline a few plans for a new Out Of Eve version, mostly for my own reference, as I’m finding it much easier to work toward goals which are actually written down/typed up (lol?).

Obviously the first order of business is Empyrean Age compatibility. A number of table and field names have changed and require some code updates. Lots of icons have been added and updated, so I would also like to make use of those. Unfortunately a number of images are actually missing from the EA icon dump (drones, rookie ships), so a simple drop-in replacement doesn’t work so well.

Another essential requirement, which should probably have been included in the original release, is encrypted API keys. My plan is to encrypt and decrypt these with a simple key file stored elsewhere in the filesystem - away from the usual configuration file, database and published www documents - so that if any of those are compromised, the API keys are useless to anyone snooping them without the key file. This also requires a method to automatically update existing unencrypted API keys.

Another handy feature would be the introduction of Atom feeds for market and journal transactions. My initial idea was an entry for each new transaction; however, anyone doing a lot of trading would find their feed reader overloaded quite quickly. The obviously better solution is to generate entries containing all transactions since the last feed poll (taking into account API caching delays as well). I know I’d find this one particularly useful.

Actually that’s all :-). If all goes well, it should be releasable by the end of the weekend.