
ShrimpWorks

// why am I so n00b?

With Ivy handling dependency management, on top of everything else covered so far, we now have almost everything required to start building real projects with Ant.

Another thing any real project should have is unit tests. Thankfully, using the scaffolding already put in place in earlier parts of this series, integrating a JUnit testing task into our existing build script is straightforward.
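To give an idea of what’s involved, a minimal test target might look something like the sketch below. This assumes compiled test classes end up in ${build.dir}/test-classes and that JUnit is available via a classpath.test path - both names are placeholders for this example, not necessarily those used in the series:

<target name="test" depends="compile-tests">
    <mkdir dir="${build.dir}/test-reports"/>
    <junit printsummary="yes" haltonfailure="yes">
        <classpath>
            <path refid="classpath.test"/>
            <pathelement location="${build.dir}/test-classes"/>
        </classpath>
        <formatter type="plain"/>
        <!-- run every *Test class found in the test source tree -->
        <batchtest todir="${build.dir}/test-reports">
            <fileset dir="${test.src.dir}" includes="**/*Test.java"/>
        </batchtest>
    </junit>
</target>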

Continue Reading ...

So far, we’ve covered the basics of creating a re-distributable .jar package suitable for use as a library, and building a Jar file which can be run by a user or server process.

A major part of any non-trivial application these days is the inclusion and re-use of third-party libraries which implement functionality your application requires. When a project starts, it’s probably easy enough to manually drop the odd jar into a lib directory and forget about it, but in a large application which depends on many libraries, which in turn depend on further libraries for their own functionality, this can quickly turn into a management nightmare.

To solve this problem, many dependency management tools have been introduced, most notably Apache Maven. Maven, however, is much more than a dependency management tool: it is intended to manage your entire project structure. I believe the combination of Ant and Ivy provides far more flexibility, extensibility and control over your build and dependency management processes.

So, let’s integrate Apache Ivy into our Ant script as we left it in part 2.
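In broad strokes (the full details follow in the post), dependencies are declared in an ivy.xml file alongside the build script, and an Ant target resolves them into a local lib directory. A rough sketch, with illustrative module names:

ivy.xml:

<ivy-module version="2.0">
    <info organisation="org.example" module="hello-world"/>
    <dependencies>
        <!-- each dependency is fetched, along with its own dependencies -->
        <dependency org="junit" name="junit" rev="4.11"/>
    </dependencies>
</ivy-module>

build.xml fragment (assumes the Ivy jar is available to Ant, and the ivy namespace is declared as xmlns:ivy="antlib:org.apache.ivy.ant"):

<target name="resolve" description="retrieve dependencies with ivy">
    <ivy:retrieve pattern="lib/[artifact]-[revision].[ext]"/>
</target>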

Continue Reading ...

In part 1, we went over the basics of using Ant to create a redistributable .jar file, suitable for use as a library in other projects. A lot of the time, however, you’re going to want to build things which can actually be run as regular Java applications.

Once again, the code for this tutorial is available on GitHub. More usefully, you may want to see the diff between the part 1 script and the new one.

Here’s a quick explanation of what we’ve done to achieve an executable jar file:
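The heart of it is adding a Main-Class attribute to the jar’s manifest, so java -jar knows which class’s main() method to run. A trimmed-down sketch (the property names are placeholders for this example):

<target name="jar" depends="compile">
    <jar destfile="${dist.dir}/${ant.project.name}.jar" basedir="${build.dir}/classes">
        <manifest>
            <!-- the class containing public static void main(String[]) -->
            <attribute name="Main-Class" value="${main.class}"/>
        </manifest>
    </jar>
</target>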

Continue Reading ...

Apache Ant is a general-purpose build tool, primarily used for building Java applications, but flexible enough to automate many other kinds of tasks.

In the Java world at least, Ant seems to be largely passed over for the immediate convenience and IDE support of Maven; long term, however, I believe a good set of Ant scripts offers far more flexibility and room for tweaking your build processes. The downside is that there’s a lot you need to learn, figure out, and build by hand.

In this series of tutorials, I’ll try to document the learning process I’ve gone through building and maintaining Ant build files, from the most basic “just compile my stuff” steps to automatic generation of JavaDoc output, dependency management using Ant’s companion, Ivy, unit testing using JUnit, and integration with some additional tools I’ve been using, such as Checkstyle and FindBugs.

For part 1 of this tutorial, I’ve created a simple Hello World library. It doesn’t have a main executable itself; the goal is to produce a .jar file we can include in other projects, which keeps our first Ant script fairly simple.

The source for this project can be found on GitHub. Here’s the breakdown of everything going on in this project:
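As a rough preview, the core of such a script is little more than a javac target feeding a jar target; something along these lines (directory names are illustrative - the real script is in the repository):

<project name="hello-world" default="jar" basedir=".">
    <property name="src.dir" value="src"/>
    <property name="build.dir" value="build"/>
    <property name="dist.dir" value="dist"/>

    <target name="compile">
        <mkdir dir="${build.dir}/classes"/>
        <javac srcdir="${src.dir}" destdir="${build.dir}/classes" includeantruntime="false"/>
    </target>

    <!-- package the compiled classes as a library jar -->
    <target name="jar" depends="compile">
        <mkdir dir="${dist.dir}"/>
        <jar destfile="${dist.dir}/${ant.project.name}.jar" basedir="${build.dir}/classes"/>
    </target>
</project>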

Continue Reading ...

Here’s a thing I’ve been wanting for a while now, but have been unable to find something to suit my needs (well, more wants than needs, I guess). I end up generating a lot of text/documentation for various things (both at home and at work), normally spread around a little - project descriptions and introductions in READMEs, APIs and design plans in wikis, sometimes random files, etc. - and wanted the ability to consolidate these into collections that could be nicely presented, either publicly or for team reference.

My preferred requirements, which were not met by existing solutions such as Sphinx, Read the Docs, Beautiful docs and Daux.io, are:

  • No need to pre- or post-process the input documents as a separate “compile” or parsing step
    • Should use existing plain Markdown documents as input and format output at runtime only
  • Along with the above, the documents should be “live” - if I change the source file, I don’t want to “recompile” my documentation pages; they should reflect changes by default
  • Not a hosted solution
    • In particular, something anyone can drop onto a private server (e.g. a work environment), or otherwise do with as they please
  • No server-side requirements beyond simple HTTP file serving
  • I may be out of the JavaScript development scene, but what’s up with requiring a dozen different build systems and dependency management frameworks just to use a JavaScript app these days?
    • Seriously, the attraction used to be that you could simply drop a couple of HTML, CSS and JS files in your www-root and magic came out. Get off my lawn!

My solution is Markdocs - a simple HTML and JavaScript application for organising individual Markdown documents as a documentation collection.

[Screenshot: Markdocs]

See the README on the Markdocs GitHub page for usage instructions. Basically, you define the documents to include in a simple JSON file, which is loaded at runtime. The documents themselves are then fetched using jQuery, parsed right in the user’s browser with Marked, and presented, along with a generated table of contents, in a simple Semantic UI interface.
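As an illustration, such a definition file might look something like this (the field names here are invented for the example; the README documents the actual format):

{
    "title": "My Project Documentation",
    "documents": [
        { "title": "Introduction", "file": "docs/intro.md" },
        { "title": "API Reference", "file": "docs/api.md" }
    ]
}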

At present it’s perfectly usable, but there are still a couple of things I want to improve and add, including suitable inter-document linking (while not enforcing any magic link syntax - your stand-alone documents should still work as stand-alone documents) and the ability to provide links to the individual source documents, as well as an “Edit” link (for example, letting you define a link to the editable document on GitHub).

Will update as it progresses.

I’ve become fond of using nginx on my development machines, rather than a full Apache.

There are no explicit built-in options for something along the same lines as Apache’s userdir, but it’s easy enough to tweak the default configuration to support that behaviour without the need for external modules.

I also do some PHP dabbling from time to time, so need to enable that as well.

Install the required bits:

$ sudo aptitude install nginx php5-fpm

Configure nginx (below is my customised and cleaned-up server definition):

/etc/nginx/sites-available/default

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    # index.php added to the list for PHP support
    index index.html index.htm index.nginx-debian.html index.php;

    server_name _;

    # PHP support in user directories
    location ~ ^/~(.+?)(/.*\.php)$ {
        alias /home/$1/public_html;
        autoindex on;

        include snippets/fastcgi-php.conf;

        try_files $2 =404;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }

    # PHP support in document root
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }

    # User directories in /home/user/public_html/
    # are accessed via http://host/~user/
    location ~ ^/~(.+?)(/.*)?$ {
        alias /home/$1/public_html$2;
        autoindex on;
    }
}

I also had to make a change to /etc/nginx/snippets/fastcgi-php.conf, to comment out the following line:

#try_files $fastcgi_script_name =404;

After restarting the nginx service (also make sure the php5-fpm service is running), you will be able to serve HTML and PHP files from your ~/public_html directory.
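A quick sanity check, assuming your username is alice and both services are up:

$ mkdir -p ~/public_html
$ echo '<?php phpinfo(); ?>' > ~/public_html/info.php
$ curl http://localhost/~alice/info.php

If everything is wired up correctly, the last command returns the rendered phpinfo() output rather than the raw PHP source.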

I wanted to add a unit conversion plugin to ZOMB and would really have liked to use an existing off-the-shelf API, but since one didn’t seem to exist in a nicely hosted form already, I had to make it :).

The Units API is written in PHP, and is intended to provide an extremely simple, easy-to-use HTTP API for converting between various units of measure. Usage documentation is available on the project’s GitHub page.
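Purely to illustrate the intent (the endpoint and parameter names here are invented for the example; the GitHub documentation defines the real ones), a conversion request and response might look like:

$ curl 'http://localhost/units-api/convert?value=10&from=km&to=mi'
{"value":10,"from":"km","to":"mi","result":6.2137}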

I’m also hosting a publicly usable version, at the following URL, so hopefully next time someone needs this they don’t need to reinvent the wheel (again, refer to documentation linked above for usage):

As an aside, this project served as my first introduction to PHPUnit for PHP unit testing, and CI is once again provided by Drone.io, which has performed admirably. Design-wise, it was another exercise in defining the public-facing API before a line of code was written, which served as an excellent guide and source of documentation as I worked on it (plus, there’s no need to worry about writing documentation when you’re done :D).

[Screenshot: zomb-web]

As mentioned, I’ve resurrected an old idea, and began work on it as a bit of a learning/practice exercise. I think it’s working out rather well.

The primary application itself, hosted on GitHub here, is essentially complete, barring the ability to persist your plugin configuration (pfft, who needs to store things anyway).

Some stuff learned along the way:

API-driven development:

Designing the external-facing API up-front (actually defining and completely documenting the exact request and response data structures, not just “there will be a request that does things and a response that looks something like X”) was a huge help. It let me see how the system would actually be used before writing a single line of code, and made it easy to spot gaps and shortcomings. Once done, the “user documentation” became the same documentation I used to implement the back-end, which made the implementation incredibly easy.

Git:

Still learning, and getting more comfortable with it. IntelliJ IDEA has excellent Git support out of the box, and although it’s painful to use in a Windows shell (it’s basically Bash, inside cmd.exe), I’m getting more used to the Git CLI.

Free/online continuous integration:

Initially, I started off using Travis-CI. This requires you to store a “.travis.yml” file in the root of your Git repository, which I was rather uncomfortable with (I don’t like “external” metadata-type things hanging around in my source repository). As an alternative, I’ve switched to Drone.io, which just “feels” like a nicer solution. It also has additional features, like the ability to store artefacts for download, or to publish artefacts to external services or servers - so you could have successful builds automatically deploy the latest binaries.

Persistence/Storage:

Persistence is hard, so once you start a service up, it should run indefinitely so you never need to write anything to disk. Sigh. Also, this part was not designed at all up-front, and my flailing around trying to get a workable solution is evidence of the need for proper design and planning before jumping in with code.

Aside from that, there are additional projects which were spawned:

zomb-web

The first front-end for ZOMB. A simple single-page HTML UI. Had some good practice remembering how to HTML and JavaScript here…

zomb-plugins

A growing collection of plugins for ZOMB. At present, they’re all PHP (again, refreshing old skills…) and pretty simple. Currently, there’s time (simple current time/time-zone conversion), lastfm (see what someone’s currently listening to, find similar artists), weather (current and forecast conditions for a given city) and currency (simple currency conversion).

Admittedly, none of the above achieves anything a simple web search couldn’t, so next up I’d like to create a CLI client - weather updates in your terminal!

(Re-)Introducing ZOMB, an IRC bot back-end, which I planned and started work on some years ago, then promptly lost interest in after it became vaguely usable.

The general idea of ZOMB (like “zomg”, but it’s a bot, not a god [maybe version 3], and it sounds like “zombie”, which is cool too) is to provide a client-interface-independent bot framework, where bot functionality is implemented by remotely hosted plugin scripts/applications - unlike a traditional bot, where all the code normally runs on one user’s machine or server.

Being interface-independent means a ZOMB client (the thing a user will use to interact with ZOMB) may be an IRC bot, a CLI application, or a web page. Since I’ve been less active on IRC than I’d like lately, the additional options would be useful to me personally, but since almost nobody uses IRC at all any more, ZOMB should hopefully be useful outside of that context.

So how does ZOMB work? From a user’s point of view, it’s exactly like a traditional bot - you issue a query consisting of the plugin you want to execute, the command to call, and that command’s arguments. For example, you’d ask a ZOMB bot:

> weather current johannesburg

Where “weather” is the plugin, “current” is a command provided by the weather plugin, and “johannesburg” is an argument. In response, ZOMB would reply with a short text result, something like this:

> The weather in Johannesburg is currently 22 degrees and partly cloudy

In the background, ZOMB looked at the input, found that it knew of the “weather” plugin, and made an HTTP request to the remote plugin service, passing the command and arguments along. The plugin then did whatever it needed to do to resolve the current weather conditions for Johannesburg, and returned a result, which ZOMB in turn returned to the requesting client.
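To make that flow concrete, the exchange between ZOMB and the weather plugin could be imagined as something like the following (the JSON shape is purely illustrative; the actual plugin API defines the real wire format):

POST /weather HTTP/1.1
Content-Type: application/json

{"command": "current", "args": ["johannesburg"]}

HTTP/1.1 200 OK
Content-Type: application/json

{"result": "The weather in Johannesburg is currently 22 degrees and partly cloudy"}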

As always, a new project provides some practice/learning opportunities:

  • API-driven development: I know what I want ZOMB to be able to do, so I began by defining the client and plugin APIs, around which the rest of the design must fit. I normally write a bunch of code, then stick an API on top of it, but I’m trying it the other way around this time. It seems to be working.
  • Test-driven development: just to keep practicing :-)
  • Git and GitHub: since we’re hopefully going to be using Git at work in the near future, it’s best to get some practice in.
  • Custom Ant and Ivy build scripts: I like Ant and Ivy, and need to practice configuring and maintaining them from scratch.
  • Travis-CI: continuous integration for GitHub projects, since it’s cool to have a green light somewhere showing that stuff’s not broken, and I’ve never used any CI tooling outside of work.
  • More granular commits: committing smaller changes more often. I don’t know if this is a good thing or not, but I’m seeing how it works out.
  • All on Windows: I haven’t really built a proper project on Windows for years :D