Wallpaper Generator

I’ve seen a whole load of wallpapers in this style: a gradient with some bright speckles floating at various positions. They’re pretty easy to create using something like GIMP, but I thought it’d be fun to have an app that could create lots of variations of this instantly.


This app lets you select the colour of the background gradient and set some parameters controlling how the dots are placed, then simply click “Save” and you’ve got a brand new wallpaper!

Download WallpaperGenerator


After experimenting with Rainmeter for some time I got a little bored of playing with ini files (the main configuration files for Rainmeter). This program is a project I’ve started to allow people to build Rainmeter skins in a WYSIWYG environment.

Rainmeter Builder

At the moment this application allows the creation of all Rainmeter Meters and Measures. Not all properties are implemented yet, though, so keep an eye out for future releases.

I’ve seen various threads on forums saying that, due to the extremely flexible nature of Rainmeter, a GUI development environment is not really feasible. To some extent I agree with this and am not sure that a GUI application will ever be able to give users total control over Rainmeter skins. However, this should speed up development of Rainmeter skins dramatically, meaning that users only have to do a small amount of tweaking to the ini files.
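For anyone who hasn’t hand-edited a skin, a minimal Rainmeter ini looks something like this (a CPU measure feeding a string meter; the section names are my own, but the keys follow standard Rainmeter conventions):

```ini
; Minimal skin: one Measure feeding one Meter.
[Rainmeter]
Update=1000

[MeasureCPU]
Measure=CPU

[MeterCPU]
Meter=String
MeasureName=MeasureCPU
Text=CPU: %1%
FontColor=255,255,255
```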

Download RainmeterBuilder


TaskerLight

A while ago I spotted an article on Lady Ada’s site that showed how to create an ambient lighting rig for your PC’s monitor using an Arduino. The tutorial for setting up the hardware was really easy to follow, but I wasn’t too keen on the CPU usage of the Processing application that they provided to capture the screen. So I set about making my own application to do that, and hopefully add a few bonus features. This is the result:

Below you can download an installer for the application or the source code to build it yourself.

Installer            Source

Speech Commands

This C# application listens for your voice; when it hears specific phrases, it sends keyboard shortcuts to the foreground window, provided that window belongs to one of your configured applications.

To use SpeechCommands, first click the “+” button on the left panel and browse for the program that you want to control with your voice. Once you’ve done this, you can select the application in the left-hand list and a new listbox will appear on the right panel. This listbox contains the phrases that SpeechCommands will listen for and the keys it will send when it hears them.

Above is my current setup for SpeechCommands. As you can see, I’ve added VLC and Windows Media Player to the list of voice-controlled applications. If you say “Play Track” to your computer while Windows Media Player’s window is in focus, SpeechCommands will generate a “Control+P” key command, thus playing a music track. The full list of available shortcut keys can be found here; the modifier keys are:

  • Control: ^
  • Shift: +
  • Alt: %

Download SpeechCommands 1.0.1

TaskerUI Library

My latest demo (Tank Attack) has a slightly more substantial in-game menu than any of my previous projects. As I was developing this I was reminded how ugly GUI programming can get! For this reason I set about creating a UI library that could load menus from data files and ease the strain on the code!


The Nerdy Bit!

This project has been written as a C++ static library which loads menus from a “.ini” file and can then render them using either DirectX or OpenGL.

This library does provide a default rendering of each control but, to allow for some eye-candy, the user can also provide textures to be used for the controls. Because of this (and the lack of image-loading functionality built into OpenGL), anyone using the OpenGL renderer must include several DLLs from the OpenIL (DevIL) image-loading suite. These are:

  • DevIL.dll
  • ILU.dll
  • ILUT.dll

You can download them from here, and as long as you place them alongside your executable you should have no issues.

The UI library download includes a compiled HTML help file which should shed some light on how to go about using the menu system. If you require any more pointers, you can download a small demo application which runs a DirectX and an OpenGL window (pictured in the above screenshot), showing off some of the UI widgets. The demo can be downloaded here.

Shader Viewer

This program lets you create, view and edit both GLSL and HLSL shaders easily.


The Nerdy Bit!

ShaderViewer is a reasonably standard C# WinForms application. The rendering window in the top-left is a C# Panel which renders an image using either DirectX or OpenGL (depending on which option is selected). The application makes calls to two C++ DLLs which contain functions to initialise the OpenGL and DirectX states. Within the application there is also a timer which calls the DLL functions to render and update the scene each tick.

The textboxes on the right are actually custom controls inherited from RichTextBox. I did some research prior to starting this project on how syntax highlighting in a textbox could be achieved and found some interesting examples. The way I achieved the results pictured above is as follows: when a character is entered, the control runs through its text and constructs its own RichText containing a customised colour table. Any keywords found have the appropriate index into the colour table added before them (and an “index 0” added after). The overall effect is the highlighting of key GLSL and HLSL syntax, with the remaining text staying a standard black font.


Blobworld

This program is a simulation of neural networks designed to survive within a closed environment. In this case the networks take the form of the brains of creatures (blobs) on a field of grass.


The Nerdy Bit!

Blobworld was originally written in Java as my 3rd year project at University. This version has been rewritten using C# and Windows Forms. Underneath it does some fairly simple number crunching. The GUI uses some custom UserControls to display the field and the average network.

The networks in this program are simple two-level networks with inputs connected directly to outputs. They are implemented as a 2D binary array with 14 columns and 7 rows; the columns represent inputs and the rows outputs. If a value is set to 1/true then the corresponding input and output neurons are connected; if the value is 0/false then they are not.

The inputs of the network allow the blobs to sense the world around them; the outputs give them urges to act in that world. The first 9 inputs allow a blob to sense whether there is grass around it (top left, top middle, top right, middle left and so on). The next four inform the blob as to whether its energy is very low, low, high or very high respectively. The last input lets the blob know if it is sitting on an egg. The first four outputs relate to moving left, right, up or down one square. The remaining outputs relate to eating grass, laying an egg or fertilising an egg.

If an input is fired (e.g. there is grass to the top left of a blob, so the 1st input fires) and that input is connected to an output (e.g. the 1st input is connected to the “eat” output), then the urge to perform that action increases. Once that urge reaches a certain threshold the blob performs the action (in this case, if the blob sees grass to its top left for a few ticks, it will attempt to eat some grass).

The first generation of blobs have completely random brains; that is, there is a 50/50 chance of any input being connected to any output. When a blob lays an egg, half of its genetic code (neural network configuration) is placed inside that egg. The blob that fertilises the egg fills in the other half of the genetic information. Through this process of breeding, blobs with varying network configurations are created. The blobs with better genes (neural networks that cause them to hunt out food more effectively) will survive and multiply, i.e. the species will adapt to live within its environment.

To the left of the grassy field there is a box showing the “average network” of all the currently living blobs. A red square represents a connected input-output pair; a black square represents a disconnected one. At first all of these squares will be dark red, as half the blobs will have the corresponding neurons connected and half will not. As the program runs you should see trends start to arise, for example:

  • Blobs that can hunt out food (grass) better will survive longer. The genes that cause a blob to move in the direction of grass should become stronger – the grass top-left, middle-left and bottom-left inputs should become connected to the move-left output.
  • Laying eggs is a very expensive action in terms of blob energy. Blobs should only lay eggs when their energy is very high.
Comments

  1. Dave
     November 29, 2012 at 2:54 am


    Thanks for the TaskerLight app. I’d like to try it out, but I used a WS2801 LED strip from eBay with an Arduino Duemilanove. I’d like to see if I can modify the Arduino code you used to make it work with TaskerLight, but did not see where you posted it.

    • Reply, November 29, 2012 at 11:01 am

      Hi Dave. If you download the source code for the TaskerLight App, in the zip file you’ll find a directory called “ArduinoTaskerLight”. Within that there is the ArduinoTaskerLight.ino which is the Arduino source code.
      I hope that is what you’re looking for. I’d be very interested to hear what you modify the code to do.

