Wacky Wheels

 

Following my previous experience with Zool, I have started another game project in parallel, a 3D version of the Wacky Wheels game. With Zool, the main challenge was the acquisition of the game assets; for Wacky Wheels, however, it is an entirely different story, as the different formats of the game data files are well documented.

I have therefore been able to get started pretty quickly and within hours I could see some results.


Figure 1 : the first track of the first class, with a naive implementation of the background


Figure 2 : the second track of the first class


Figure 3 : the fourth track of the first class


Figure 4 : correct implementation of the background (preliminary)

From the lessons I’ve learned while working on Zool (and the time I spent on it), I have decided to delegate some of the work on this project for the following reason : it is an unrealistic task as a one-man job, at least within a realistic time frame. I could ultimately carry the project from start to finish on my own, but how long would it take ? Hence my decision to split up the job.

I believe this is the best decision for the project to be completed within a reasonable time frame. There is one part of the project that I will finish first, however : the low-level plumbing. I think this is essential for making the project attractive to potential contributors.

Obviously there is still a lot left to be implemented before it looks like the original game. But in my opinion, there is less work to be done than on Zool, as I will reuse some of the components I have developed for it and this game involves less interaction.

 

Zool


 

Table of contents

Introduction
Considerations
Approach
Implementation
Examples
Future

 

Introduction

I have started working on a remake of the Zool game; if you don’t know it, you can see some footage here. Before starting to code the game itself, I decided to do some investigation to assess the complexity of such a project, and a couple of issues emerged from its results. To address them, I decided to craft a dedicated piece of software, which I am going to present now.

Considerations

Since this project is a remake, I had to get hold of the original assets of the game. Considering the original platform of the game, the Amiga, there were a couple of hurdles in accessing them. Besides the characteristics of the platform, such as the usage of planar graphics, there was the fact that this platform used custom file systems and crunchers. In addition to these issues, there was also the topic of understanding the game’s internals and mechanics.
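
To give an idea of what dealing with planar graphics involves, here is a minimal sketch of rebuilding chunky pixels (one palette index per pixel) from Amiga-style bitplanes; the buffer layout is a simplification and the actual game files add their own packing on top of it.

    // Minimal planar-to-chunky conversion: rebuilds one palette index per pixel
    // from Amiga-style bitplanes. Assumes whole, contiguous planes of (width / 8)
    // bytes per row; real game files add crunching and custom layouts on top.
    public static class Planar
    {
        public static byte[] ToChunky(byte[] planes, int width, int height, int planeCount)
        {
            var chunky = new byte[width * height];
            int bytesPerRow = width / 8;
            int planeSize = bytesPerRow * height;

            for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
            {
                int index = 0;
                for (int p = 0; p < planeCount; p++)
                {
                    int offset = p * planeSize + y * bytesPerRow + (x >> 3);
                    int bit = (planes[offset] >> (7 - (x & 7))) & 1;
                    index |= bit << p; // plane p contributes bit p of the palette index
                }
                chunky[y * width + x] = (byte)index;
            }
            return chunky;
        }
    }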

Approach

I have set up an environment consisting of a virtual machine running an emulated Amiga along with some tools. The task of this environment was to retrieve the assets and decrunch them. Once that was done, the next step was to reassemble these raw assets into the form they had before being packed, for instance graphics meant to be sprites with animations.

This is where the software I have mentioned plays a key role, as it greatly simplifies this process. In addition to addressing these aspects, there was another to consider : the fact that these assets were going to be used in a modern game framework. The software addresses that point by re-packing them into modern containers.

Note : an aspect of the reassembling step that I haven’t covered in this section is the fact that ‘parameters’ must be retrieved from the game in order for this step to be successful; I will cover this in the next section.

Implementation

The software has been developed using WPF and makes use of features such as data-binding and commands. Among the challenges encountered were the definition of the different types of assets along with their parameters, their final form and their on-screen representation. Below you can see diagrams showing how these parts are represented in the system.


Figure 5 : the final form of a raw asset that has been processed and will be further exported


Figure 6 : different types of assets along with their parameters


Figure 7 : presenters which are responsible for previewing the assets in the UI

The parameters needed for each type of asset have all been reverse-engineered from the game, either by debugging the in-game memory or by deciphering the different file formats.
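
Purely as an illustration of what these parameters look like, a sprite-set entry could be modeled along the following lines; the type and property names are hypothetical and do not reflect the actual classes shown in the diagrams above.

    // Hypothetical model of an asset definition and its reverse-engineered parameters;
    // the real types used by the software may differ.
    public abstract class AssetDefinition
    {
        public string Name { get; set; }        // identifier within the catalog
        public string SourceFile { get; set; }  // decrunched file the asset comes from
    }

    public sealed class SpriteSetDefinition : AssetDefinition
    {
        public int Width { get; set; }          // sprite width in pixels
        public int Height { get; set; }         // sprite height in pixels
        public int PlaneCount { get; set; }     // number of bitplanes (color depth)
        public int FrameCount { get; set; }     // number of animation frames
        public string PaletteName { get; set; } // palette retrieved from a memory dump
    }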

As the software is now mostly finished, I have started packing all these raw resources to form a ‘catalog’ that I will use when coding the game. Obviously they will be in a format that is easier to manipulate, as this was the primary goal.

Examples

Below are some screenshots of the different types of assets currently decoded by the software.


Figure 1 : a level which needs an associated tile-set and palette


Figure 2 : a tile-set, in fact packed in a well-known container (ILBM)


Figure 3 : a set of sprites, packed in a specific format


Figure 4 : a palette, a key element mostly retrieved from memory dumps

Future

Right now the software fulfils its role. There are still a few things that need to be implemented, such as higher-level objects like characters and interactions, in order to really represent the whole content of the game.

When this is accomplished, I will complete the cataloging of all the game’s assets and everything related to the in-game experience, as suggested above. Finally, I will start coding the game itself and, once it is mature enough, the project will go public and be open-sourced.

Direct2D canvas for SharpDX

Introduction

I have contributed the following feature to SharpDX : a cached Direct2D surface.

My contribution started when I asked for help with a Direct2D issue and the SharpDX team was of great help; the discussion ended up being about how repetitive calls quickly affect performance, and a caching system was suggested.

I decided to adopt the same philosophy as the Toolkit : wrapping native types into simpler ones that are easier to use. I started something with ideas taken from my experience with WPF.

Example

Here are 3 canvases (background, static/dynamic texts) along with some 3D content:

(project at https://github.com/aybe/SharpDX.Toolkit.Direct2D.MiniDashboard)

For formatted text, drawing comes down to a single call.

Thanks to the Named and Optional Arguments features of C#, we get a terse syntax while still providing customization when needed; this is approximately the usage I was expecting when drawing such content.
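
As an illustration of the kind of call this enables, here is a sketch relying on named and optional arguments; the type and member names are placeholders, not the actual canvas API.

    using System;

    // Hypothetical sketch: optional parameters provide defaults, named arguments keep calls terse.
    public class TextCanvasSketch
    {
        public void DrawText(string text,
            float x = 0, float y = 0,
            string fontFamily = "Segoe UI",
            float fontSize = 12.0f,
            string color = "White")
        {
            // The real component would forward to DirectWrite/Direct2D,
            // reusing cached TextFormat and Brush objects.
            Console.WriteLine($"'{text}' at ({x},{y}) in {fontFamily} {fontSize}pt, {color}");
        }
    }

    // Usage: only the parameters of interest are named, the rest falls back to defaults.
    // new TextCanvasSketch().DrawText("FPS: 60", x: 8, y: 8, fontSize: 14.0f);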

The other feature is the caching of content; it is currently quite primitive but efficient nonetheless. The user pushes and pulls objects onto canvases, and by using multiple canvas instances with a minimum of discipline regarding the placement of objects, one can render thousands of objects each frame without a performance penalty.

Status

About 30% of the methods in the DeviceContext and RenderTarget classes are implemented. I am quite confident about implementing the remaining bits, as the text-related functions were the trickiest and they went pretty well in the end.

I have finished the initial work on this feature and committed it to my fork; I am waiting for the team’s review of the whole thing before continuing my work.

Waveform

Introduction

One of the things WPF lacks is audio-related controls. I have had to craft some of them, and I will present a component that renders a waveform and a little more. It features sample-level accuracy, a theme-able interface and ‘providers’ which analyze audio and return information of interest such as sound features.

Showcase

Figure 1 : provider that detects onsets and colors them according to their frequency band

Figure 2 : provider that detects beats

Figure 3 : provider that colors audio content like Scratch Live or rekordbox

Figure 4 : provider that renders feature vectors returned by an EchoNest online analysis

Figure 5 : same provider but with a Direct3D renderer and a custom shader

How it works

The rendering process components are laid out below:

AudioStream -> Waveform -> WaveformRenderer

– AudioStream reads and converts audio samples
– Waveform builds the peak data and cache
– WaveformRenderer is an abstract renderer

Finally, providers are plugged into a renderer as needed.
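
Below is a simplified sketch of how these pieces relate; the member names are chosen for illustration and the actual classes expose considerably more.

    using System.Collections.Generic;

    public abstract class AudioStream
    {
        // Reads up to count samples as 32-bit floats, converting from the source format.
        public abstract int Read(float[] buffer, int count);
    }

    public class Waveform
    {
        private readonly AudioStream _stream;
        public Waveform(AudioStream stream) { _stream = stream; }

        // Builds (and would cache) min/max peak pairs, one pair per block of samples.
        public float[] BuildPeaks(int samplesPerPeak)
        {
            var buffer = new float[samplesPerPeak];
            var peaks = new List<float>();
            int read;
            while ((read = _stream.Read(buffer, buffer.Length)) > 0)
            {
                float min = float.MaxValue, max = float.MinValue;
                for (int i = 0; i < read; i++)
                {
                    if (buffer[i] < min) min = buffer[i];
                    if (buffer[i] > max) max = buffer[i];
                }
                peaks.Add(min);
                peaks.Add(max);
            }
            return peaks.ToArray();
        }
    }

    public interface IWaveformProvider { }

    public abstract class WaveformRenderer
    {
        // Providers contribute extra information (onsets, beats, colors) on top of the peaks.
        public IList<IWaveformProvider> Providers { get; } = new List<IWaveformProvider>();

        public abstract void Render(float[] peaks);
    }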

Status

The version on GitHub is old; I have yet to upload the latest version featuring providers.

SharpMix


Table of contents

Introduction
Features
Description
Gallery
Project status
Future
Conclusion

 

Introduction

This is an overview of the longest and most complex project I have worked on so far, a digital mixing solution for DJs written in C#. I will go through some aspects of the software’s development, the suspension of the project and the conclusions I drew from this experience.

Features

  • mixing of digital music files
  • support of turntables using a time-code
  • support of MIDI control surfaces
  • analysis and extraction of audio features
  • management of a collection of digital music files

Description

The software allows the user to mix digital files with a ‘traditional’ approach by using turntables. The system works by using a ‘time-code’ CD, a special kind of Audio CD that contains a sequence of numeric codes along with an error-correcting code for robustness. Its role is to synchronize the playback of digital music with the user’s interaction with a turntable. In addition to turntables, the software supports MIDI-enabled DJ control surfaces, both for driving the performance and for navigating the user interface.
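
To give a rough idea of the tracking problem (this is only an illustration, not the actual algorithm used by the software), a decoded time-code reading can be reduced to a position, from which a playback rate is derived and smoothed so that isolated read errors do not produce audible glitches:

    using System;

    // Illustrative smoothing of the playback rate derived from time-code readings;
    // the real tracking and smoothing system is more elaborate (see Figure 6 below).
    public sealed class TimecodeTracker
    {
        private double _rate = 1.0;            // smoothed playback rate (1.0 = normal speed)
        private double _lastPosition;          // last decoded position, in seconds
        private const double Smoothing = 0.2;  // 0..1, lower = smoother but less responsive

        public double Update(double decodedPosition, double elapsedSeconds)
        {
            // The raw rate is the slope of the decoded position over elapsed time.
            double rawRate = (decodedPosition - _lastPosition) / elapsedSeconds;
            _lastPosition = decodedPosition;

            // Discard obviously corrupt readings (e.g. burst errors) and keep the previous rate.
            if (double.IsNaN(rawRate) || Math.Abs(rawRate) > 8.0)
                return _rate;

            // Exponential smoothing towards the raw estimate.
            _rate += Smoothing * (rawRate - _rate);
            return _rate;
        }
    }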

Another major feature is the analysis and extraction of audio features from digital music files, such as the tempo, the key chord and audio segments. These audio properties are useful to a DJ for organizing a collection of digital music files. Tagging music with keywords pertaining to the field of music composition allows for more creativity and the use of the technique known as harmonic mixing.

Here you can see a video and a few screenshots of some parts and components that have been developed over time. Some of them are considered to be in a mature state; others are still in a preliminary state.

Figure 1 : a video showing the responsiveness of the system while tracking the time-code played by a CDJ-1000


Figure 2 : last version of the software using the WPF framework


Figure 3 : the presentation page of an artist, content is fetched from EchoNest


Figure 4 : an implementation of a high-quality colored bitmap font


Figure 5 : a preliminary implementation of a tag cloud


Figure 6 : the output of the time-code tracking smoothing system against burst errors


Figure 7 : first version of the software that is seen on the video

Project status

After 18 months of development I decided to put the project on hold, mainly because of the lack of sufficient resources to react to market changes in a realistic time frame. If you look at the credits of the Traktor software you will see that nearly 70 people have participated in its development; obviously I can hardly compete with such a workforce given the resources I have invested and the time frame I had initially envisioned.

Between its inception and its suspension there have been a lot of novelties in the fields of technology and computing; among them, the appearance of touch-enabled devices, the advent of DJ control surfaces providing a MIDI time-code and, finally, newer patterns in the software development world.

And since I was aiming for A+ grade software, I evidently could not ignore these newer technologies and leave them out of the project. The project, however, is not abandoned; it is just suspended. I am currently elaborating and reviewing a strategy for the software to resurface later, though with a different model.

Future

The software is probably going to resurface within the next year, in a slimmed-down, refined and more modern version. You can expect it to be touch-enabled, to have a fully-featured offline mode (previously it had to rely on online services for the analysis of music), to be free, and probably to provide cloud-enabled features.

Conclusion

While I will readily admit that, from a commercial point of view, the software has been a failure, it was nevertheless a positive experience through which I learned many things about programming, signal processing and interface design. Even if its development time stretched to the point of it getting suspended, I still consider it an achievement in some ways. I’ll conclude by saying that the fact this project will resurface in the future somewhat mitigates that point.

Aubio


aubio is a C library for the detection of audio features such as pitch, beats, tempo and onsets. I am still working on making it available from .NET and will publish the sources along with a NuGet package when it’s done.
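
The core of such a binding boils down to a handful of P/Invoke declarations. Here is a minimal sketch, assuming a native build exposed as aubio.dll and following the entry points of aubio’s documented C API (tempo detection only; the declarations should be verified against the targeted aubio version).

    using System;
    using System.Runtime.InteropServices;

    // P/Invoke declarations for a small subset of aubio's C API (tempo detection).
    // The library name "aubio.dll" is an assumption about how the native build is deployed.
    internal static class AubioNative
    {
        [DllImport("aubio.dll")] public static extern IntPtr new_fvec(uint length);
        [DllImport("aubio.dll")] public static extern void del_fvec(IntPtr vec);
        [DllImport("aubio.dll")] public static extern void fvec_set_sample(IntPtr vec, float value, uint position);

        [DllImport("aubio.dll")] public static extern IntPtr new_aubio_tempo(string method, uint bufSize, uint hopSize, uint sampleRate);
        [DllImport("aubio.dll")] public static extern void aubio_tempo_do(IntPtr tempo, IntPtr input, IntPtr output);
        [DllImport("aubio.dll")] public static extern float aubio_tempo_get_bpm(IntPtr tempo);
        [DllImport("aubio.dll")] public static extern void del_aubio_tempo(IntPtr tempo);
    }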

LibKeyFinder


LibKeyFinder is a musical key chord detection library with high accuracy. Its performance is on par with top commercial solutions such as MixedInKey or Tonart.

I contributed the following additions to the project:

– an extension of the initial C++ library so it can be used from any programming language
– a first .NET assembly for using the library from C#
– a second .NET assembly that simplifies the usage of it
– NuGet packages: LibKeyFinderDotNet and LibKeyFinderDotNet.BASS

I also plan to make features of the KeyFinder application available from these assemblies.

SonicApi


SonicApi is a web service that extracts audio features such as pitch, tempo and key chord. The service also provides processing such as pitch and tempo correction and the addition of reverberation effects. It is the web service version of zplane’s high-quality components, which are used by many products and companies in the field such as Ableton, Steinberg, Korg or Native Instruments.

I have written a library for accessing the online services from the .NET platform; it uses the asynchronous version of the services. I’ve notified them about it and they decided to make it part of the official API; currently I am waiting for their review and instructions, as they would like it to match the Objective-C API they have developed internally.
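
As an illustration of the asynchronous usage, an analysis request roughly comes down to a single awaited HTTP call; the endpoint and parameter names below are assumptions made for the example and may not match the official specification.

    using System.Net.Http;
    using System.Threading.Tasks;

    // Illustrative asynchronous call to a sonicAPI-style tempo analysis endpoint.
    // The URL and query parameters are placeholders, not the official API.
    public static class SonicApiExample
    {
        public static async Task<string> AnalyzeTempoAsync(string accessId, string fileUri)
        {
            using (var client = new HttpClient())
            {
                var url = "https://api.sonicapi.com/analyze/tempo" +
                          "?access_id=" + accessId +
                          "&input_file=" + fileUri +
                          "&format=json";
                // The analysis result is returned once the service has processed the file.
                return await client.GetStringAsync(url);
            }
        }
    }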

Downloads

library source code
sample project
NuGet package

ENMFPdotNet


The EchoNest is a company that provides services for the analysis and extraction of musical features; some of their clients are MTV and the BBC. It has recently been acquired by Spotify.

I developed a .NET version of the API; currently I am reviewing it and documenting it before publishing it. There is a component of the API that I have already published : ENMFPdotNet. It is the component that fingerprints songs in order to query their identification service. A NuGet package is also available here.

Since it is not the original utility, you will have to roll your own decoding process to 32-bit 22 kHz monophonic PCM, as this is the format that Codegen expects. I have put together a small example of how to achieve that using BASS.NET : basically you submit a file name and specify the desired sample rate and number of channels, and you get your audio data converted to that format.
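
Here is a condensed sketch of that decoding step, using BASS.NET with the BASSmix add-on to obtain 22 kHz mono float data; error handling is omitted and the exact flags may need adjusting to your setup.

    using System;
    using System.Collections.Generic;
    using Un4seen.Bass;
    using Un4seen.Bass.AddOn.Mix;

    // Decodes a file to 32-bit float, 22050 Hz, mono PCM (the format Codegen expects).
    public static class CodegenDecoder
    {
        public static float[] Decode(string fileName)
        {
            Bass.BASS_Init(0, 44100, BASSInit.BASS_DEVICE_DEFAULT, IntPtr.Zero); // "no sound" device, decoding only

            int source = Bass.BASS_StreamCreateFile(fileName, 0, 0,
                BASSFlag.BASS_STREAM_DECODE | BASSFlag.BASS_SAMPLE_FLOAT);

            // The mixer performs the resampling and the down-mix to a single channel.
            int mixer = BassMix.BASS_Mixer_StreamCreate(22050, 1,
                BASSFlag.BASS_STREAM_DECODE | BASSFlag.BASS_SAMPLE_FLOAT | BASSFlag.BASS_MIXER_END);
            BassMix.BASS_Mixer_StreamAddChannel(mixer, source, BASSFlag.BASS_MIXER_DOWNMIX);

            var samples = new List<float>();
            var buffer = new float[22050]; // roughly one second per read
            while (true)
            {
                int bytes = Bass.BASS_ChannelGetData(mixer, buffer, buffer.Length * 4);
                if (bytes <= 0) break;
                for (int i = 0; i < bytes / 4; i++) samples.Add(buffer[i]);
            }

            Bass.BASS_StreamFree(mixer);
            Bass.BASS_StreamFree(source);
            return samples.ToArray();
        }
    }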

Also, there is a workaround explained in the README on how to use the library in a Windows Store application.

If you prefer to use the original command-line utility (codegen), it is available here.