Zool


 

Table of contents

Introduction
Considerations
Approach
Implementation
Examples
Future

 

Introduction

I have started working on a remake of the Zool game; if you don't know it, you can see some footage here. Before starting to code the game itself, I decided to do some investigation to assess the complexity of such a project, and a couple of challenges emerged from it. To address them, I decided to craft a specific piece of software, which I am going to present to you now.

Considerations

Since this project is a remake, I had to get hold of the original assets of the game. Considering the game's original platform, the Amiga, there were a couple of hurdles in accessing them. Besides the characteristics of the platform itself, such as the use of planar graphics, the platform relied on custom file systems and crunchers. On top of these issues, there was also the matter of understanding the game's internals and mechanics.

Approach

I have set up an environment consisting of a virtual machine running an Amiga along with some tools. The task of this environment was to retrieve the assets and decrunch them. Once that was done, the next step was to reassemble these raw assets into their original form, before they got packed: for instance, graphics meant to be sprites with animations.
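
To give an idea of what that reassembly involves at the lowest level, here is a minimal sketch that converts Amiga planar bitplane data into chunky (one byte per pixel) palette indices. It assumes interleaved bitplane rows and a width that is a multiple of 8; the actual layout and parameters depend on each asset and on how the tool handles it.

using System;

public static class Planar
{
    // Converts interleaved Amiga bitplane data into chunky 8-bit palette indices.
    // Assumes rows are stored as: plane 0 row, plane 1 row, ..., plane N row, next line.
    public static byte[] ToChunky(byte[] planar, int width, int height, int planes)
    {
        int bytesPerRow = width / 8;                 // width assumed to be a multiple of 8
        var chunky = new byte[width * height];

        for (int y = 0; y < height; y++)
        {
            for (int p = 0; p < planes; p++)
            {
                int rowOffset = (y * planes + p) * bytesPerRow;

                for (int x = 0; x < width; x++)
                {
                    int bit = (planar[rowOffset + x / 8] >> (7 - x % 8)) & 1;
                    chunky[y * width + x] |= (byte)(bit << p);
                }
            }
        }

        return chunky;
    }
}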

This is where the software I have mentioned plays a key role, as it greatly simplifies this process. In addition to addressing these aspects, there was another one to consider: the fact that these assets were going to be used in a modern game framework. The software addresses that point by re-packing them into modern containers.
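
As one possible illustration of such re-packing (the actual containers used by the tool are not detailed here), a minimal sketch that writes indexed pixels and their palette to a PNG using WPF's imaging API:

using System.Collections.Generic;
using System.IO;
using System.Windows.Media;
using System.Windows.Media.Imaging;

public static class Export
{
    // Writes 8-bit indexed pixels plus their palette to a PNG file.
    // This is only one example of a 'modern container'; the tool may use others.
    public static void SavePng(byte[] pixels, int width, int height, IList<Color> colors, string path)
    {
        var palette = new BitmapPalette(colors);
        var source = BitmapSource.Create(
            width, height, 96.0, 96.0, PixelFormats.Indexed8, palette, pixels, width);

        var encoder = new PngBitmapEncoder();
        encoder.Frames.Add(BitmapFrame.Create(source));

        using (var stream = File.Create(path))
        {
            encoder.Save(stream);
        }
    }
}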

Note: an aspect that I haven't covered regarding the reassembling step in this section is the fact that 'parameters' must be retrieved from the game for this step to succeed; I will cover this aspect in the next section.

Implementation

The software has been developed using WPF and uses features such as data-binding and commands. Among the challenges encountered was the definition of the different types of assets along with their parameters, their final form and their representation on-screen. Below are diagrams showing how these parts are represented in the system.


Figure 5: the final form of a raw asset that has been processed and will be further exported


Figure 6: the different types of assets along with their parameters


Figure 7: presenters which are responsible for previewing the assets in the UI
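
To make the presenter idea concrete, here is a hypothetical sketch of such a contract; the actual types behind Figure 7 are likely shaped differently.

using System.Windows.Media;

// Hypothetical presenter contract, for illustration only.
// Each asset type gets a presenter that turns a processed asset into
// something WPF can display, e.g. an ImageSource bound to an Image control.
public interface IAssetPresenter
{
    // Does this presenter handle the given asset?
    bool CanPresent(object asset);

    // Builds a preview of the asset for the UI.
    ImageSource Present(object asset);
}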

Regarding the parameters needed for each type of asset, they have all been reverse-engineered from the game, either by debugging the in-game memory or by deciphering the different file formats.
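
To give an idea of what such parameters might look like, here is a hypothetical description of a sprite set; the field names are illustrative only and do not reflect the actual data reverse-engineered from the game.

// Hypothetical parameters describing a raw sprite-set asset; the actual
// fields differ per asset type and come from reverse-engineering the game.
public sealed class SpriteSetParameters
{
    public int FrameWidth { get; set; }     // width of a single frame, in pixels
    public int FrameHeight { get; set; }    // height of a single frame, in pixels
    public int FrameCount { get; set; }     // number of animation frames
    public int BitPlanes { get; set; }      // color depth of the source planar data
    public int PaletteOffset { get; set; }  // where the palette sits in a memory dump
}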

As the software is now mostly finished, I have started packing all these raw resources to form a 'catalog' that I will use when coding the game. Obviously, they will be in a format that is easier to manipulate, as this was the primary goal.

Examples

Below are some screenshots of the different types of assets currently decoded by the software.


Figure 1: a level, which needs an associated tile-set and palette


Figure 2: a tile-set, actually packed in a well-known container (ILBM)


Figure 3: a set of sprites, packed in a specific format


Figure 4: a palette, a key element mostly retrieved from memory dumps

Future

Right now the software fulfils its role. There are still a few things that need to be implemented, such as higher-level objects like characters and interactions, to really represent the whole content of the game.

When this is accomplished, I will complete the cataloging of all the game's assets and everything related to the in-game experience, as suggested above. Finally, I will start coding the game itself, and once it is mature enough the project will go public and be open-sourced.

Direct2D canvas for SharpDX

Introduction

I have contributed the following feature to SharpDX: a cached Direct2D surface.

My contribution started when I asked for help with an issue I had with Direct2D, and the SharpDX team was of great help; the discussion ended up covering how repetitive calls quickly affect performance, and a caching system was suggested.

I decided to adopt the same philosophy as the Toolkit: wrap native types into simpler ones that are easier to use. I started something along those lines, with some ideas taken from my experience with WPF.

Example

Here are 3 canvases (background, static and dynamic text) along with some 3D content:

(project at https://github.com/aybe/SharpDX.Toolkit.Direct2D.MiniDashboard)

The formatted text is drawn using the following kind of syntax:
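
Here is a minimal sketch of what such a call can look like, assuming a hypothetical Direct2DCanvas wrapper and DrawText extension; the actual API in the contribution may differ.

using SharpDX;
using SharpDX.Direct2D1;

// Hypothetical wrapper standing in for the cached canvas; illustration only.
public class Direct2DCanvas
{
    public DeviceContext DeviceContext { get; set; }
}

public static class CanvasTextExtensions
{
    // Optional parameters keep the call site terse while still allowing customization.
    public static void DrawText(this Direct2DCanvas canvas, string text,
        float x = 0.0f, float y = 0.0f,
        string fontFamily = "Segoe UI",
        float fontSize = 12.0f,
        Color4? foreground = null)
    {
        // A real implementation would create (or reuse) a TextFormat and a
        // SolidColorBrush, then forward to the underlying Direct2D DrawText call.
    }
}

// Usage, with named arguments for the values that differ from the defaults:
// canvas.DrawText("FPS: 60", x: 8.0f, y: 8.0f, fontSize: 14.0f, foreground: new Color4(1f, 1f, 1f, 1f));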

Thanks to the Named and Optional Arguments features of C#, we get a terse syntax while still providing customization when needed; this is approximately the usage I was expecting when drawing such content.

The other feature is the caching of content; it is currently quite primitive but efficient nonetheless. The user pushes and pulls objects onto canvases, and by using multiple instances of them, with a minimum of discipline regarding the placement of objects, one can render thousands of objects each frame without a performance penalty.

Status

About 30% of the methods in the DeviceContext and RenderTarget classes are implemented. I am quite confident about implementing the remaining bits, as the text-related functions were the trickiest and they went pretty well in the end.

I have finished the initial work on this feature and committed it to my fork; I am waiting for the team's review of the whole thing before continuing my work.

Waveform

Introduction

One of the things WPF misses is probably audio-related controls. I've had to craft some of them myself, and I will present a component that renders a waveform and a little more. It features sample-level accuracy, a theme-able interface and 'providers', which analyze audio and return information of interest such as sound features.

Showcase

Figure 1: provider that detects onsets and colors them according to their frequency band

Figure 2: provider that detects beats

Figure 3: provider that colors audio content like Scratch Live or rekordbox

Figure 4: provider that renders feature vectors returned by an EchoNest online analysis

Figure 5: the same provider but with a Direct3D renderer and a custom shader

How it works

The rendering process components are laid out below:

AudioStream -> Waveform -> WaveformRenderer

– AudioStream reads and converts audio samples
– Waveform builds the peak data and cache
– WaveformRenderer is an abstract renderer

Finally, providers are plugged into a renderer as needed.
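
As an illustration of the peak-building step performed by Waveform, here is a minimal sketch that reduces mono samples to one min/max pair per pixel column, so the renderer only has to draw a vertical line per column; the actual Waveform class also caches this data, which is omitted here.

using System;

public static class PeakBuilder
{
    // Reduces mono samples to one (min, max) pair per pixel column so that a
    // renderer can draw a single vertical line per column instead of every sample.
    public static (float Min, float Max)[] Build(float[] samples, int columns)
    {
        var peaks = new (float Min, float Max)[columns];
        int samplesPerColumn = Math.Max(1, samples.Length / columns);

        for (int c = 0; c < columns; c++)
        {
            int start = c * samplesPerColumn;
            int end = Math.Min(start + samplesPerColumn, samples.Length);
            float min = 0.0f, max = 0.0f;

            for (int i = start; i < end; i++)
            {
                if (samples[i] < min) min = samples[i];
                if (samples[i] > max) max = samples[i];
            }

            peaks[c] = (min, max);
        }

        return peaks;
    }
}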

Status

The version on GitHub is old; I have yet to upload the latest version featuring the providers.

GLA

Update

GLA was a library that mimicked XNA using OpenGL. It was a good experience, as I learned version 3 of the OpenGL API while coding it. However, I have stopped working on it, as there are far more advanced alternatives such as MonoGame. Among the things implemented were vertex buffer objects, vertex array objects and effects for drawing 2D primitives and textures.

The source code is still available here.

Below is the original content of this post at the time it was published.


GLA is a library that attempts to mimic the XNA framework but using OpenGL (OpenTK).

Currently, VBOs, VAOs and some effects are implemented, such as those used for drawing 2D geometry and textures.

Gallery

The drawing of 2D primitives:

Code used for it:
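
As a rough sketch of that kind of 2D primitive drawing, written against the XNA types that GLA mirrors (GLA's actual API on top of OpenTK may differ):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// XNA-style sketch for illustration; GLA mimics this API on top of OpenTK.
public static class PrimitiveSample
{
    public static void DrawTriangle(GraphicsDevice device, BasicEffect effect)
    {
        var vertices = new[]
        {
            new VertexPositionColor(new Vector3(-0.5f, -0.5f, 0.0f), Color.Red),
            new VertexPositionColor(new Vector3( 0.0f,  0.5f, 0.0f), Color.Green),
            new VertexPositionColor(new Vector3( 0.5f, -0.5f, 0.0f), Color.Blue)
        };

        effect.VertexColorEnabled = true;   // use the per-vertex colors above

        foreach (var pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 1);
        }
    }
}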

 

The drawing of textures: