Thursday, 28 February 2008
This is like a "Eureka moment" for me right now; I have just finished writing my first sketch processing algorithm for the DS app.
The current system uses a basic feature point identification method, similar to the initial stages of Stroke Approximation.
I haven't timed the process yet, but the output seems as close to instant as it could be, which is good news. Woo!
Here's a screenshot of it processing a star I drew. The black squares are the identified feature points.
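For the curious, the identification step boils down to something like the following. This is a rough C# sketch of a mean cut-off over per-point curvature values, not the actual DS code; the run-collapsing detail is my own simplification.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class FeaturePoints
{
    // Given one curvature value per stroke point, mark points whose curvature
    // exceeds the mean as candidates, then keep only the local maximum of each
    // consecutive run of candidates, so each corner yields one feature point.
    public static List<int> Identify(float[] curvature)
    {
        float mean = curvature.Average();          // the mean cut-off
        var features = new List<int>();
        int i = 0;
        while (i < curvature.Length)
        {
            if (curvature[i] <= mean) { i++; continue; }
            int best = i;                          // scan one above-mean run
            while (i < curvature.Length && curvature[i] > mean)
            {
                if (curvature[i] > curvature[best]) best = i;
                i++;
            }
            features.Add(best);                    // one feature point per run
        }
        return features;
    }
}
```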
Saturday, 23 February 2008
Sketch Processing Progress
After a week of doing very little work on my sketch processing project, I decided to get some more coding done. It is now 6:25am, a long way past my intended bedtime, but progress is good. I started by fixing a few bugs from my last coding session and quickly got back into the swing of the code. I then started to put together some code to generate a set of curvature data from the initial stroke information.
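In rough terms, the curvature generation works like this; a simplified C# sketch (the real code may differ in how it smooths and normalises the values):

```csharp
using System;

static class Curvature
{
    // Estimate curvature at each interior point as the absolute change in
    // direction (in radians) between the incoming and outgoing segments.
    // Endpoints are given zero curvature. Purely illustrative.
    public static float[] FromStroke(float[] xs, float[] ys)
    {
        int n = xs.Length;
        var k = new float[n];
        for (int i = 1; i < n - 1; i++)
        {
            double aIn  = Math.Atan2(ys[i] - ys[i - 1], xs[i] - xs[i - 1]);
            double aOut = Math.Atan2(ys[i + 1] - ys[i], xs[i + 1] - xs[i]);
            double d = aOut - aIn;
            // Wrap the angle difference into (-pi, pi].
            while (d >  Math.PI) d -= 2 * Math.PI;
            while (d <= -Math.PI) d += 2 * Math.PI;
            k[i] = (float)Math.Abs(d);
        }
        return k;
    }
}
```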
In comparison to XNA and C#, the biggest problem with programming on the Nintendo DS is that there is almost no help from the IDE: every error takes about ten times longer to debug, and actually writing the code is much more tedious. My point is that with something like C#, all the functionality is at your fingertips, allowing you to quickly print to the screen or output information to a console window or a file. The lack of all this on the DS makes debugging, or writing new and specifically maths-intensive code, all the more difficult.
So I had put together this basic code, but without some tests I wasn't confident it was producing the correct output data, and without break-points or any decent interface, I was working blind.
So instead of continuing as I was, I decided to create a simple C# app that loaded some pre-written stroke data from a text file (created by copying values from the screen of the DS app) and processed the data with almost the same code that was running on the DS. This way I could visualise, debug and check the code effectively, and reflect any changes in the DS code afterwards.
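The loading side is trivial; assuming one "x y" pair per line in the text file (a hypothetical format — the real file layout may differ), it's little more than:

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

static class StrokeLoader
{
    // Read one "x y" pair per line into parallel coordinate lists,
    // skipping blank or malformed lines.
    public static (List<float> Xs, List<float> Ys) Load(string path)
    {
        var xs = new List<float>();
        var ys = new List<float>();
        foreach (string line in File.ReadAllLines(path))
        {
            string[] parts = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
            if (parts.Length < 2) continue;
            xs.Add(float.Parse(parts[0], CultureInfo.InvariantCulture));
            ys.Add(float.Parse(parts[1], CultureInfo.InvariantCulture));
        }
        return (xs, ys);
    }
}
```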
I basically ended up writing a nice little app that drew a representation of how the stroke would appear on the DS screen, a dynamically sized curvature graph of the data with a mean cut-off line, the raw data values, the identified feature points, and labelled, zoomed views of the identified corners.
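The graph drawing itself is nothing clever; a bare-bones System.Drawing version of the idea looks something like this (sizes, colours and output format are placeholders, not what my app actually uses):

```csharp
using System.Drawing;   // requires a reference to System.Drawing
using System.Linq;

static class CurvatureGraph
{
    // Plot curvature values as a line graph with a horizontal mean cut-off
    // line, and save the result as an image. Assumes at least two samples.
    public static void Save(float[] k, string path, int w = 512, int h = 128)
    {
        float max = k.Max() <= 0f ? 1f : k.Max();
        float mean = k.Average();
        using var bmp = new Bitmap(w, h);
        using var g = Graphics.FromImage(bmp);
        g.Clear(Color.White);
        float Y(float v) => h - 1 - (v / max) * (h - 1);    // value -> pixel row
        float X(int i) => i * (w - 1f) / (k.Length - 1);    // index -> pixel column
        for (int i = 1; i < k.Length; i++)
            g.DrawLine(Pens.Black, X(i - 1), Y(k[i - 1]), X(i), Y(k[i]));
        g.DrawLine(Pens.Red, 0, Y(mean), w - 1, Y(mean));   // the mean cut-off line
        bmp.Save(path);
    }
}
```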
All in all, the tool is quite useful, and I will be using it to analyse data taken directly from the DS app, visualising the information as image files, graphs and much more. All of these can then be used in my report as evidence of the data and processing.
I also updated some of the GL rendering code on the DS app, but still need to fix some of the timer code.
So overall, quite good progress. Now I must get to sleep before the sun comes up and I'll fix the spelling and grammar mistakes in this when I wake up.
Labels:
c#,
curvature data,
nds,
nintendo ds,
sketch processing
Thursday, 21 February 2008
Virtual Construction Toolkit
During my year working at Creative North, I worked on a project for the construction company JNBentley. The application's purpose was to give site managers a visual representation of the site they were currently working on. Accurate AutoCAD survey data of the site would be exported into the application, which generated a realistic representation that could be viewed in full 3D.
All of the programming was done in C++ and TorqueScript, with the VCT starting life as the Torque RTS Starter Kit. Huge parts of the original engine and systems were rewritten during the project to cater for some of the project's requirements.
The VCT provided the ability to place site vehicles, buildings and workers in the virtual environment, and even to create paths and roads for them to travel on. All the vehicles in the VCT could be controlled down to the smallest detail, with the user able to adjust everything from the position of every arm and scoop on an excavator to the bucket on a dumper truck. Spheres of influence showed the user the exact areas that could be affected by a vehicle at any point in time, providing a good visual representation of dangers on the site.
The application also featured a wide range of tools to add excavations or mark out zones. Fences and walls could be laid out, overhead cables simulated and distances measured with incredible accuracy. Notes could be placed in the environment or on specific objects to remind users of key information.
A complex scaffolding representation system was also included, which could be adjusted in many ways, allowing different weights of poles, board sizes, bay widths and much more. From this data, predicted costs could be generated for the site managers, giving them a better idea of how much the scaffolding might cost before ordering.
Environment features and underground pipework could be loaded in and generated from survey data, giving the user a visual representation of what the site might look like in six months' time, or even a view of what lies underground.
Approximately halfway through development, we produced a video demonstrating the application and its functionality.
HLSL - Part 3.
Bubble
The Bubble effect makes use of several different techniques, bringing them together into one. It features cube mapping, environment mapping, reflection, refraction, colour blending and vertex manipulation.
The effect consists of two core passes: one for rendering the environment map and one for rendering the bubble.
The second pass renders the bubble. During this pass, colours are obtained from several sources and blended together to produce the final output colour. These sources are a colour from a rainbow texture, the view through the bubble, the reflection off the front of the bubble and the reflection off the inside of the bubble.
The rainbow colour is chosen based on a time variable, added to the distance from the camera to the bubble, and added to the dot product of the view direction and light direction. The combination of these values produces a change in colour when zooming in and out, when rotating around the bubble, and slowly over time.
The next colour to be added is the pass-through colour. This is the colour from the cube map on the other side of the bubble. Refraction would affect the light passing through the bubble, magnifying the view slightly.
The third colour to add is the reflection from the inside of the bubble. This colour is multiplied by the inverse of the rainbow's alpha value so that it is only drawn where the rainbow colours are not (this reflection should be weak; here it is visualised as being overpowered by the rainbow colour value).
The final colour value to add is the reflection colour from the front of the bubble. The view direction is reflected in the bubble's normal and the colour is extracted from the cube map. This final colour is multiplied by the sum of the opacity value used for the inner reflection and an edge value, so that the reflection is more intense towards the edge of the bubble. It is then added to the output colour, which is finally rendered to the screen.
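To make the blending order concrete, here is the gist of the second pass expressed as plain C# rather than the actual HLSL. The texture sampling is omitted (the inputs are assumed to be already-sampled colours), and the names are mine, not the shader's:

```csharp
using System.Numerics;

static class BubbleShade
{
    // Rainbow lookup coordinate: time + camera distance + dot(view, light),
    // so the colour shifts when zooming, orbiting, and slowly over time.
    public static float RainbowCoord(float time, float camDist, Vector3 view, Vector3 light)
        => time + camDist + Vector3.Dot(view, light);

    // Blend order sketch. rainbowA is the rainbow colour's alpha; edge rises
    // towards the bubble's silhouette. Illustrative only.
    public static Vector3 Shade(Vector3 rainbow, float rainbowA,
                                Vector3 passThrough, Vector3 innerReflect,
                                Vector3 frontReflect, float reflectOpacity, float edge)
    {
        Vector3 colour = rainbow * rainbowA;                  // rainbow film colour
        colour += passThrough;                                // refracted view through the bubble
        colour += innerReflect * (1f - rainbowA);             // weak inner reflection where no rainbow
        colour += frontReflect * (reflectOpacity + edge);     // front reflection, strongest at the rim
        return Vector3.Clamp(colour, Vector3.Zero, Vector3.One);
    }
}
```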
Over Exposure
In real-life terms, exposure relates to the length of time a camera's shutter is open. The longer the shutter is open, the more light reaches the image. The extra light saturates the image and, eventually, the image is overpowered with light and turns completely white. This shader effect attempts to simulate the feel of over-exposure.
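The core of it reduces to scaling the scene colour by an exposure factor and letting it clamp towards white; a minimal sketch of that idea (the parameter name and form are mine, not the original shader's):

```csharp
using System.Numerics;

static class OverExposure
{
    // Multiply the scene colour by an exposure factor >= 1 and clamp; as the
    // factor grows, the image saturates towards pure white. Illustrative only.
    public static Vector3 Expose(Vector3 sceneColour, float exposure)
        => Vector3.Clamp(sceneColour * exposure, Vector3.Zero, Vector3.One);
}
```

HLSL - Part 2.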
More shader effects I have created.
2D Fire Effect
This effect uses a 2D screen-aligned quad and applies multiple passes to transform the original texture data into a fire effect. There are many examples of fire shaders available on the internet; however, I decided to create my own. The effect was refined and improved to reduce the number of ALU instructions needed, reaching good performance speeds.
The full effect consists of four passes. The first does most of the work, making the texture look like fire by applying colours and distorting the texture co-ordinates. The second, third and fourth down-sample and apply a Gaussian blur to the output of the first pass. Finally, the image is up-sampled and combined with the first pass' output to produce the final effect.
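Purely to illustrate the pass structure (the distortion, palette and kernel values below are placeholders, not reproduced from the real shader), the pipeline looks roughly like this on the CPU:

```csharp
using System;

static class FirePasses
{
    // Sketch of the four-pass structure on a grayscale intensity field.
    // time is assumed non-negative.
    public static float[,] Run(float[,] src, float time)
    {
        int w = src.GetLength(0), h = src.GetLength(1);
        var pass1 = new float[w, h];

        // Pass 1: distort the texture co-ordinates (a sideways wobble plus an
        // upward scroll) before sampling, which gives the flame its motion.
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
            {
                int sx = Math.Clamp((int)(x + 3 * Math.Sin(y * 0.15 + time)), 0, w - 1);
                int sy = ((int)(y + 20 * time)) % h;
                pass1[x, y] = src[sx, sy];
            }

        // Passes 2-4: blur the first pass' output. A small box blur at full
        // resolution stands in for the down-sampled Gaussian blur passes.
        var glow = new float[w, h];
        for (int x = 1; x < w - 1; x++)
            for (int y = 1; y < h - 1; y++)
                glow[x, y] = (pass1[x - 1, y] + pass1[x + 1, y] +
                              pass1[x, y - 1] + pass1[x, y + 1] + pass1[x, y]) / 5f;

        // Final combine: the glow is added back onto the first pass' output.
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                pass1[x, y] = Math.Min(1f, pass1[x, y] + 0.5f * glow[x, y]);
        return pass1;
    }
}
```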
Render Target Textures
This effect uses render targets to demonstrate techniques seen in games like Super Paper Mario, where 3D visuals are projected onto flat 2D surfaces in a 3D environment. In this effect, a 3D elephant model is drawn to a renderable texture which is mapped onto a 3D cube and a 2D quad plane in 3D space with alpha transparency.
The effect is broken down into three passes. The first renders the rotating elephant to a texture, from the point of view of a second camera. The areas of the texture not populated by elephant pixels are set to be completely transparent.
The second pass renders the outer 3D cube and maps the elephant texture onto the faces of the cube. The transparent areas of the texture are replaced with a golf course image, to provide a background.
The last pass renders a small quad in the centre of the world. The quad is alpha blended so that the only pixels drawn are those depicting the elephant. When the user looks straight at the quad, the elephant appears 3D; however, it is really just a projection.
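The alpha blending in that last pass is the standard source-over operation; in C# terms (the real work is done by the GPU's blend state, this just shows the maths):

```csharp
using System.Numerics;

static class AlphaBlend
{
    // Standard source-over blend: where the elephant texture is transparent
    // (srcA == 0) the destination shows through untouched, so only the
    // elephant's pixels are visibly drawn.
    public static Vector3 Over(Vector3 src, float srcA, Vector3 dst)
        => src * srcA + dst * (1f - srcA);
}
```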
Labels:
fire,
hlsl,
pixel shader,
render target,
rendermonkey,
shaders,
vertex shader
HLSL - Part 1.
HLSL (High Level Shader Language) is used to create vertex and pixel shader effects.
In November of 2007 I learnt HLSL and created some effects using AMD's RenderMonkey 1.71. The following are some of the shader models I created.
Sepia
This sepia shader model is a post-processing effect that could be applied to a colour at any time. Sepia tone effects produce images coloured in tones of brown and can commonly be found as a feature on modern digital cameras.
The implemented effect uses an adjustable percentage value which is used to linearly interpolate from no sepia tone to fully sepia.
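The conversion itself can use the widely quoted sepia weights, blended towards the original colour by the percentage value. A C# illustration (the weights are the common convention, not necessarily the exact ones I used):

```csharp
using System;
using System.Numerics;

static class Sepia
{
    // Blend from the original colour (amount = 0) to full sepia (amount = 1).
    // The 3x3 weights are the widely used sepia convention.
    public static Vector3 Apply(Vector3 c, float amount)
    {
        var sepia = new Vector3(
            Math.Min(1f, 0.393f * c.X + 0.769f * c.Y + 0.189f * c.Z),
            Math.Min(1f, 0.349f * c.X + 0.686f * c.Y + 0.168f * c.Z),
            Math.Min(1f, 0.272f * c.X + 0.534f * c.Y + 0.131f * c.Z));
        return Vector3.Lerp(c, sepia, amount);   // the adjustable percentage
    }
}
```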
I also added a 2D sepia effect using the same function but applied to a screen-aligned quad, in order to provide good visual examples.
Specular Highlight
Specular highlights are the bright spots of light that are seen when bright lights reflect off shiny objects. In computer games, specular highlights help give the user a clear idea of an object's shape and position within a scene. The specular highlight is added to the ambient and diffuse colour values to produce a simple model for the output colour of the object at a specific pixel.
There are several basic specular highlight formulae that produce varying visual effects. I have implemented a Gaussian distribution model and a Beckmann distribution model, but will just discuss the Beckmann distribution here.
The Beckmann distribution model offers a more physically realistic model than the Gaussian distribution, but is much more computationally heavy. For this reason, I calculate the majority of the non-specular processing in the vertex shader to reduce the number of arithmetic logic commands processed by the pixel shader.
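For reference, one common real-time form of the Beckmann distribution term looks like this in C# (a reference sketch, not the exact HLSL from my shader):

```csharp
using System;

static class Beckmann
{
    // Beckmann distribution term. m is the surface smoothness/roughness value
    // varied in the screenshots below; NdotH is the dot product of the surface
    // normal and the half-vector between the light and view directions.
    public static float Distribution(float NdotH, float m)
    {
        if (NdotH <= 0f) return 0f;
        float c2 = NdotH * NdotH;                      // cos^2(alpha)
        float m2 = m * m;
        // exp(-tan^2(alpha) / m^2) / (pi * m^2 * cos^4(alpha))
        return (float)Math.Exp((c2 - 1f) / (m2 * c2)) / (float)(Math.PI * m2 * c2 * c2);
    }
}
```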
The following images show the Beckmann distribution with varying surface smoothness values.
Edge Glow
The Edge Glow effect is not based on any realistic lighting model, but on an effect used in the game Super Mario Galaxy. The game is set in space, where the backgrounds are primarily dark shades. To help the 3D models rendered against these darker backgrounds stand out, the models all appear to have glowing edges.
From looking at screenshots, I created a formula to generate a similar effect. I also added a scaling value to adjust the amount of glow applied by the process. The following images show the effect in action.
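I haven't reproduced my exact formula here, but a rim/edge glow of this kind is typically driven by how far the surface normal turns away from the view direction, raised to a power and scaled; a plausible sketch, with hypothetical parameter names:

```csharp
using System;
using System.Numerics;

static class EdgeGlow
{
    // Hypothetical rim-glow term: strongest where the normal is perpendicular
    // to the view direction (the model's silhouette). 'power' sharpens the
    // falloff and 'scale' is the adjustable glow amount mentioned above.
    public static float Rim(Vector3 normal, Vector3 viewDir, float power, float scale)
    {
        float facing = Math.Max(0f, Vector3.Dot(Vector3.Normalize(normal),
                                                Vector3.Normalize(viewDir)));
        return scale * (float)Math.Pow(1f - facing, power);
    }
}
```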
Labels:
beckmann,
edge glow,
gaussian,
hlsl,
pixel shader,
rendermonkey,
sepia,
shaders,
specular highlight,
vertex shader
Sketch Processing in Games
Sketch Processing is the process of taking input data from a device such as a touchpad and transforming the original raw data into a chosen output.
I am currently undertaking a project in which I am attempting to implement several sketch processing techniques in a Nintendo DS application, in order to analyse the possibilities and limitations of the technology within a gaming environment.
So far I have researched and analysed existing techniques spanning the subject and chosen a few to implement in my DS application.
I will be looking into:
- Stroke Approximation with Average Based Filtering.
- Scale Space Filtering.
- Least Square Fit Fluid Sketching.
- $1 Recognizer.