Another quick video of some new levels.
Thursday 10 April 2008
Wednesday 9 April 2008
Quintessence Update
Just a quick one. Not had much time to blog recently, but here's an up-to-date video for you.
Wednesday 19 March 2008
Blog Redesign
One more quick blog post.
If you have been here before, you may have noticed that I redesigned my blog last week. I fancied getting my teeth into some classic CSS and design stuff. I hope you like the results.
Oh, and yes, ALL the artwork, design and coding was done by me. Including the vector DS which I actually drew in Microsoft Visio (don't ask!).
As for me, I need to get out of the library and get to bed. Although I am enjoying this 24/7 library thing! :)
Laters.
Labels:
css,
design,
photoshop,
vector art,
web programming
Particles!
Three blogs in one day might be excessive, but it's just past midnight so I guess I'm OK.
So then ladies and gentlemen. One thing I have not blogged about as of yet is the project I'm working on with some guys from university for our "Team Project" module.
Quick brief. Team of 8. Half programmers, half designers. 8ish months. 1 game prototype.
We, "Team C", have been building our game for a while now and it's finally coming together quite well. The game is called Quintessence, which roughly translates from Latin as "five elements" or "the fifth element".
Quintessence is a Marble Blast / Super Monkey Ball style game where the player must control the ball, rolling it from the start of a level to the finish, as quickly as possible. The challenge is added in the form of elements. The ball is transformed into different elements by rolling onto "Altars" which change the current element in order for the player to overcome obstacles such as hedges, water, and brick walls.
There are 5 elemental balls in total, each with their own characteristics in terms of weight, speed, acceleration, grip and visual style. They are Neutral, Fire, Water, Earth and Air.
Fire burns hedges, Water (ice) freezes water, and Earth smashes walls. Air is just floaty-light!
So that's the game, and I've been working on a particle engine this week, which I adapted from a 2D particle engine tutorial into a fully 3D version. :)
The particle system is to be used for fire effects, ice and snow particles, chunks of rock, water splooshes and much more.
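The adaptation from 2D to 3D mostly comes down to carrying a third position and velocity component through the update loop. A minimal sketch of the idea (the field names and gravity handling here are illustrative, not the project's actual engine code):

```cpp
#include <vector>

// Illustrative sketch of a simple 3D particle, extended from the usual
// 2D tutorial layout by adding a z component to position and velocity.
struct Particle {
    float x, y, z;     // position
    float vx, vy, vz;  // velocity
    float life;        // remaining lifetime in seconds
};

// Advance every live particle by one timestep and apply gravity.
void UpdateParticles(std::vector<Particle>& particles, float dt, float gravity) {
    for (Particle& p : particles) {
        if (p.life <= 0.0f) continue;  // skip dead particles
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;
        p.vy -= gravity * dt;          // pull downwards
        p.life -= dt;
    }
}
```

Each emitter can then own one of these lists and spawn particles with element-appropriate velocities and lifetimes.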
The visual style is cartoony and we are in the process of redrawing almost all of the textures to make it even more so, hence the blobby cartoony style of the fire. At last, here is a video of the effect.
Since making that video earlier in the week I have created a fireworks effect for when the player finishes the level, and snow and ice particles that fly off the Water(ice) ball when it rolls.
Tuesday 18 March 2008
More Web Programming
I figured that because web programming is soooooo easy compared to the sort of coding we do everyday, I might as well get it learnt and add it to my big pot o' skills.
So this week I decided to have a look at JavaScript and more in depth with PHP. A simple image viewer / gallery seemed a wise thing to make as a test, so I grabbed a bunch of album cover images I had lying in a folder and got to work.
After a while it all started to make sense and I whacked together a simple php page that shows a list of images, which can be scrolled through using buttons or by clicking the image you want to skip directly to.
Click here to view the page.
It's all quite nice and works well enough. The whole system is completely flexible, allowing it to be used with any number of images in total and any number of images on screen at once (changeable using a little form and php post values).
I also figured it'd be wise to have the code viewable, but just using the browser's "View Source" option doesn't display all the php. So I wrote another php page that parses and displays the index.php code in the same folder. Another php file is included in the index.php page which adds a small link box to the code parser in the bottom right hand corner of the screen. All neat and reusable for future php stuff.
Click here to view the code parser page.
On the point of future php stuff, I'm going to look into making an editable contact info MySQL database page for names and addresses, accessible to certain users to act as an easily maintained global address book for families (specifically mine). This way, everyone can easily access and update family members' addresses, phone numbers, email addresses and much more.
Labels:
gallery,
image viewer,
javascript,
php,
web programming
Sketch Processing Update
Today I have 3 exciting new developments for you, along with the usual enthralling updates!
We'll start with the bit most of you will just want to see. The video.
Fun Colours
After developing the application to its current state with little more than the odd pure red, green or blue line to keep my brain awake, I decided it was time for some bright colours!

Technically, the real reason behind all this was to have a simple way of identifying different properties by colour coding the line segments based on specific properties such as how curved they are.
Last year I spent a few weeks writing low level colour manipulation algorithms for mobile phones, so I knew that a HSV to RGB converter would provide me with the ability to transform the parameter to an angle (Hue), and pump out nice rainbow colours.
This diagram is a good representation of HSV colour space:
(H: Hue, S: Saturation, V: Value )
I found a simple HSV -> RGB conversion function online, which used floating point numbers. Again, after my work on mobiles last year I have become accustomed to avoiding floats where possible, so I rewrote the function with bit-shifted fixed point numbers to save some processing time.
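The float-free conversion can be sketched with plain integer maths. This is a generic integer-only HSV -> RGB, not my original bit-shifted DS function, but it shows the idea of walking around the hue wheel without ever touching a float:

```cpp
#include <cstdint>

struct Rgb { uint8_t r, g, b; };

// Integer-only HSV -> RGB conversion (a generic sketch of the approach).
// h is in degrees [0, 360); s and v are in [0, 255].
Rgb HsvToRgb(int h, int s, int v) {
    if (s == 0) return {uint8_t(v), uint8_t(v), uint8_t(v)};  // grey
    int region = h / 60;                 // which 60-degree sextant of the wheel
    int f = (h % 60) * 255 / 60;         // position within the sextant, 0..255
    int p = v * (255 - s) / 255;
    int q = v * (255 - s * f / 255) / 255;
    int t = v * (255 - s * (255 - f) / 255) / 255;
    switch (region) {
        case 0:  return {uint8_t(v), uint8_t(t), uint8_t(p)};
        case 1:  return {uint8_t(q), uint8_t(v), uint8_t(p)};
        case 2:  return {uint8_t(p), uint8_t(v), uint8_t(t)};
        case 3:  return {uint8_t(p), uint8_t(q), uint8_t(v)};
        case 4:  return {uint8_t(t), uint8_t(p), uint8_t(v)};
        default: return {uint8_t(v), uint8_t(p), uint8_t(q)};
    }
}
```

Feeding a cycling hue value in (with full saturation and value) is what produces the rainbow colouring of the stroke.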
The result was lovely, but in the end I decided just to use the colours for the input stroke, with the colours cycling based on the input point ID.
Slider Bars
Developing on a really low-level platform like the DS means you are constantly having to write your own little classes to do the most basic of tasks.

Think of a button for example. You would think it'd be pretty straightforward, right? No.
Anyway, as annoying as a solid button system was to implement, a slider bar was all the more frustrating (actually, that's a lie, I quite enjoyed building it!). The toggle buttons simply link to a Boolean value which is altered as they are pressed, reflecting the current state of the button (toggled = true, and untoggled = false).
I started with my screen object class, which had inherited "Touched" and "Released" functions that were passed a bool indicating whether the stylus was inside the object or not. This was fine for the buttons, but when you want a slider, which needs to know whereabouts within itself the stylus is, more information is required. This meant rewriting the function from the top, with the stylus co-ords being passed in and the root object calculating and passing down whether the stylus was inside or not.
This worked well and the rest quickly fell into place, and I added a linked variable with min and max values acting as the affected value.
I was using my first slider as a multiplier for the calculation of the Beziér control points (see the next chunk of text), but these were generated when the sketch was processed (only once at the end of the stroke input). To rebuild them I had to press the "A button" which ran a little function that reprocessed the stroke.
Unfortunately whenever the slider was adjusted I had to rebuild the stroke manually. This is when I decided to get my head around function pointers.
They turned out to be surprisingly straight-forward and I quickly added a function pointer to the slider that would run whichever function you specify whenever the slider value was altered. I was really impressed with the result, and it makes the application much more interesting to use. :)
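A rough sketch of the slider-plus-function-pointer arrangement (the class layout and names here are made up for illustration; the real code lives inside my screen object hierarchy):

```cpp
// Illustrative slider: it owns a pointer to the value it drives and a
// function pointer that fires whenever the value changes.
struct Slider {
    float minValue, maxValue;
    float* linked;            // the variable the slider edits
    void (*onChanged)();      // called after every adjustment

    // t is the stylus position along the slider track, 0.0 .. 1.0
    void Set(float t) {
        *linked = minValue + (maxValue - minValue) * t;
        if (onChanged) onChanged();  // e.g. reprocess the stroke
    }
};

// Hypothetical callback standing in for the stroke-reprocessing function.
float gMultiplier = 0.0f;
int gRebuildCount = 0;
void RebuildStroke() { gRebuildCount++; }
```

With `RebuildStroke` wired into `onChanged`, dragging the slider rebuilds the Bézier fit live instead of waiting for a button press.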
Bézier Curves
The final stage of the Stroke Approximation technique is to identify the curved areas of the stroke and represent them with Bézier curves. To identify the curved areas, a ratio l2:l1 is calculated for each line segment of the final hybrid fit polyline, where l2 is the distance along the original stroke from one point on the polyline to the next, and l1 is the distance directly between the two points.
Curved areas will obviously have a higher ratio than the more accurately represented straighter segments. The identified curved segments can then be removed from the polyline and represented with Beziér curves.
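As a sketch, the l2:l1 ratio for the segment between two polyline points can be computed directly from the raw stroke (a simplified illustration, not the app's code):

```cpp
#include <cmath>
#include <vector>

struct Pt { float x, y; };

static float Dist(const Pt& a, const Pt& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

// Ratio of path length along the raw stroke (l2) to the straight-line
// distance (l1) between raw points i and j. A ratio near 1 means the
// segment is nearly straight; higher means curved.
float SegmentRatio(const std::vector<Pt>& stroke, int i, int j) {
    float l2 = 0.0f;
    for (int k = i; k < j; ++k) l2 += Dist(stroke[k], stroke[k + 1]);
    float l1 = Dist(stroke[i], stroke[j]);
    return l2 / l1;
}
```

Segments whose ratio exceeds a chosen threshold are the ones pulled out of the polyline and handed to the Bézier fitting stage.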
Bézier curves are represented by a start point, an end point and a number of control points. The more control points a curve has, the more complex it becomes. The number of control points denotes the "order" of the curve, representing its mathematical complexity. A cubic curve (as seen below) has two control points.
Generating the Bézier curves to represent the curved areas of the sketch is a matter of finding an empirical value that best fits the user's drawing style. This is the multiplier that the slider adjusts.
When you have limited rendering capabilities available, it is best to represent a Bézier curve as a polyline for rendering. This means we need to work out several points along the curve and use them as the polyline representation of the Bézier representation of the polyline representation of the original stroke! This is starting to get crazy!
Basically, you need to boil the curve down by calculating sub-curves using the control points until a single point remains. This will give you the coordinate required at a chosen percentage along the original curve.
I won't go into the maths too deep at this point as I doubt anyone will even read this far, but there are some really interesting overlaps with Fibonacci series here which I may look into more at a later point!
If you want to have a play with an insane Java Applet of a cubic Bézier curve, I recommend checking out this site.
This image was just too cool to leave out. :)
Labels:
bezier curves,
colours,
gui,
nds,
sketch processing
Sunday 9 March 2008
High Scores and XML
Its time for something a little more fun in my exciting blogging world!
Today, XML!
To cut a long story short, I used to play games pretty competitively, vying for the best times / scores on various websites. With multiple websites tracking scores and multiple games to keep track of, I always wished I had a way of storing my high scores in a simple format that could be easily updated or manipulated.
HTML was too static and Excel was too local, but now, XML provides a wonderfully flexible solution to the problem. Hooray!
I figured I would write a basic XSLT stylesheet for my new XML file so I can plop it onto my website and make it look pretty and readable.
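For illustration, the kind of structure I have in mind looks something like this; the element names, file name and values here are placeholders rather than my real data:

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="scores.xsl"?>
<scores>
  <game name="Example Game">
    <record track="Example Track" time="0:00.00" worldRecord="false"/>
  </game>
</scores>
```

The `xml-stylesheet` processing instruction is what tells the browser to apply the XSLT when the file is viewed directly, so the raw data file doubles as the pretty page.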
So far I have added several games to my data file and it can be viewed in all its glory HERE.
Gold stars like this indicate world records
Current games:
- TimeSplitters 2
- F-Zero GX
- Marble Blast Ultra
- Metroid Prime
- SSX 3
- Super Mario 64 DS
- Super Monkey Ball
I'm intending to write a php wrapper page for it which can be used to sort / manipulate the data via the web page, but for now I would love to hear your thoughts on it so far.
Thanks, and enjoy!
Sorting Algorithms
During the development of my Sketch Processing application I reached a stage where the processing of each stroke was starting to take amounts of time noticeable to the naked eye. As the application was at an early stage of its life, my concern was that if I was getting slowdown already then I had much worse to come.
Generally the processing was taking seemingly exponential lengths of time in comparison to the number of points in the input.
After a quick time profiling addition to the code I was able to break down the processing in order to find if there was one chunk of code eating up my processing time. It soon became apparent that sorting the curvature and speed data (float arrays equal in size to the number of input points) was accounting for around 98.5% of the processing for an input of 200 points (around 2 seconds to process!).
I was relieved to find it was just a sorting algorithm that was the nuisance as I knew plenty about them and knew there would be a quicker solution available.
After reading through my basic and frankly terrible original sorting process, I realised that I'd written a type of Insertion Sort algorithm (one of the slower types of sort).
Some quick googling revealed a nice selection of algorithms and explained how there were basically 2 groups of comparison sorts. Fast, and slow.
These handy graphs show the variations in speed between the two groups, O(n^2) and O(n log n).
You can see that the slow algorithm graph has a vertical scale measured in hundreds of seconds, whereas the fast algorithm graph has a vertical scale measured in tenths of a second. This is clearly a huge difference.
Some more research helped me identify the Comb Sort as a reasonable solution. It acts in a similar way to the Bubble Sort, a relatively slow algorithm; however, instead of comparing adjacent values, an initial gap the size of the data length is used and shrunk by a factor of roughly 1.3 on each pass over the data. This "combs" through the data and eliminates a large portion of the unnecessary comparisons, making the algorithm nearly as fast as a Quick Sort whilst retaining the simplicity of a Bubble Sort.
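As a sketch of the technique (not the exact code in the app), a comb sort is only a small change on top of a bubble sort:

```cpp
#include <algorithm>
#include <cstddef>

// Comb sort over a float array: like bubble sort, but the comparison gap
// starts at the array length and shrinks by a factor of ~1.3 each pass,
// finishing as a plain bubble sort once the gap reaches 1.
void CombSort(float* data, std::size_t n) {
    std::size_t gap = n;
    bool swapped = true;
    while (gap > 1 || swapped) {
        gap = (gap * 10) / 13;     // shrink factor of ~1.3, in integer maths
        if (gap < 1) gap = 1;
        swapped = false;
        for (std::size_t i = 0; i + gap < n; ++i) {
            if (data[i] > data[i + gap]) {
                std::swap(data[i], data[i + gap]);
                swapped = true;
            }
        }
    }
}
```

The early large-gap passes are what move far-out-of-place values (the "turtles" that cripple bubble sort) most of the way home in a handful of comparisons.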
Once the Comb Sort was applied, I recorded speed increases of around 100 times, bringing the total processing time back down to unnoticeable levels.
Sketch Application Progress
It has been about a week since I got the initial system of feature point identification in place in my application and since then I have been working on completing the process, refining the system and general improvements all round.
The system now has a fully functional Average Based Filtering process in place, as well as Hybrid Fit Generation and refining.
Average Based Filtering
Average Based Filtering involves taking the initial raw input data from a stroke and processing it to pick out key data points, representing intended features such as corners. By joining these feature points together we create a basic polyline representation of the sketched shape.

The incoming data consists of a list of points recorded each frame whilst the user has the stylus on the input device. The co-ordinates of each input point are stored in two integers; one for the x-position and one for the y-position. A time value is also recorded, as the amount of time passed in milliseconds since the start of the stroke.
From these three key pieces of information, curvature and speed data can be calculated for each point "n". Curvature values are created using the change in angle between points n-1 to n, and n to n+1, and the classic formula "speed = distance / time" is used to create a list of speed values.
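A simplified sketch of how the two data sets can be derived from the raw samples (the endpoint handling and units here are illustrative; the real system does more):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One raw input sample: integer stylus coordinates plus a millisecond
// timestamp relative to the start of the stroke.
struct Sample { int x, y, timeMs; };

// Speed at point n: distance from point n-1 to n over the elapsed time,
// per "speed = distance / time". Endpoints are simply left at zero here.
std::vector<float> SpeedData(const std::vector<Sample>& s) {
    std::vector<float> speed(s.size(), 0.0f);
    for (std::size_t n = 1; n < s.size(); ++n) {
        float dx = float(s[n].x - s[n - 1].x);
        float dy = float(s[n].y - s[n - 1].y);
        float dt = float(s[n].timeMs - s[n - 1].timeMs);
        if (dt > 0.0f) speed[n] = std::sqrt(dx * dx + dy * dy) / dt;
    }
    return speed;
}

// Curvature at point n: absolute change in heading between the segments
// (n-1, n) and (n, n+1).
std::vector<float> CurvatureData(const std::vector<Sample>& s) {
    std::vector<float> curv(s.size(), 0.0f);
    for (std::size_t n = 1; n + 1 < s.size(); ++n) {
        float a1 = std::atan2(float(s[n].y - s[n - 1].y),
                              float(s[n].x - s[n - 1].x));
        float a2 = std::atan2(float(s[n + 1].y - s[n].y),
                              float(s[n + 1].x - s[n].x));
        curv[n] = std::fabs(a2 - a1);
    }
    return curv;
}
```

A sharp corner shows up as a curvature spike coinciding with a speed dip, which is exactly what the mean thresholds below pick out.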
Means are calculated for each set of data, with any curvature regions above the curvature mean and any speed regions below 90% of the speed mean being stored as feature regions. The peak values of the intersection of the two region sets are then used as the initial feature points of the sketch.
The feature points are thus located at areas of great change in direction and low speed.
Hybrid Fit Generation and Refinement
Once an initial set of key feature points has been located, the result can be improved by adding extra points to provide a closer fit to the original input. The best output would involve the closest match to the original stroke while containing as few points as possible. Getting the balance right is of key importance.

To start with, an initial Hybrid Fit is created, consisting of the output from the average based filtering. From this we can then add the points that will provide the greatest improvement.
To pick the best candidate point we create 2 possible new fits: one with the best unused speed point and one with the best unused curvature point. These are decided by sorting the speed and curvature data (a problem unto itself!).
With two possible candidate fits, a metric is required to identify the better of the two. This comes in the form of an error value created from the sum of the orthogonal distance squared of the original points to the corresponding line segment of each candidate fit. The fit with the lowest error is then chosen as the new Hybrid Fit and the other is discarded. This cycle is then repeated until the error value drops below a specified value, and thus the final Hybrid Fit is found.
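The error metric can be sketched as a clamped point-to-segment projection summed over the raw points (illustrative, not the app's exact code):

```cpp
#include <vector>

struct V2 { float x, y; };

// Squared orthogonal (perpendicular) distance from point p to segment a-b.
float OrthoDistSq(V2 p, V2 a, V2 b) {
    float vx = b.x - a.x, vy = b.y - a.y;
    float lenSq = vx * vx + vy * vy;
    if (lenSq == 0.0f) {  // degenerate zero-length segment
        float dx = p.x - a.x, dy = p.y - a.y;
        return dx * dx + dy * dy;
    }
    // Project p onto the segment, clamping to the endpoints.
    float t = ((p.x - a.x) * vx + (p.y - a.y) * vy) / lenSq;
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    float dx = p.x - (a.x + t * vx);
    float dy = p.y - (a.y + t * vy);
    return dx * dx + dy * dy;
}

// Error contribution of one fit segment: sum of squared orthogonal
// distances of the raw stroke points it covers. The full metric sums
// this over every segment of the candidate fit.
float SegmentError(const std::vector<V2>& raw, V2 a, V2 b) {
    float err = 0.0f;
    for (const V2& p : raw) err += OrthoDistSq(p, a, b);
    return err;
}
```

Whichever candidate fit yields the smaller total becomes the new Hybrid Fit for the next iteration.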
Discussion
The system's reliance upon empirical values, such as the means in the average based filtering and the error threshold in the hybrid fit refinement, means that optimum solutions are less likely from person to person, as each user's input style varies. A better system would not rely on these values, which require tweaking to achieve the best results for a specific person.

The next system that I will implement will use scale space to identify the key feature points of the input without the need for any empirical data.
Thursday 28 February 2008
Success!
This is like a "Eureka moment" for me right now; I have just successfully finished writing my first sketch processing algorithm for the DS app.
The current system is using a basic feature point identification system, similar to the initial stages of Stroke Approximation.
I haven't timed the process yet but the output seems as instant as could be, which is good news. Woo!
Here's a screen shot of it processing a star I drew. Black squares are the identified feature points.
Labels:
nds,
nintendo ds,
sketch processing,
stroke approximation
Saturday 23 February 2008
Sketch Processing Progress
After a week of doing very little work on my sketch processing project, I decided to get some more coding done. Well, it is now 6:25am and a long way past my intended bedtime, but progress is good. I started by fixing a few bugs from my last coding session and quickly got back into the swing of the code. I then started to put together some code to generate a set of curvature data from the initial stroke information.
In comparison to XNA and C#, the biggest problem with programming on the Nintendo DS is that there is almost no help from the IDE: every error takes about ten times longer to debug, and actually writing the code is much more tedious. With something like C#, all the functionality is at your fingertips, letting you quickly print to the screen or output information to a console window or a file. The lack of all this on the DS means that debugging, or writing new and specifically maths-intensive code, becomes all the more difficult.
So I had put together this basic code, but without some tests I wasn't confident it would be getting the correct output data, and without break-points or any decent interface, I was working blind.
So instead of continuing as I was, I decided to create a simple C# app that loaded some pre-written stroke data from a text file (created by copying values from the screen of the DS app), and processed the data with almost the same code as was running on the DS. This way I could visualise, debug and check the code effectively, and reflect any changes in the DS code afterwards.
I basically ended up writing a nice little app that drew a representation of how the stroke would be seen on the DS screen, a dynamically sized curvature graph of the data with a mean cut-off line, data values, feature point identification, and labelled, zoomed views of identified corners.
All in all the tool is quite useful and I will be using it for analysis on data taken directly from the DS app to visualise the information as image files, graphs and much more. All these can then be used in my report as evidence of the data and processing.
I also updated some of the GL rendering code on the DS app, but still need to fix some of the timer code.
So overall, quite good progress. Now I must get to sleep before the sun comes up and I'll fix the spelling and grammar mistakes in this when I wake up.
Labels:
c#,
curvature data,
nds,
nintendo ds,
sketch processing
Thursday 21 February 2008
Virtual Construction Toolkit
During my year working at Creative North, I worked on a project for the construction company JNBentley. The application's purpose was to give site managers a visual representation of the site they were currently working on. Accurate AutoCAD survey data would be exported into the application, which generated a realistic representation of the site that could be viewed in full 3D.
All of the programming was done in C++ and TorqueScript, with the VCT starting life as the Torque RTS Starter Kit. Huge parts of the original engine and system were rewritten during the project to cater for some of the project's requirements.
The VCT provided the ability to place site vehicles, buildings and workers in the virtual environment, and even create paths and roads for them to travel on. All the vehicles in the VCT could be controlled down to the smallest detail, with the user able to adjust everything from the position of every arm and scoop on an excavator to the bucket on a dumper truck. Spheres of influence showed the user the exact areas that could be affected by a vehicle at any point in time, providing a good visual representation of dangers on the site.
The application also featured a wide range of tools to add excavations or mark out zones. Fences and walls could be laid out, overhead cables simulated and distances measured with incredible accuracy. Notes could be placed in the environment or on specific objects to remind users of key information.
A complex scaffolding representation system was also included, which could be adjusted in many ways, allowing different weights of poles, board sizes, bay widths and much more. From this data, predicted costs could be generated for the site managers, giving them a better idea of how much the scaffolding might cost before ordering.
Environment features and underground pipework could be loaded in and generated from survey data, giving the user a visual representation of what the site might look like in six months' time, or even a view underground.
Approximately halfway through development we produced a video demonstrating the application and its functionality.
HLSL - Part 3.
Bubble
The Bubble effect makes use of several different techniques, bringing them together into one. It features cube mapping, environment mapping, reflection, refraction, colour blending and vertex manipulation. The effect consists of two core passes: one for rendering the environment map and one for rendering the bubble.
The second pass renders the bubble. During this pass, colours are obtained from several sources and blended together to produce the final output colour. These sources are a colour from a rainbow texture, the view through the bubble, the reflection off the front of the bubble, and the reflection off the inside of the bubble.
The rainbow colour is chosen based on a time variable, added to the distance from the camera to the bubble, and added to the dot product of the view direction and light direction. The combination of these values produces a change in colour when zooming in and out, when rotating around the bubble, and slowly over time.
The next colour to be added is the pass-through colour. This is the colour from the cube map on the other side of the bubble. Refraction would affect the light passing through the bubble, magnifying the view slightly.
The third colour to add is the reflection from the inside of the bubble. This colour is multiplied by the inverse of the rainbow's alpha value so it is only drawn where the rainbow colours are not (this reflection should be weak; here it is visualised as being overpowered by the rainbow colour).
The final colour value to add is the reflection from the front of the bubble. The view direction is reflected in the bubble's normal and the colour is extracted from the cube map. This colour is multiplied by the sum of the opacity value used for the inner reflection and an edge value, so that the reflection is more intense towards the edge of the bubble. Finally, it is added to the output colour, which is then rendered to the screen.
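The blending of the four colour sources might look something like this. It's a rough C++ sketch of the per-pixel maths rather than the actual shader; the exact weighting of the inner and front reflections is my reading of the description, and all names are hypothetical:

```cpp
#include <cmath>

struct Col { double r, g, b; };

Col add(Col a, Col b)       { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
Col scale(Col c, double s)  { return {c.r * s, c.g * s, c.b * s}; }

// Combine the four colour sources into the bubble's output colour.
// rainbowAlpha is the rainbow texture's alpha; reflectOpacity is the weak
// inner-reflection weight; edge rises towards the bubble's rim
// (e.g. 1 - |dot(N, V)|).
Col bubbleColour(Col rainbow, double rainbowAlpha,
                 Col passThrough, Col innerReflect, Col frontReflect,
                 double reflectOpacity, double edge) {
    Col out = scale(rainbow, rainbowAlpha);                  // rainbow sheen
    out = add(out, passThrough);                             // view through the bubble
    out = add(out, scale(innerReflect,
                         reflectOpacity * (1.0 - rainbowAlpha))); // inner reflection, only where the rainbow is weak
    out = add(out, scale(frontReflect, reflectOpacity + edge));   // front reflection, stronger at the rim
    return out;
}
```

In the shader, the reflection colours would come from cube-map lookups of the reflected view direction; here they are simply passed in as parameters.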
Over Exposure
In real-life terms, exposure relates to the length of time a camera's shutter is open. The longer the shutter is open, the more light reaches the image. The extra light saturates the image until, eventually, it is overpowered with light and turns completely white. This shader effect attempts to simulate the feel of over-exposure.
HLSL - Part 2.
More shader effects I have created.
2D Fire Effect
This effect uses a 2D screen-aligned quad and applies multiple passes to transform the original texture data into a fire effect. There are many examples of fire shaders available on the internet; however, I decided to create my own. The effect was refined and improved to reduce the number of ALU instructions needed, and thus reach good performance. The full effect consists of four passes. The first does most of the work, making the texture look like fire by applying colours and distorting the texture co-ordinates. The second, third and fourth down-sample and apply a Gaussian blur to the output of the first pass. Finally, the image is up-sampled and combined with the first pass' output to produce the final effect.
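The Gaussian blur used in the down-sampling passes boils down to a normalised kernel applied once horizontally and once vertically. A small C++ sketch of the idea (names are mine; the shader version would sample neighbouring texels instead of array entries):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Build a normalised 1D Gaussian kernel; applying it horizontally and then
// vertically gives the separable blur used in the down-sample passes.
std::vector<double> gaussianKernel(int radius, double sigma) {
    std::vector<double> k(2 * radius + 1);
    double sum = 0.0;
    for (int i = -radius; i <= radius; ++i) {
        k[i + radius] = std::exp(-(i * i) / (2.0 * sigma * sigma));
        sum += k[i + radius];
    }
    for (double& w : k) w /= sum;   // weights sum to 1 so overall brightness is preserved
    return k;
}

// Blur one row of intensities with the kernel, clamping at the edges.
std::vector<double> blurRow(const std::vector<double>& row,
                            const std::vector<double>& k) {
    int radius = static_cast<int>(k.size()) / 2;
    int n = static_cast<int>(row.size());
    std::vector<double> out(row.size(), 0.0);
    for (int x = 0; x < n; ++x)
        for (int i = -radius; i <= radius; ++i) {
            int s = std::min(std::max(x + i, 0), n - 1);  // clamp sample index
            out[x] += row[s] * k[i + radius];
        }
    return out;
}
```

Doing the blur on a down-sampled image is what keeps the pass cheap: the kernel covers a wider apparent area for the same number of samples.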
Render Target Textures
This effect uses render targets to demonstrate techniques seen in games like Super Paper Mario, where 3D visuals are projected onto flat 2D surfaces in a 3D environment. In this effect, a 3D elephant model is drawn to a renderable texture which is mapped onto a 3D cube and a 2D quad plane in 3D space with alpha transparency. The effect is broken down into three passes. The first renders the rotating elephant to a texture, from the point of view of a second camera. The areas of the texture not populated by elephant pixels are set to be completely transparent.
The second pass renders the outer 3D cube and maps the elephant texture onto the faces of the cube. The transparent areas of the texture are replaced with a golf course image, to provide a background.
The last pass renders a small quad in the centre of the world. The quad is alpha blended so that the only pixels drawn are those depicting the elephant. When the user looks straight at the quad, the elephant appears 3D however it is really just a projection.
Labels:
fire,
hlsl,
pixel shader,
render target,
rendermonkey,
shaders,
vertex shader
HLSL - Part 1.
HLSL (High Level Shader Language) is used to create vertex and pixel shader effects.
In November 2007 I learnt HLSL and created some effects using AMD's RenderMonkey 1.71. The following are some of the shader models I created.
Sepia
This sepia shader model is a post-processing effect that can be applied to a colour at any time. Sepia-tone effects produce images coloured in tones of brown, and can commonly be found as a feature on modern digital cameras. The implemented effect uses an adjustable percentage value which is used to linearly interpolate from no sepia tone to fully sepia.
I added a 2D sepia effect using the same function, but applied to a screen-aligned quad, in order to provide good visual examples.
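The percentage lerp might look like this in C++. The sepia weighting matrix below is the widely used approximation, not necessarily the one in the shader, and the names are mine:

```cpp
#include <algorithm>

struct RGB { double r, g, b; };

// Sepia as a post-process: compute the sepia-weighted colour, then linearly
// interpolate between the original and sepia colours by `amount` in [0, 1]
// (0 = untouched, 1 = fully sepia).
RGB sepia(RGB c, double amount) {
    RGB s = {
        std::min(1.0, 0.393 * c.r + 0.769 * c.g + 0.189 * c.b),
        std::min(1.0, 0.349 * c.r + 0.686 * c.g + 0.168 * c.b),
        std::min(1.0, 0.272 * c.r + 0.534 * c.g + 0.131 * c.b)
    };
    return { c.r + (s.r - c.r) * amount,
             c.g + (s.g - c.g) * amount,
             c.b + (s.b - c.b) * amount };
}
```

In HLSL the interpolation is a single lerp() intrinsic; the clamping here mirrors the implicit saturation of the render target.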
Specular Highlight
Specular highlights are the bright spots of light that are seen when bright lights reflect off shiny objects. In computer games, specular highlights help give the user a clear idea of an object's shape and position within a scene. The specular highlight is added to the ambient and diffuse colour values to produce a simple model for the output colour of the object at a specific pixel.
There are several basic specular highlight formulae that produce varying visual effects. I have implemented a Gaussian distribution model and a Beckmann distribution model, but will just discuss the Beckmann distribution here.
The Beckmann distribution offers a more realistic physical model than the Gaussian distribution; however, it is much more computationally expensive. For this reason, I calculate the majority of the non-specular processing in the vertex shader, to reduce the number of arithmetic logic instructions processed by the pixel shader.
The following images show the Beckmann distribution with varying surface smoothness values.
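For reference, the Beckmann distribution term itself is compact; here is a C++ sketch of the standard formula (the shader version would take dot(N, H) from interpolated vertex data):

```cpp
#include <cmath>

// Beckmann distribution term for a specular highlight. nDotH is the dot
// product of the surface normal and the half vector; m is the RMS surface
// roughness (smaller m = smoother surface = tighter, brighter highlight).
double beckmann(double nDotH, double m) {
    if (nDotH <= 0.0) return 0.0;                 // facet not visible to the light
    double c2 = nDotH * nDotH;                    // cos^2(alpha)
    double t2 = (1.0 - c2) / c2;                  // tan^2(alpha)
    const double kPi = 3.14159265358979323846;
    return std::exp(-t2 / (m * m)) / (kPi * m * m * c2 * c2);
}
```

The exp() and divisions are what make it heavier than the Gaussian model, which is why as much work as possible is pushed to the vertex shader.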
Edge Glow
The Edge Glow effect is not based on any realistic lighting model, but on an effect used in the game Super Mario Galaxy. The game is set in space, where the backgrounds are primarily dark shades. To help the 3D models rendered against these darker backgrounds stand out, they all appear to have glowing edges. From looking at screenshots, I created a formula to generate a similar effect. I also added a scaling value to adjust the amount of glow applied by the process. The following images show the effect in action.
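The post doesn't give the actual formula, but one common way to get this rim-glow look is shown below, as a C++ sketch with a scaling value like the one described; the power term and names are my assumptions:

```cpp
#include <algorithm>
#include <cmath>

// Rim-style edge glow: strongest where the surface normal turns away from
// the viewer. nDotV is dot(normal, view direction); `power` tightens the
// rim, and `scale` adjusts the overall amount of glow.
double edgeGlow(double nDotV, double power, double scale) {
    double rim = 1.0 - std::min(std::max(nDotV, 0.0), 1.0); // 0 facing camera, 1 at the silhouette
    return scale * std::pow(rim, power);
}
```

The glow value would then be added to the lit colour, so surfaces facing the camera are unchanged while silhouette edges brighten.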
Labels:
beckmann,
edge glow,
gaussian,
hlsl,
pixel shader,
rendermonkey,
sepia,
shaders,
specular highlight,
vertex shader