Sunday 9 March 2008

Sketch Application Progress

It has been about a week since I got the initial feature point identification system in place in my application, and since then I have been working on completing the process, refining the system and making general improvements all round.

The system now has a fully functional Average Based Filtering process in place, as well as Hybrid Fit Generation and refinement.

Average Based Filtering

Average Based Filtering involves taking the initial raw input data from a stroke and processing it to pick out key data points, representing intended features such as corners. By joining these feature points together we create a basic polyline representation of the sketched shape.

The incoming data consists of a list of points recorded each frame whilst the user has the stylus on the input device. The co-ordinates of each input point are stored as two integers: one for the x-position and one for the y-position. A time value is also recorded, as the number of milliseconds elapsed since the start of the stroke.
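As a rough illustration, the raw input can be modelled something like this (a minimal Python sketch; the names are mine and not what the application actually uses):

from dataclasses import dataclass

@dataclass
class StrokePoint:
    x: int  # x-position on the input device
    y: int  # y-position on the input device
    t: int  # milliseconds since the start of the stroke

# A stroke is simply the ordered list of points sampled each frame.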

From these three key pieces of information, curvature and speed data can be calculated for each point n. Curvature values are derived from the change in angle between the segments n-1 to n and n to n+1, and the classic formula "speed = distance / time" is used to build a list of speed values.
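Sketched in Python (the application itself is not written in Python, so treat this purely as an illustration of the idea), the per-point calculations come out roughly as:

import math

def curvature_and_speed(points):
    """Per-point change of direction and speed along a stroke of StrokePoints."""
    curvature = [0.0] * len(points)
    speed = [0.0] * len(points)
    for n in range(1, len(points) - 1):
        # Direction of the segments either side of point n.
        d1 = math.atan2(points[n].y - points[n - 1].y, points[n].x - points[n - 1].x)
        d2 = math.atan2(points[n + 1].y - points[n].y, points[n + 1].x - points[n].x)
        # Curvature: change in angle, wrapped into [-pi, pi] and taken as a magnitude.
        dtheta = d2 - d1
        curvature[n] = abs(math.atan2(math.sin(dtheta), math.cos(dtheta)))
        # Speed: distance travelled around point n divided by the time taken.
        dist = (math.hypot(points[n].x - points[n - 1].x, points[n].y - points[n - 1].y)
                + math.hypot(points[n + 1].x - points[n].x, points[n + 1].y - points[n].y))
        dt = points[n + 1].t - points[n - 1].t
        speed[n] = dist / dt if dt > 0 else 0.0
    return curvature, speed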

Means are calculated for each set of data, with any curvature regions above the curvature mean and any speed regions below 90% of the speed mean being stored as feature regions. The peak values within the intersection of the two region sets are then used as the initial feature points of the sketch.

The feature points are thus located at areas of sharp change in direction and low speed.
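Given those two lists, the selection step reduces to something along these lines (again only a sketch; I take the "peak" of each region to be its point of maximum curvature):

def initial_feature_points(curvature, speed):
    """Indices where curvature is above its mean and speed is below 90% of its mean."""
    curv_mean = sum(curvature) / len(curvature)
    speed_mean = sum(speed) / len(speed)
    candidates = [n for n in range(len(curvature))
                  if curvature[n] > curv_mean and speed[n] < 0.9 * speed_mean]

    # Group consecutive candidate indices into regions and keep the peak of each.
    features, region = [], []
    for n in candidates:
        if region and n != region[-1] + 1:
            features.append(max(region, key=lambda i: curvature[i]))
            region = []
        region.append(n)
    if region:
        features.append(max(region, key=lambda i: curvature[i]))
    return features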

Hybrid Fit Generation and Refinement

Once an initial set of key feature points has been located, the result can be improved by adding extra points to provide a closer fit to the original input. The best output is the closest match to the original stroke that contains as few points as possible, so getting the balance right is of key importance.

To start with, an initial Hybrid Fit is created, consisting of the output from the average based filtering. From this we can then add the points that will provide the greatest improvement.

To pick the best candidate point we create two possible new fits: one with the best unused speed point and one with the best unused curvature point. These are decided by sorting the speed and curvature data (a problem in its own right!).

With two possible candidate fits, a metric is required to identify the better of the two. This comes in the form of an error value created from the sum of the orthogonal distance squared of the original points to the corresponding line segment of each candidate fit. The fit with the lowest error is then chosen as the new Hybrid Fit and the other is discarded. This cycle is then repeated until the error value drops below a specified value, and thus the final Hybrid Fit is found.
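A simplified version of the whole refinement loop might look like the following (the fit is just a sorted list of point indices, assumed to include the stroke's first and last points, and the threshold is the empirical error value mentioned above; as before this is only a sketch of the approach):

import math

def point_to_segment_distance(p, a, b):
    """Orthogonal distance from stroke point p to the line segment a-b."""
    seg_len_sq = (b.x - a.x) ** 2 + (b.y - a.y) ** 2
    if seg_len_sq == 0:
        return math.hypot(p.x - a.x, p.y - a.y)
    t = ((p.x - a.x) * (b.x - a.x) + (p.y - a.y) * (b.y - a.y)) / seg_len_sq
    t = max(0.0, min(1.0, t))
    return math.hypot(p.x - (a.x + t * (b.x - a.x)), p.y - (a.y + t * (b.y - a.y)))

def fit_error(points, fit):
    """Sum of squared orthogonal distances from the stroke to its polyline fit."""
    total = 0.0
    for i in range(len(fit) - 1):
        a, b = points[fit[i]], points[fit[i + 1]]
        for p in points[fit[i]:fit[i + 1] + 1]:
            total += point_to_segment_distance(p, a, b) ** 2
    return total

def refine_hybrid_fit(points, curvature, speed, fit, threshold):
    """Add the better of the best unused curvature/speed point until the error is low enough."""
    fit = sorted(fit)
    while fit_error(points, fit) > threshold:
        unused = [n for n in range(len(points)) if n not in fit]
        if not unused:
            break
        by_curvature = sorted(fit + [max(unused, key=lambda n: curvature[n])])
        by_speed = sorted(fit + [min(unused, key=lambda n: speed[n])])
        fit = min((by_curvature, by_speed), key=lambda f: fit_error(points, f))
    return fit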

Discussion

The system's reliance upon empirical values, such as the means in the average based filtering and the error threshold in the hybrid fit refinement, means that optimum solutions are less likely from person to person, as each user's input style varies. A better system would not rely on these values, which require tweaking to achieve the best results for a specific person.

The next system that I will implement will use scale space to identify the key feature points of the input without the need for any empirical data.
