A Normalization Process to Standardize Handwriting Data Collected from Multiple Resources for Recognition

Wen Li Wang, Mei Huei Tang

Research output: Contribution to journal › Conference article › peer-review


Abstract

This paper presents a normalization process for handwriting recognition that can accommodate scribbling data of different resolutions collected from diverse devices, such as touch screens and tablets. The normalization algorithms aim to be position-, scale-, and rotation-invariant in order to standardize non-uniform handwriting from a wide range of users. The process starts by identifying the bounding box of a handwriting sample. The cropped region is centered at the origin and then scaled to a default size without introducing undesirable distortions. The image skew problem is handled by sampling the image at multiple angles through rotation transformations to produce extra learning artifacts. Because of the high volume of pixel data, down-sampling is employed by merging neighborhood pixels into blocks to improve learning and recognition speed. Finally, the 2D image is serialized into an array of blocks for learning and recognition. Empirical studies show that the proposed standardization approach yields a high degree of accuracy, as verified by a number of popular machine learning algorithms.
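The sketch below illustrates the kind of pipeline the abstract describes: bounding-box cropping, centering and uniform scaling onto a fixed canvas, rotation-based augmentation, block down-sampling, and serialization into a feature vector. It is a minimal illustration, not the authors' implementation; the function names, the 64x64 canvas, the 4x4 block size, and the rotation angles are assumptions chosen for the example, and only NumPy is used.

```python
# Minimal sketch of a normalization pipeline like the one described in the abstract.
# Assumptions (not from the paper): input is a 2D NumPy array of grayscale
# intensities, the default canvas is 64x64, and the down-sampling block is 4x4.
import numpy as np


def crop_to_bounds(img, threshold=0.1):
    """Identify the bounding box of inked pixels and crop the image to it."""
    rows = np.any(img > threshold, axis=1)
    cols = np.any(img > threshold, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0:r1 + 1, c0:c1 + 1]


def center_and_scale(img, size=64):
    """Center the cropped stroke on a fixed canvas, scaling uniformly so the
    shape is not distorted."""
    h, w = img.shape
    scale = (size - 2) / max(h, w)                     # uniform scale factor
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    # nearest-neighbour resize via index mapping (keeps the sketch dependency-free)
    ri = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    ci = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[np.ix_(ri, ci)]
    canvas = np.zeros((size, size), dtype=img.dtype)
    top, left = (size - new_h) // 2, (size - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas


def rotation_samples(img, angles=(-10, -5, 5, 10)):
    """Produce extra learning artifacts by rotating the image, addressing skew."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = []
    for deg in angles:
        t = np.deg2rad(deg)
        ys, xs = np.mgrid[0:h, 0:w]
        # inverse mapping: for each target pixel, look up its source coordinate
        sx = np.cos(t) * (xs - cx) + np.sin(t) * (ys - cy) + cx
        sy = -np.sin(t) * (xs - cx) + np.cos(t) * (ys - cy) + cy
        sx = sx.round().astype(int).clip(0, w - 1)
        sy = sy.round().astype(int).clip(0, h - 1)
        out.append(img[sy, sx])
    return out


def block_serialize(img, block=4):
    """Down-sample by averaging neighbourhood pixels into blocks, then flatten
    the 2D block grid into a 1D feature vector for a learning algorithm."""
    h, w = img.shape
    blocks = img[:h - h % block, :w - w % block].reshape(
        h // block, block, w // block, block).mean(axis=(1, 3))
    return blocks.ravel()


# Example usage: normalize one captured sample and build feature vectors for
# the original orientation plus its rotated variants.
raw = np.random.rand(120, 90)                          # stand-in for a captured image
norm = center_and_scale(crop_to_bounds(raw))
features = [block_serialize(v) for v in [norm] + rotation_samples(norm)]
```

Each resulting feature vector (256 values for a 64x64 canvas with 4x4 blocks) could then be fed to the kinds of machine learning algorithms the paper evaluates.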

Original language: English (US)
Pages (from-to): 402-409
Number of pages: 8
Journal: Procedia Computer Science
Volume: 61
State: Published - 2015
Event: Complex Adaptive Systems, 2015 - San Jose, United States
Duration: Nov 2, 2015 - Nov 4, 2015

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
