TY - JOUR
T1 - A Normalization Process to Standardize Handwriting Data Collected from Multiple Resources for Recognition
AU - Wang, Wen Li
AU - Tang, Mei Huei
N1 - Publisher Copyright:
© 2015 Published by Elsevier B.V.
PY - 2015
Y1 - 2015
N2 - This paper presents a normalization process for handwriting recognition that can accommodate scribbling data of different resolutions collected from diverse devices, such as touch screens and tablets. The normalization algorithms aim to be position, scale, and rotation invariant in order to standardize non-uniform handwriting from all sorts of users. The process starts by identifying the bounding box of a handwriting sample. The cropped region is centered at the origin and then scaled to a default size without introducing undesirable distortions. The image skew problem is handled by sampling the image at multiple angles through rotation transformations to produce extra learning artifacts. Due to the high volume of pixel data, down-sampling is employed by merging neighboring pixels into blocks to improve learning and recognition speed. Finally, the 2D image is serialized into an array of blocks for learning and recognition. Empirical studies show that the proposed standardization approach can yield a high degree of accuracy, as verified by a number of popular machine learning algorithms.
AB - This paper presents a normalization process for handwriting recognition that can accommodate scribbling data of different resolutions collected from diverse devices, such as touch screens and tablets. The normalization algorithms aim to be position, scale, and rotation invariant in order to standardize non-uniform handwriting from all sorts of users. The process starts by identifying the bounding box of a handwriting sample. The cropped region is centered at the origin and then scaled to a default size without introducing undesirable distortions. The image skew problem is handled by sampling the image at multiple angles through rotation transformations to produce extra learning artifacts. Due to the high volume of pixel data, down-sampling is employed by merging neighboring pixels into blocks to improve learning and recognition speed. Finally, the 2D image is serialized into an array of blocks for learning and recognition. Empirical studies show that the proposed standardization approach can yield a high degree of accuracy, as verified by a number of popular machine learning algorithms.
UR - http://www.scopus.com/inward/record.url?scp=84962703134&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84962703134&partnerID=8YFLogxK
U2 - 10.1016/j.procs.2015.09.171
DO - 10.1016/j.procs.2015.09.171
M3 - Conference article
AN - SCOPUS:84962703134
SN - 1877-0509
VL - 61
SP - 402
EP - 409
JO - Procedia Computer Science
JF - Procedia Computer Science
T2 - Complex Adaptive Systems, 2015
Y2 - 2 November 2015 through 4 November 2015
ER -