This paper presents an algorithm that uses discriminative sparse representations to segment tissues in optical images of the uterine cervix. Because of large variations in image appearance caused by changing illumination and specular reflection, the color and texture features of the different classes in optical images often overlap. Sparse representations transform these features into a higher-dimensional space under sparsity constraints, where they become more linearly separable. Unlike previous reconstructive sparse representations, the discriminative method considers positive and negative samples simultaneously, so the learned dictionaries are discriminative: each reconstructs its own class well but the other classes poorly. New data can be reconstructed from their sparse representations over the positive and/or negative dictionaries, and classification is achieved by comparing the resulting reconstruction errors. In our experiments, we applied the method to automatically segment the biomarker AcetoWhite (AW) regions in an archive of uterine cervix images. Compared with other general methods, including SVM, nearest neighbor, and reconstructive sparse representations, our approach achieved higher sensitivity and specificity.
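The classification rule described above (sparse-code a sample over each class-specific dictionary, then pick the class whose dictionary reconstructs it with smaller error) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: dictionary learning is assumed to have happened elsewhere, a simple greedy orthogonal matching pursuit stands in for the paper's sparse coder, and the names `omp`, `classify`, `D_pos`, and `D_neg` are hypothetical.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: sparse-code x over dictionary D
    (columns are unit-norm atoms) using at most k atoms."""
    residual = x.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit coefficients on the selected atoms by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a

def classify(x, D_pos, D_neg, k=3):
    """Label x as positive (1) or negative (0) by which class
    dictionary yields the smaller reconstruction error."""
    err_pos = np.linalg.norm(x - D_pos @ omp(D_pos, x, k))
    err_neg = np.linalg.norm(x - D_neg @ omp(D_neg, x, k))
    return 1 if err_pos < err_neg else 0
```

A sample generated mostly from one dictionary's atoms is then assigned to that dictionary's class, since the opposing dictionary cannot reconstruct it as accurately with the same sparsity budget.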