If citizen science is to be used in the context of environmental research, there needs to be a rigorous evaluation of humans’ cognitive ability to interpret and classify environmental features. This research, with a focus on land cover, explores the extent to which citizen science can be used to sense and measure the environment and to contribute to the creation and validation of environmental data. We examine methodological differences and humans’ ability to classify land cover given different information sources: a ground-based photograph of a landscape versus paired ground-based and aerial photographs of the same location. Participants are recruited through the online crowdsourcing platform Amazon Mechanical Turk. Results suggest that, across methods and in both the ground-based and the combined ground- and aerial-based experiments, participants show similar patterns of agreement and disagreement across land cover classes. Understanding these patterns is critical to forming a solid basis for using humans as sensors in earth observation.
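Patterns of agreement among multiple raters classifying the same items are commonly quantified with a chance-corrected statistic such as Fleiss' kappa. A minimal sketch of that computation (this is a standard measure, not necessarily the analysis used in this study; the function name and example counts are illustrative):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a table of category counts.

    counts[i][j] = number of raters who assigned item i to category j;
    every item must be rated by the same total number of raters.
    """
    N = len(counts)               # number of items (e.g. photo locations)
    n = sum(counts[0])            # raters per item
    k = len(counts[0])            # number of categories (land cover classes)
    # Mean observed per-item agreement P_bar
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts
    ) / N
    # Chance agreement P_e from marginal category proportions
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)


# Illustrative data: 2 items, 3 raters, 2 classes.
# Unanimous ratings give kappa = 1 (perfect agreement).
print(fleiss_kappa([[3, 0], [0, 3]]))  # -> 1.0
```

Values near 1 indicate strong agreement beyond chance; values near or below 0 indicate agreement no better than random labeling, which is one way the per-class agreement patterns described above can be made comparable across experiments.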