Machine labeling of image content as private or public is a notoriously difficult problem, with the usual image processing challenges compounded by the highly personal, subjective, and contextual nature of access control decision-making. In general, a user's privacy expectation for a given image stems from specific content within it, and the presence of sensitive content anywhere in the image is sufficient to warrant a private label. In this work, we extend the problem of determining a single privacy label for a given image to jointly inferring a privacy label and detecting the specific areas of sensitive content within a privately labeled image. We propose a stochastic spatial attribution model that combines sophisticated image features (derived from deep neural networks) computed over randomly selected image patches with image saliency quantification. We validate our detected private regions through extensive user-study experiments. This effort to achieve spatial attribution of private image content lays a foundation for warning mechanisms that can aid both social media sites and their users.
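To make the stochastic attribution idea concrete, the sketch below is a minimal illustration (not the authors' exact method): it samples random patches, scores each with a patch-level privacy classifier, and averages the scores into a per-pixel attribution map. The function `score_patch_privacy`, the patch size, and the number of samples are all hypothetical placeholders; in the actual model the patch scores would come from deep-neural-network features and be combined with saliency quantification.

```python
import numpy as np


def score_patch_privacy(patch):
    """Hypothetical stand-in for a patch-level privacy classifier.

    In the real system this would apply deep-neural-network-derived
    features and a learned classifier; here it returns a random score
    purely to keep the sketch self-contained and runnable.
    """
    return float(np.random.rand())


def stochastic_attribution(image, num_patches=500, patch_size=64, rng=None):
    """Accumulate patch-level privacy scores into a per-pixel map.

    Each pixel's attribution is the mean privacy score of all random
    patches that covered it; uncovered pixels stay at zero.
    """
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    score_sum = np.zeros((h, w), dtype=np.float64)
    coverage = np.zeros((h, w), dtype=np.float64)
    for _ in range(num_patches):
        # Randomly position a square patch fully inside the image.
        y = int(rng.integers(0, h - patch_size + 1))
        x = int(rng.integers(0, w - patch_size + 1))
        s = score_patch_privacy(image[y:y + patch_size, x:x + patch_size])
        score_sum[y:y + patch_size, x:x + patch_size] += s
        coverage[y:y + patch_size, x:x + patch_size] += 1.0
    # Average score per pixel, avoiding division by zero where no
    # patch landed.
    return np.divide(score_sum, coverage,
                     out=np.zeros_like(score_sum),
                     where=coverage > 0)


if __name__ == "__main__":
    img = np.zeros((256, 256, 3), dtype=np.uint8)  # dummy image
    attribution = stochastic_attribution(img)
    print(attribution.shape, attribution.max())
```

A thresholded version of such a map would mark candidate private regions; the paper's user studies evaluate whether the detected regions align with what people actually consider sensitive.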