Recommender systems have become popular in recent years, and ordinary users increasingly rely on such services when completing various daily tasks. The need to design and build explainable recommender interfaces is growing rapidly. Most explanation designs aim to reflect the underlying algorithms by which the recommendations are computed, and these approaches have been shown to be useful for achieving system transparency and trust. However, little is known about how to design explanation interfaces for casual (non-expert) users to achieve different explanatory goals. As a first step toward understanding the relevant user interface design factors, we conducted an international (across 13 countries) online survey of 14 active users of a social recommender system. This study captures user feedback in the field and frames it in terms of design principles and opportunities.