A number of important human-robot applications demand trust. Although a great deal of research has examined how and why people trust robots, far less work has explored how robots might decide whether to trust humans. Surface cues are perceptual clues that hint at a person's intent and are predictive of behavior. This paper proposes and evaluates a model that allows a robot to recognize trust-related surface cues and predict whether a person's behavior is deceitful in the context of a trust game. The model was tested both in simulation and on a physical robot playing an interactive card game. A human study was conducted in which subjects played the game against a simulation, the robot, and a human opponent. Video data were hand-coded by two coders, achieving an inter-rater reliability of 0.41 based on Levenshtein distance. The model matched or outperformed the human coders on 50% of the subjects. Overall, this paper contributes a method that may begin to allow robots to evaluate the surface cues generated by a person and determine whether that person should be trusted.