The end-user experience of viewing a video depends on the distortion; however, the delay experienced by the packets of the video flow is also important, since it affects the timeliness of the information they carry and the playback rate at the receiver. Unfortunately, these performance metrics conflict with each other in a wireless network. Packet losses can be minimized by avoiding interference entirely, separating transmissions in time or frequency; however, this reduces the rate at which transmissions occur and thereby increases delay. Relaxing the requirement for interference avoidance can cause packet losses and thus increase distortion, but it can decrease the delay for the packets that are delivered. In this paper, we investigate this trade-off between distortion and delay for video. We develop an analytical framework that accounts for characteristics of the network (e.g., interference, channel variations) and of the video content (motion level), taking as a basis a simple channel-access policy that provides flexibility in managing interference in the network. We validate our model via extensive simulations. Surprisingly, we find that the trade-off depends on the specific features of the video flow: it is better to trade high delay for low distortion with fast motion video, but not with slow motion video. Specifically, for an increase in PSNR (peak signal-to-noise ratio, a metric that increases as distortion decreases) from 20 dB to 25 dB, the penalty in terms of increased mean delay with fast motion video is 91 times that with slow motion video. Our simulation results further quantify these trade-offs in various scenarios.
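For concreteness, PSNR is computed from the mean squared error (MSE) between the original and reconstructed frames; the standard definition for 8-bit samples (the pixel depth is an assumption here, not stated above) is
\[
\mathrm{PSNR} = 10\,\log_{10}\!\frac{255^{2}}{\mathrm{MSE}},
\qquad
\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(x_{ij}-\hat{x}_{ij}\right)^{2},
\]
where $x_{ij}$ and $\hat{x}_{ij}$ are the pixel values of the original and reconstructed $M \times N$ frame. Under this definition, the 20 dB to 25 dB improvement discussed above corresponds to roughly a threefold ($10^{0.5} \approx 3.16$) reduction in MSE.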