Epileptic seizure detection using multi-channel scalp electroencephalogram (EEG) signals has gained increasing attention in clinical practice. Recently, researchers have attempted to employ deep learning techniques with channel selection to identify critical channels. However, existing models with such a hard selection procedure do not account for dynamic conditions, since the set of irrelevant channels varies significantly across situations. To address this issue, we propose ChannelAtt, an end-to-end multi-view deep learning model with a channel-aware attention mechanism that represents multi-channel EEG signals in a high-level space with interpretable meaning. ChannelAtt jointly learns both a multi-view representation and its contribution scores. We propose two attention mechanisms to learn attentional representations of multi-channel EEG signals in the time-frequency domain. Experimental results show that the proposed ChannelAtt model outperforms the baselines in detecting epileptic seizures, and a case study demonstrates that the learned attentional representations are meaningful.
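The soft, channel-aware attention described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's actual architecture: the linear scoring function and its parameters (`w`, `b`) are hypothetical stand-ins for whatever learned scorer ChannelAtt uses, and the channel count is illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(features, w, b=0.0):
    """Soft channel selection: score each channel, normalize the scores
    with a softmax, and return the attention-weighted representation.

    features : (n_channels, d) per-channel feature vectors
    w, b     : (d,), scalar -- hypothetical scoring parameters
    """
    scores = features @ w + b     # (n_channels,) one score per channel
    alpha = softmax(scores)       # contribution scores, sum to 1
    pooled = alpha @ features     # (d,) attentional representation
    return pooled, alpha

rng = np.random.default_rng(0)
feats = rng.standard_normal((23, 64))  # e.g. 23 scalp EEG channels
w = rng.standard_normal(64)
pooled, alpha = channel_attention(feats, w)
```

Unlike hard channel selection, every channel keeps a nonzero weight, so the weighting can shift smoothly as conditions change, and the learned scores in `alpha` are directly interpretable as per-channel contributions.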