Artificial intelligence (AI) systems increasingly appear in public-facing applications such as self-driving land vehicles, autonomous aircraft, medical systems, and financial systems. AI systems are expected to equal or surpass human performance, but given the consequences of failure or of erroneous or unfair decisions in these systems, how do we assure the public that they work as intended and will not cause harm? For example, how do we assure that an autonomous vehicle does not crash, or that an intelligent credit scoring system is not biased, even after the system has passed substantial acceptance testing prior to release? In this paper we discuss AI trust and assurance and related concepts, in particular assured autonomy for critical systems. We then discuss how to establish trust through AI assurance activities throughout the system development lifecycle. Finally, we introduce a 'trust but verify continuously' approach to AI assurance, which describes assured autonomy activities in a model-based systems development context and includes post-delivery activities for continuous assurance.