A challenging problem for software and systems engineers is providing assurance of operation for a system that is critical but must operate in situations that cannot easily be created in a testing lab. For example, a space system cannot be fully tested in all operational modes until it is launched, and a nuclear power plant cannot be tested under real critical temperature overload conditions. The situation is particularly challenging when seeking to provide assurance for critical AI systems (CAIS), where the underlying algorithms may be very difficult to verify under any conditions. In these cases, systems that share a similar underlying application, operational profile, user characteristics, and underlying AI algorithms may be suitable as testing proxies. For example, a robot vacuum may have significant enough operational and implementation similarities to act as a testing proxy for some aspects of an autonomous vehicle.

In this work we discuss the challenges of assured autonomy for CAIS and suggest a way forward using proxy systems. We describe a methodology for characterizing CAIS and matching them to their non-critical proxy equivalents.
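To make the matching idea concrete, the sketch below scores a candidate proxy against a CAIS along the four characterization dimensions named above (application, operational profile, user characteristics, AI algorithms). The use of feature sets and Jaccard similarity, and all feature names (e.g. `"slam"`, `"obstacle-avoidance"`), are illustrative assumptions of this sketch, not the methodology described in this work:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Characterizes a system along the four dimensions from the text.
    The set-of-features encoding is an assumption for illustration."""
    name: str
    application: set
    operational_profile: set
    user_characteristics: set
    ai_algorithms: set

def jaccard(a: set, b: set) -> float:
    # Overlap between two feature sets, 0.0 (disjoint) to 1.0 (identical).
    return len(a & b) / len(a | b) if (a | b) else 0.0

def proxy_similarity(cais: SystemProfile, candidate: SystemProfile) -> float:
    # Unweighted average of per-dimension overlaps; a real methodology
    # would likely weight dimensions by their relevance to the assurance goal.
    dims = ("application", "operational_profile",
            "user_characteristics", "ai_algorithms")
    return sum(jaccard(getattr(cais, d), getattr(candidate, d))
               for d in dims) / len(dims)

# Hypothetical profiles for the example pairing from the text:
av = SystemProfile(
    "autonomous vehicle",
    application={"navigation", "obstacle-avoidance", "path-planning"},
    operational_profile={"continuous", "real-time"},
    user_characteristics={"untrained-end-user"},
    ai_algorithms={"slam", "cnn-perception"},
)
vacuum = SystemProfile(
    "robot vacuum",
    application={"navigation", "obstacle-avoidance"},
    operational_profile={"continuous", "real-time"},
    user_characteristics={"untrained-end-user"},
    ai_algorithms={"slam"},
)

print(f"proxy similarity: {proxy_similarity(av, vacuum):.2f}")
```

A score near 1.0 would suggest the candidate covers many of the CAIS's characteristics and may be usable as a testing proxy for the overlapping aspects; the dimensions where overlap is low mark exactly the aspects for which the proxy gives no assurance.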