This paper describes a vision-based algorithm for autonomous landing on a moving target. The algorithm fuses the outputs of two computer vision techniques: Viola-Jones object detection using Haar-like features, and AprilTag detection, which segments an image based on local gradients. The Haar-like feature detector can detect arbitrary trained features, and we use it when the aircraft is at altitude and approaching the landing spot. The AprilTag, which allows precise determination of the target's position and attitude, is placed at the expected landing location and used for the final approach. The combination of these techniques lets us track the target through all landing phases, from altitude to touchdown. We fuse the detector outputs using the statistics of the measurements and multiple extended Kalman filters; this way, we can not only probabilistically select the correct target from multiple candidates but also estimate the target's velocity for formation flight and landing. The algorithm is demonstrated in an image-in-the-loop simulation and in flight tests with a Yamaha RMAX helicopter and a WAM-V boat.
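The multi-filter fusion idea can be illustrated with a minimal sketch: one filter per candidate detection, where each measurement update also yields a Gaussian innovation likelihood that scores how well that candidate matches its track. This is only an illustration under assumed models, not the paper's implementation: it uses a linear constant-velocity motion model with 2-D position measurements (so the "EKF" reduces to a plain Kalman filter), and all noise parameters (`Q`, `R`) are invented for the example.

```python
import numpy as np

class TargetEKF:
    """Constant-velocity filter over state [x, y, vx, vy] with 2-D position
    measurements. Illustrative only: the paper's actual motion/measurement
    models and noise tuning are not specified here."""

    def __init__(self, x0, y0, dt=0.1):
        self.x = np.array([x0, y0, 0.0, 0.0])       # state estimate
        self.P = np.eye(4)                          # state covariance
        self.F = np.eye(4)                          # constant-velocity dynamics
        self.F[0, 2] = dt
        self.F[1, 3] = dt
        self.H = np.zeros((2, 4))                   # measure position only
        self.H[0, 0] = 1.0
        self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.01                   # assumed process noise
        self.R = np.eye(2) * 0.25                   # assumed measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        """Fuse a position measurement; return its innovation log-likelihood."""
        y = z - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R     # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        # Gaussian log-likelihood of the innovation, used to score candidates
        return -0.5 * (y @ np.linalg.solve(S, y)
                       + np.log(np.linalg.det(2.0 * np.pi * S)))

def best_candidate(filters, detections):
    """Step every candidate's filter and return the index of the candidate
    whose detection is most consistent with its predicted track."""
    scores = []
    for ekf, z in zip(filters, detections):
        ekf.predict()
        scores.append(ekf.update(np.asarray(z, dtype=float)))
    return int(np.argmax(scores))
```

A candidate whose detections follow a coherent trajectory accumulates high innovation likelihood, while a spurious detection far from its predicted position scores low; the filter state simultaneously provides the velocity estimate used for matching the target's motion before touchdown.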