Increasing wire delays and growing memory capacities motivate new ways of designing L2 and L3 caches. NUCA (Non-Uniform Cache Architecture) has received considerable attention in recent years. While most prior NUCA-based efforts have focused on data placement, replacement, and migration-related issues, this paper studies the problem of data search. Specifically, it proposes and experimentally evaluates several data search schemes for NUCA L2 caches that exhibit different performance-power trade-offs. These schemes are based on predicting the next ring (set of banks) to be accessed in a NUCA L2 and checking the banks in that ring first. In this work, we present the details of these prediction schemes and compare them to two alternative approaches: searching all rings in parallel, and searching rings sequentially, starting with the ring closest to the CPU.
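The prediction-based search idea can be illustrated with a minimal sketch. Everything here is an illustrative assumption rather than the paper's actual design: the class and method names are invented, rings are modeled as simple address-to-data maps, and a last-ring predictor stands in for the prediction schemes the paper evaluates.

```python
# Illustrative sketch (not the paper's implementation): a NUCA L2 whose rings
# are probed in an order chosen by a simple per-address last-ring predictor.
class RingPredictedNUCA:
    def __init__(self, rings):
        self.rings = rings        # list of {addr: data} dicts; ring 0 is nearest the CPU
        self.predictor = {}       # last-ring predictor: addr -> ring index of last hit
        self.ring_probes = 0      # total rings probed, a rough proxy for search energy

    def lookup(self, addr):
        # Probe the predicted ring first (default: the ring nearest the CPU),
        # then fall back to checking the remaining rings.
        predicted = self.predictor.get(addr, 0)
        order = [predicted] + [r for r in range(len(self.rings)) if r != predicted]
        for ring in order:
            self.ring_probes += 1
            if addr in self.rings[ring]:
                self.predictor[addr] = ring  # train the predictor on the hit
                return self.rings[ring][addr]
        return None                          # miss in the L2
```

A correct prediction lets a hit in a distant ring cost a single ring probe instead of a full sequential sweep, which is the performance-power trade-off the abstract refers to: parallel search minimizes latency but probes every ring, sequential search probes few rings only when data is near the CPU, and prediction aims for few probes regardless of where the data resides.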