It is well known that it is vital to account for trend breaks when testing for a unit root. In practice, uncertainty exists over whether or not a trend break is present and, if it is, where it is located. Harris et al. (2009) and Carrion-i-Silvestre et al. (2009) propose procedures which account for both of these forms of uncertainty. Each uses what amounts to a pre-test for a trend break, accounting for a trend break (with the associated break fraction estimated from the data) in the unit root procedure only where the pre-test signals a break. Assuming the break magnitude is fixed (i.e. independent of the sample size), these authors show that their methods achieve near asymptotically efficient unit root inference in both trend break and no trend break environments. These asymptotic results are, however, somewhat at odds with the finite sample simulations reported in both papers. These simulations reveal pronounced "valleys" in the finite sample power functions of the tests, when viewed as functions of the break magnitude: power is initially high for very small breaks, then decreases as the break magnitude increases, before increasing again. Here we show that treating the break magnitude as local to zero (in a Pitman drift sense) allows the asymptotic analysis to approximate this finite sample effect very closely, thereby providing useful analytical insight into the observed phenomenon. In response to this problem we propose two practical solutions: one based on the use of a with-break unit root test with adaptive critical values, the other on a union of rejections principle taken across with-break and without-break unit root tests. The former is shown to eliminate the power valleys, but at the expense of power when no break is present; the latter considerably mitigates the valleys while retaining much of the power gain available when no break exists.
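For concreteness, a stylized sketch of a union of rejections decision rule is given below; the notation and the particular form of the rule are generic assumptions for illustration, not details taken from the cited procedures. Writing $t_0$ for a unit root statistic computed without a trend break and $t_1(\hat{\lambda})$ for one computed allowing a break at the estimated break fraction $\hat{\lambda}$, with individual (negative) critical values $cv_0$ and $cv_1$, the union rejects whenever either statistic is sufficiently extreme:
\[
\text{Reject } H_0 \ (\text{unit root}) \quad \text{if} \quad t_0 < \psi\, cv_0 \ \ \text{or} \ \ t_1(\hat{\lambda}) < \psi\, cv_1 ,
\]
where $\psi \geq 1$ is a common scaling constant, calibrated (for example, by simulation) so that the two-test union attains the desired overall asymptotic size despite combining two rejection events.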