For optimisation problems with multiple objectives and large search spaces, it may not be feasible to find all optimal solutions. Even when it is, a decision maker (DM) is typically interested in only a small number of these solutions. Incorporating a DM's solution preferences into the optimisation process reduces the effective search space by focusing only on regions of interest. Allowing the DM to interact with the algorithm and alter their preferences during a single optimisation run facilitates learning and the correction of mistakes, and improves the search for desired solutions. In this paper, we apply an interactive framework to four leading multi-objective evolutionary algorithms (MOEAs) that use reference points to model DM preferences. Furthermore, we propose a new performance metric that measures an algorithm's responsiveness to preference changes, and we evaluate the four algorithms using it. Interactive algorithms must respond to changes in DM preferences, and we show that the new metric differentiates between the four algorithms on the ZDT suite of test problems. Finally, we identify the characteristics of these methods that determine their level of response to change.