Aside from hand-coded bots, most videogame agents rely on experience in some form: some improve over time by adapting to real experience, while others plan ahead by leveraging simulated experience. In one sense, human play resembles simulation, in that players often visualise possible trajectories before committing to a real course of action. However, most existing simulation-based agents require precise knowledge of the game's code, and their search depth is limited by the fine-grained time scales at which videogames run. Human players, on the other hand, appear to visualise plans in terms of abstract "skills", such as running and jumping. Since these skills are learned from real experience, human play also resembles learning-based approaches. Motivated by these observations, we propose an approach that bridges the gap between skills, which are uncertain in outcome and duration, and traditional simulation-based planning, which assumes discrete, fixed-duration actions. We apply this approach to maze-like navigation problems in Infinite Mario. After an initial skill acquisition phase, our agent navigates new levels without further training, and scales better with goal distance than a fine-grained simulation method that exploits exact knowledge of the game's physics.