The advancing capability of computational systems to learn from datasets and adapt autonomously has opened new opportunities for designers and artists in their creative practice. This paper examines YomeciLand x Bunjil Place (Nguyen 2019), a playable sound-responsive installation that uses audio recognition to capture, recognise and categorise human sounds as input to evolve a virtual environment of ‘artificial’ lifeforms. The potential of artificial intelligence in creative practice has recently drawn considerable interest; however, our understanding of its application in sound practice is only emerging. The project is analysed in relation to three key themes: artificial intelligence for sound recognition, the ‘sounding body’ as play, and digital audiovisual composition as performance. In doing so, the research presents a framework for how artificial intelligence can aid sound recognition in a sound-responsive installation, with YomeciLand x Bunjil Place shared as a case study that demonstrates this in practice.