In this paper, we propose a novel query generation task that we refer to as the Strong Natural Language Query (SNLQ) problem. The key idea we explore is how best to jointly learn document summarization and ranker effectiveness in order to generate human-readable queries that capture the information need conveyed by a document, and that can also be used for re-finding tasks and query rewriting. Our problem is closely related to two well-known retrieval problems, known-item finding and strong query generation, with the additional objective of maximizing query informativeness. To achieve this goal, we combine state-of-the-art abstractive summarization techniques with reinforcement learning. We empirically compare our new approaches against several closely related baselines on the MS-MARCO collection, and show that our approach achieves a substantially better trade-off between effectiveness and human readability than has been reported previously.