In everyday life, we increasingly use our voice to interact with assistants for heterogeneous requests (weather, booking, shopping, etc.). We present our experiments in applying Natural Language Processing (NLP) to the querying of astronomical data services. It is of course easy to prototype something. But is it realistic to propose it, in the near future, as a new mode of interaction, an alternative to traditional forms exposing parameter fields, check boxes, etc.? To answer this question, we must first answer a more fundamental one: is it possible to satisfy professional astronomers' needs this way? We have not started from scratch, as we already have useful tools and resources (a name resolver, authors in Simbad, missions and wavelengths in VizieR, etc.), and the Virtual Observatory (VO) brings standards such as TAP and UCDs, implemented in the CDS services. Interoperability, enabled by the VO, is an indispensable backbone. We explain how it helps us query our services in natural language, and how, in a further step, it could allow the whole VO to be queried in the same way. We present our pragmatic approach, based on a chatbot interface (involving Machine Learning), to bridge the gap between well-formed and imprecise or ambiguous queries. Comments (necessarily enthusiastic) are welcome. Collaborations too.
Link to PDF (may not be available yet): P1-12.pdf
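To make the idea concrete, here is a minimal, hypothetical sketch of the kind of translation involved: turning a constrained English request into an ADQL query for a TAP service. The regular expression, the stub name resolver, and the table and column names below are illustrative assumptions for this sketch, not the CDS implementation (a real pipeline would resolve the target name through a Sesame-like service and use trained NLP models rather than a single pattern).

```python
import re

# Stub standing in for a Sesame-like name resolver (hypothetical values
# hard-coded here so the sketch is self-contained).
RESOLVER = {"M31": (10.6847, 41.2687)}  # name -> (ra, dec) in degrees

def nl_to_adql(request):
    """Translate one narrow class of English requests into ADQL.

    Only requests of the form "objects within <r> arcmin of <name>"
    are understood; everything else raises ValueError.
    """
    m = re.search(r"objects within ([\d.]+) arcmin of (\w+)", request, re.I)
    if m is None:
        raise ValueError("request not understood: " + request)
    radius = float(m.group(1)) / 60.0   # arcmin -> degrees
    ra, dec = RESOLVER[m.group(2)]      # the name-resolution step
    # Table/column names ("basic", main_id, ra, dec) are assumptions
    # for illustration; a real service would supply its own schema.
    return (
        "SELECT main_id, ra, dec FROM basic "
        "WHERE CONTAINS(POINT('ICRS', ra, dec), "
        f"CIRCLE('ICRS', {ra}, {dec}, {radius})) = 1"
    )

print(nl_to_adql("objects within 5 arcmin of M31"))
```

The resulting ADQL string could then be submitted to any VO-compliant TAP endpoint, which is where the interoperability mentioned above pays off: the same generated query works across services.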