One consequence of situated face-to-face conversation is the co-observability of participants' respiratory movements and sounds. We explore whether this information can be exploited in predicting incipient speech activity. Using a methodology called stochastic turn-taking modeling, we compare the performance of a model trained on speech activity alone to one additionally trained on static and dynamic lung volume features. The methodology permits automatic discovery of temporal dependencies across participants and feature types. Our experiments show that respiratory information substantially lowers cross-entropy rates, and that this improvement generalizes to unseen data.
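To make the evaluation concrete, the following is a minimal sketch, not the authors' implementation, of the general setup the abstract describes: predict each frame of a binary speech-activity track from a history window of both speech-activity and lung-volume features, and score held-out data by cross-entropy in bits per frame. The use of logistic regression over lagged features is an illustrative stand-in for the paper's stochastic turn-taking model, and all names, window lengths, and the train/test split are assumptions.

```python
# Hedged sketch of lagged-feature prediction of incipient speech activity,
# scored by cross-entropy. Illustrative only; not the paper's model.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def lagged_design(features, target, history=10):
    """Stack `history` past frames of all feature channels to predict
    the target channel one frame ahead.

    features: (T, D) per-frame features -- speech activity alone, or
              speech activity plus static/dynamic lung-volume channels.
    target:   (T,) binary speech-activity track to predict.
    """
    T = len(target)
    X = np.stack([features[t - history:t].ravel()
                  for t in range(history, T)])
    y = target[history:]
    return X, y

def cross_entropy_bits(features, target, history=10):
    """Train on the first half of the session; report held-out
    cross-entropy in bits per frame on the second half."""
    X, y = lagged_design(features, target, history)
    split = len(y) // 2
    model = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
    p = model.predict_proba(X[split:])[:, 1]
    return log_loss(y[split:], p) / np.log(2)  # convert nats to bits

# Hypothetical usage, comparing the two feature sets the abstract names:
# ce_speech = cross_entropy_bits(speech_only_feats, target)
# ce_resp   = cross_entropy_bits(speech_plus_lung_feats, target)
# A lower ce_resp would indicate that respiratory features carry
# predictive information about incipient speech activity.
```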