Digital twins are virtual replicas of physical counterparts that enable real-time monitoring and decision-making. Integrating forecasting methods can significantly augment the capabilities of digital twins, enabling them to perform advanced predictive tasks. However, because digital twins typically involve a human in the loop, explainability becomes crucial for understanding how and why a forecast was made. To integrate explainability methods, forecasting methods, and digital twins effectively, the relations between these components must be defined in a structured manner. In this work, we address this issue by providing a meta-model for integrating explainable forecasting methods with digital twins. We evaluate our meta-model in the context of a smart-building digital twin with multiple forecasting and explainability methods. The evaluation demonstrates the inherent trade-off between providing explanations and generating accurate forecasts in this context.