Calibration of simulation models and hyperparameter optimisation of machine learning and deep learning methods are computationally demanding optimisation problems, for which many state-of-the-art optimisation methods have been adopted and applied in various studies. However, their performance is put to the test when the parameter optimisation problem exhibits a high-dimensional search space and expensive evaluations of candidate settings. Population-based (evolutionary) methods cope well with high dimensionality but are not suited to expensive evaluation functions. In contrast, Bayesian optimisation reduces the number of evaluations required to locate the global optimum; however, its computational demand rises significantly as the number of parameters increases. Bayesian optimisation with random forests overcomes several issues of its state-of-the-art counterparts. Still, owing to its non-parametric output, it cannot fully exploit the available acquisition functions. We propose a semi-parametric approach that overcomes this limitation of random forests by identifying a mixture of parametric components in their outcomes. The proposed approach is evaluated empirically on four optimisation benchmark functions of varying dimensionality, confirming its improved guidance of the search process. Moreover, its running time scales linearly with the dimensionality of the search space.
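To make the setting concrete, the following is a minimal illustrative sketch (not the paper's semi-parametric method) of Bayesian optimisation with a random forest surrogate. The forest's per-tree predictions form an empirical, non-parametric predictive distribution; here it is summarised by its mean and standard deviation so that a Gaussian-based Expected Improvement acquisition can be applied — precisely the kind of parametric approximation whose limitations the abstract discusses. All function names (`bayes_opt_rf`, `rf_mean_std`) and settings (forest size, candidate sampling) are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestRegressor

def expected_improvement(mu, sigma, best):
    """EI for minimisation, assuming a Gaussian predictive distribution."""
    sigma = np.maximum(sigma, 1e-9)  # avoid division by zero
    z = (best - mu) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

def rf_mean_std(forest, X):
    """Summarise the forest's empirical per-tree predictive distribution
    by its mean and standard deviation (a parametric approximation)."""
    preds = np.stack([tree.predict(X) for tree in forest.estimators_])
    return preds.mean(axis=0), preds.std(axis=0)

def bayes_opt_rf(objective, bounds, n_init=10, n_iter=20, seed=0):
    """Bayesian optimisation loop with a random forest surrogate.

    bounds: list of (low, high) pairs, one per dimension.
    """
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    # initial design: uniform random samples of the search space
    X = rng.uniform(lo, hi, size=(n_init, dim))
    y = np.array([objective(x) for x in X])
    for _ in range(n_iter):
        forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
        # maximise EI over a random candidate set (a simple inner optimiser)
        cand = rng.uniform(lo, hi, size=(500, dim))
        mu, sd = rf_mean_std(forest, cand)
        x_next = cand[np.argmax(expected_improvement(mu, sd, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    return X[np.argmin(y)], y.min()

# minimise the 2-D sphere function on [-5, 5]^2
x_best, f_best = bayes_opt_rf(lambda x: float(np.sum(x**2)), [(-5, 5)] * 2)
```

Because random sampling of candidates scales trivially with dimensionality, each iteration's cost grows only linearly with the number of parameters, in line with the scaling behaviour the abstract reports.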