Patients with Parkinson’s disease (PD) have distinctive voice patterns, often perceived as expressing sad emotion. While this characteristic of Parkinsonian speech has been supported from the perspective of listeners, with both PD and healthy control (HC) subjects performing the same speaking tasks, it has never been explored through a machine learning modelling approach. Our work provides an objective evaluation of this characteristic of PD speech by building a transfer learning system to assess how the PD pathology affects the perception of sadness. To do so, we introduce a Mixture-of-Experts (MoE) architecture for speech emotion recognition designed to be transferable across datasets. First, relying on publicly available emotional speech corpora, we train the MoE model; we then use it to quantify perceived sadness in previously unseen PD and matched HC speech recordings. To build our models (experts), we extracted spectral features from the voiced parts of speech and trained a gradient boosting decision trees model on each corpus to predict happiness vs. sadness. MoE predictions are formed by weighting each expert’s prediction according to the distance between the new sample and the expert-specific training samples. The MoE approach systematically infers more negative emotional characteristics in PD speech than in HC speech. Crucially, these judgments are related to disease severity and the severity of speech impairment in the PD patients: the greater the impairment, the more likely the speech is to be judged as sad. Our findings pave the way towards a better understanding of the characteristics of PD speech and show how publicly available datasets can be used to train models that provide valuable insights into clinical data.
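The distance-weighted combination of experts described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: each expert is a stand-in prediction function paired with its corpus-specific training features, and distance is measured from the new sample to each expert's training-set centroid (one plausible choice; the exact distance measure is not specified in the abstract).

```python
import numpy as np

def moe_predict(sample, experts):
    """Combine corpus-specific experts into one happiness-vs-sadness score.

    experts: list of (predict_fn, training_features) pairs, where
    training_features is an (n_samples, n_features) array for that corpus.
    Returns a weighted score in [0, 1] (1 = happy, 0 = sad).
    """
    # Distance from the new sample to each expert's training centroid.
    dists = np.array([np.linalg.norm(sample - X.mean(axis=0))
                      for _, X in experts])
    # Closer experts receive larger weights (softmax over negative distances).
    w = np.exp(-dists)
    w /= w.sum()
    preds = np.array([f(sample) for f, _ in experts])
    return float(np.dot(w, preds))

# Toy usage with two hypothetical experts trained on different corpora;
# the constant predictions stand in for gradient boosting models.
rng = np.random.default_rng(0)
experts = [
    (lambda s: 0.8, rng.normal(0.0, 1.0, size=(50, 4))),  # corpus A expert
    (lambda s: 0.2, rng.normal(5.0, 1.0, size=(50, 4))),  # corpus B expert
]
sample = np.zeros(4)  # feature vector lying near corpus A's centroid
score = moe_predict(sample, experts)
```

Because the sample lies close to the first expert's training distribution, that expert dominates the weighted prediction, which is the transfer mechanism the abstract relies on when scoring out-of-corpus PD and HC recordings.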