In this paper, we propose a method for pain recognition by fusing physiological signals (heart rate, respiration, blood pressure, and electrodermal activity) and facial action units. We provide experimental validation that the fusion of these signals has a positive impact on the accuracy of pain recognition, compared to using only one modality (i.e., physiological signals or action units). These experiments are conducted on subjects from the BP4D+ multimodal emotion corpus, and include same- and cross-gender experiments. We also investigate the correlation between the two modalities to gain further insight into applications of pain recognition. Results suggest the need for larger and more varied datasets that include physiological signals and action units coded for all facial frames.