The limited size of existing datasets and signal variability have hindered EEG-based emotion recognition. In this paper, we present a solution that simultaneously addresses both problems. Generative Adversarial Networks (GANs) have recently shown notable success in data augmentation (DA). We therefore leverage a GAN-based DA technique to enhance the robustness of our proposed emotion recognition model by synthetically increasing the size of our datasets. Moreover, we employ contrastive learning to improve the quality of the representations learned from EEG signals and to mitigate the adverse impact of inter-subject and intra-subject variability in signals corresponding to the same stimuli or emotions. We do so by maximizing the similarity between the representations of such EEG signals. We perform EEG-based emotion classification using a Graph Neural Network (GNN), which learns the relationships among the extracted EEG features. We compare the proposed model with several recent state-of-the-art emotion recognition models on the DEAP and MAHNOB-HCI datasets. The experimental results demonstrate that the proposed model outperforms previous models, achieving 64.84% and 66.40% emotion classification accuracy on the test set of the DEAP dataset, and 66.98% and 71.69% on the test set of the MAHNOB-HCI dataset, for the valence and arousal emotional dimensions, respectively. We perform an ablation study to demonstrate how contrastive learning, the GAN, and the GNN each contribute to the proposed solution's performance.