Interpersonal Neural Synchrony and Subjective Vitality During Social Interaction: An EEG Hyperscanning Study

Authors

DOI:

https://doi.org/10.17010/ijcs/2026/v11/i1/175961

Keywords:

EEG emotion recognition, Random Forest, feature selection, affective computing, brain–computer interface.
Publication Chronology: Paper Submission Date : January 2, 2026 ; Paper sent back for Revision : January 9, 2026 ; Paper Acceptance Date : January 12, 2026 ; Paper Published Online : February 5, 2026.

Abstract

High-accuracy EEG emotion decoding is an important component of affect-aware HCI, mental health monitoring, and adaptive multimedia. However, many recent methods rely on deep learning models trained on raw signals, which demand large amounts of data and intensive computation, and often lack interpretability. In this paper, we revisit a traditional but carefully crafted machine learning pipeline and demonstrate that, with appropriate feature design and selection, it can achieve state-of-the-art results on multi-class EEG emotion classification. We apply our approach to a real-world EEG dataset of 2,132 trials with 2,548 hand-designed features per trial, including statistical, covariance, eigenvalue-based, entropy, correlation, and FFT features derived from two experimental conditions. After z-score normalization, a Random Forest classifier is first trained on the entire feature set to obtain importance weights, and the top 150 features are selected as a compact affective feature space. A second Random Forest classifier is then trained on this reduced feature space and evaluated using a stratified hold-out test set and 5-fold cross-validation. The proposed approach reaches a test accuracy of 96.72%, with high and well-balanced precision, recall, and F1-score across all three emotion classes (negative, neutral, positive). The 5-fold cross-validation accuracy of 95.87% ± 0.28% confirms the robustness of the approach. The feature importance profiles and PCA plots also reveal which statistical and spectral features carry the most emotion-related information. In summary, the results show that an interpretable, feature-driven Random Forest approach can be at least competitive with, if not superior to, more complex deep learning architectures for EEG-based emotion recognition, while remaining computationally efficient and easy to deploy.
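The two-stage pipeline described in the abstract (z-score normalization, importance-based feature selection with a first Random Forest, then retraining on the reduced space with a stratified hold-out split and 5-fold cross-validation) can be sketched with scikit-learn. The snippet below is an illustrative sketch only, not the authors' code: it uses synthetic data in place of the real EEG feature matrix, scales the dimensions down (20 selected features instead of 150), and all variable names are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for the 2,132-trial x 2,548-feature matrix (scaled down).
X = rng.normal(size=(300, 200)).astype(float)
y = rng.integers(0, 3, size=300)   # 0 = negative, 1 = neutral, 2 = positive
X[:, :5] += y[:, None] * 1.5       # make a few features class-informative

# Step 1: z-score normalization of all features.
X = StandardScaler().fit_transform(X)

# Step 2: fit a Random Forest on the full feature set to rank importances,
# then keep the top-k features (the paper uses k = 150).
rf_full = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top = np.argsort(rf_full.feature_importances_)[::-1][:20]
X_sel = X[:, top]

# Step 3: retrain on the reduced space; evaluate with a stratified
# hold-out split and 5-fold cross-validation.
X_tr, X_te, y_tr, y_te = train_test_split(
    X_sel, y, test_size=0.2, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"hold-out accuracy: {rf.score(X_te, y_te):.3f}")

cv = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0), X_sel, y, cv=5)
print(f"5-fold CV accuracy: {cv.mean():.3f} +/- {cv.std():.3f}")
```

Note that selecting features on the full dataset before cross-validating, as sketched here, can leak information into the CV estimate; a stricter protocol would nest the selection step inside each fold.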


Published

2026-02-05

How to Cite

Kobra, M. J., Majeed, M. R., & Rahman, M. O. (2026). Interpersonal Neural Synchrony and Subjective Vitality During Social Interaction: An EEG Hyperscanning Study. Indian Journal of Computer Science, 11(1), 8–23. https://doi.org/10.17010/ijcs/2026/v11/i1/175961
