Affective computing in the context of music therapy: a systematic review

Authors

DOI:

https://doi.org/10.33448/rsd-v10i15.22844

Keywords:

Affective computing; Emotion recognition; Acoustic stimulation; Recommender system; Music therapy.

Abstract

Music therapy is an effective tool to slow the progression of dementia, since interaction with music can evoke emotions that stimulate the brain areas responsible for memory. The therapy is most successful when the therapist provides stimuli that are appropriate and personalized for each patient. Because such personalization is often difficult, Artificial Intelligence (AI) methods can assist in this task. This article presents a systematic literature review of the field of affective computing in the context of music therapy. In particular, we aim to assess AI methods for automatic emotion recognition applied to Human-Machine Musical Interfaces (HMMI). To perform the review, we ran an automatic search in five of the main scientific databases in the fields of intelligent computing, engineering, and medicine. We searched for all papers published between 2016 and 2020 whose metadata, title, or abstract contained the terms defined in the search string. The systematic review protocol resulted in the inclusion of 144 works out of the 290 publications returned by the search. Through this review of the state of the art, it was possible to outline the current challenges in automatic emotion recognition. It was also possible to identify the potential of automatic emotion recognition for building non-invasive assistive solutions based on human-machine musical interfaces, as well as the artificial intelligence techniques currently in use for emotion recognition from multimodal data. Thus, machine learning for emotion recognition from different data sources can be an important approach to optimizing the clinical goals to be achieved through music therapy.
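The screening step described above — keeping only publications from 2016 to 2020 whose metadata, title, or abstract contain the search-string terms — can be sketched as a simple filter. The record structure and the term list below are illustrative assumptions for the sketch, not the review's actual search string or protocol:

```python
# Minimal sketch of a systematic-review screening filter, assuming a
# hypothetical record format and a hypothetical three-term search string.

SEARCH_TERMS = ["affective computing", "emotion recognition", "music therapy"]

def matches_search_string(record: dict) -> bool:
    """True if any search term appears in the record's title, abstract, or keywords."""
    text = " ".join([record.get("title", ""),
                     record.get("abstract", ""),
                     record.get("keywords", "")]).lower()
    return any(term in text for term in SEARCH_TERMS)

def screen(records: list, start_year: int = 2016, end_year: int = 2020) -> list:
    """Apply the year-range and search-string inclusion criteria."""
    return [r for r in records
            if start_year <= r.get("year", 0) <= end_year
            and matches_search_string(r)]

if __name__ == "__main__":
    sample = [
        {"title": "Emotion Recognition from EEG", "abstract": "", "year": 2018, "keywords": ""},
        {"title": "Crop Yield Prediction", "abstract": "", "year": 2019, "keywords": ""},
        {"title": "Music Therapy and Dementia", "abstract": "", "year": 2014, "keywords": ""},
    ]
    # Only the first record satisfies both the year range and the term match.
    print(len(screen(sample)))  # prints 1
```

In a real protocol this automated filter would only produce the candidate set (the 290 publications here); the final inclusion of 144 works still depends on manual screening against the full eligibility criteria.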

Author Biographies

Maíra Araújo de Santana, Universidade de Pernambuco

Maíra Araújo de Santana is a PhD candidate in Computer Engineering at the Universidade de Pernambuco (UPE) and holds bachelor's and master's degrees in Biomedical Engineering from the Universidade Federal de Pernambuco (UFPE). Her research covers Affective Computing, pattern recognition for early diagnosis of breast cancer, and Applied Neuroscience. She is fluent in Portuguese (native language) and English, with basic knowledge of Spanish, German, and French. She completed a teaching internship in the Digital Signal Processing course and an internship in Clinical Engineering at the Hospital das Clínicas de Pernambuco. Through the Brazilian Federal Government/CAPES Science without Borders exchange program, she studied for one academic year at the University of Alabama at Birmingham (UAB), AL, and worked as a researcher at the Carl E. Ravin Advanced Imaging Laboratories (RAI Labs) at Duke University, NC, deepening her knowledge of image processing and gaining laboratory and scientific-production experience. She also took part in an undergraduate research project at the UFPE Biophysics Laboratory, where she worked on developing chitosan-based matrices for use in protein electrophoresis.

Clarisse Lins de Lima, Universidade de Pernambuco

Clarisse Lins de Lima holds bachelor's and master's degrees in Biomedical Engineering from the Universidade Federal de Pernambuco (UFPE) and is currently a PhD candidate in Computer Engineering at the Universidade de Pernambuco (UPE). Her research interests include digital epidemiology, epidemic prediction, and artificial intelligence applied to healthcare.

Arianne Sarmento Torcate, Universidade de Pernambuco

Arianne Sarmento Torcate holds a teaching degree in Computing from the Universidade de Pernambuco - Campus Garanhuns (UPE). She was a fellow of the Institutional Teaching Initiation Scholarship Program (PIBID) and worked in People Management at the junior enterprise TEC JR (Empresa de Tecnologia, Educação e Consultoria Júnior), actively strengthening the junior enterprise movement (MEJ) within the university. As a research student in the Institutional Scientific Initiation Scholarship Program (PIBIC), she surveyed innovative project management techniques applied to software development. She is currently a master's student in Computer Engineering at the Universidade de Pernambuco (POLI/UPE) and a member of the Biomedical Computing research group at the Universidade Federal de Pernambuco (UFPE), where she conducts research in Affective Computing. She also has experience in developing serious games for the stimulation of cognitive skills and is interested in research spanning Educational Technologies, Data Mining, and Artificial Intelligence.

Flávio Secco Fonseca, Universidade de Pernambuco

Flávio Secco Fonseca holds a degree in Mechanical Engineering from the Universidade de Pernambuco (2017). He is currently a teacher at the Centro de Educação para o Ensino Profissionalizante (CEPEP) and a PhD student in Computer Engineering at the Escola Politécnica da Universidade de Pernambuco. He has experience in Mechanical Engineering and Computer Engineering, with emphasis on Mechatronics, Machine Learning, and serious game development.

Wellington Pinheiro dos Santos, Universidade Federal de Pernambuco

Wellington Pinheiro dos Santos holds a degree in Electrical and Electronics Engineering (2001) and a master's degree in Electrical Engineering (2003) from the Universidade Federal de Pernambuco, and a PhD in Electrical Engineering from the Universidade Federal de Campina Grande (2009). He is currently an Associate Professor (full-time) in the Department of Biomedical Engineering at the Centro de Tecnologia e Geociências - Escola de Engenharia de Pernambuco, Universidade Federal de Pernambuco, teaching in the undergraduate program in Biomedical Engineering and in the Graduate Program in Biomedical Engineering, of which he was one of the founders (2011). He founded the Núcleo de Tecnologias Sociais e Bioengenharia at the Universidade Federal de Pernambuco, NETBio-UFPE (2012). Since 2009 he has also been a member of the Graduate Program in Computer Engineering at the Escola Politécnica de Pernambuco, Universidade de Pernambuco. He has experience in Computer Science, with emphasis on Graphics Processing, working mainly on the following topics: digital image processing, pattern recognition, computer vision, evolutionary computation, numerical optimization methods, computational intelligence, image formation techniques, virtual reality, game design, and applications of Computing and Engineering in Medicine and Biology. He is a member of the Brazilian Society of Biomedical Engineering (SBEB), the Brazilian Society of Computational Intelligence (SBIC, formerly SBRN), and the International Federation of Medical and Biological Engineering (IFMBE).

Referências

Agres, K. R., Schaefer, R. S., Volk, A., Hooren, S. v., Holzapfel, A., Bella, S. D., ... Magee, W. L. (2021). Music, Computing, and Health: A Roadmap for the Current and Future Roles of Music Technology for Health Care and Well-Being . Music & Science, 4, 1–32. doi: 10.1177/2059204321997709

Al-Qazzaz, N. K., Sabir, M. K., Ali, S. H. B. M., Ahmad, S. A., & Grammer, K. (2020). Electroencephalogram Profiles for Emotion Identification over the Brain Regions Using Spectral, Entropy and Temporal Biomarkers. Sensors(Basel), 20. doi:10.3390/s20010059

Amali, D. N., Barakbah, A. R., Anom Besari, A. R., & Agata, D. (2018). Semantic Video Recommendation System Based on Video Viewers Impression from Emotion Detection. In 2018 international electronics symposium on knowledge creation and intelligent computing (ies-kcic) (p. 176-183). doi: 10.1109/KCIC.2018.8628592

Aranha, R. V., Silva, L. S., Chaim, M. L., & Nunes, F. d. L. d. S. (2017). Using Affective Computing to Automatically Adapt Serious Games for Rehabilitation. In 2017 ieee 30th international symposium on computer-based medical systems (cbms) (p. 55-60). doi: 10.1109/CBMS.2017.89

Arroyo-Palacios, J., & Slater, M. (2016). Dancing with Physio: A Mobile Game with Physiologically Aware Virtual Humans. IEEE Transactions on Affective Computing, 7(4), 326-336. doi: 10.1109/TAFFC.2015.2472013

Aydın, S. (2020). Deep Learning Classification of Neuro-Emotional Phase Domain Complexity Levels Induced by Affective Video Film Clips. IEEE Journal of Biomedical and Health Informatics, 24(6), 1695-1702. doi: 10.1109/JBHI.2019.2959843

Bakhtiyari, K., Taghavi, M., Taghavi, M., & Bentahar, J. (2019). Ambiance Signal Processing: A Study on Collaborative Affective Computing. In 2019 5th international conference on web research (icwr) (p. 35-40). doi: 10.1109/ICWR.2019.8765251

Bankar, C., Bhide, A., Kulkarni, A., Ghube, C., & Bedekar, M. (2018). Driving Control Using Emotion Analysis Via EEG. In 2018 ieee punecon (p. 1-7). doi: 10.1109/PUNECON.2018.8745412

Barnstaple, R., Protzak, J., DeSouza, J. F., & Gramann, K. (2020). Mobile brain/body imaging in dance: A dynamic transdisciplinary field for applied research. European Journal of Neuroscience. doi: https://doi.org/10.1111/ejn.14866

Bermúdez i Badia, S., Quintero, L. V., Cameirão, M. S., Chirico, A., Triberti, S., Cipresso, P., & Gaggioli, A. (2019). Toward emotionally adaptive virtual reality for mental health applications. IEEE Journal of Biomedical and Health Informatics, 23(5), 1877-1887. doi: 10.1109/JBHI.2018.2878846

Bertazone, T. M. A., Ducatti, M., Camargo, H. P. M., Batista, J. M. F., Kusumota, L., & Marques, S. (2016). Ações multidisciplinares/interdisciplinares no cuidado ao idoso com Doença de Alzheimer. Rev Rene.

Bhargava, A., O’Shaughnessy, K., & Mann, S. (2020). A novel approach to eeg neurofeedback via reinforcement learning. In 2020 ieee sensors (p. 1-4). doi: 10.1109/SENSORS47125.2020.9278871

Bo, H., Ma, L., & Li, H. (2017). Music-evoked emotion classification using EEG correlation-based information. In 2017 39th annual international conference of the ieee engineering in medicine and biology society (embc) (p. 3348-3351). doi: 10.1109/EMBC.2017.8037573

Bortz, B., Jaimovich, J., & Knapp, R. B. (2019). Cross-Cultural Comparisons of Affect and Electrodermal Measures While Listening to Music. In 2019 8th international conference on affective computing and intelligent interaction (acii) (p. 55-61). doi: 10.1109/ACII.2019.8925476

Boumpa, E., Charalampou, I., Gkogkidis, A., Ntaliani, A., Kokkinou, E., & Kakarountas, A. (2018). Assistive System for Elders Suffering of Dementia. In 2018 ieee 8th international conference on consumer electronics - berlin (icce-berlin) (p. 1-4). doi: 10.1109/ICCE-Berlin.2018.8576216

Brosch, T., Scherer, K. R., Grandjean, D. M., & Sander, D. (2013). The impact of emotion on perception, attention, memory, and decision-making. Swiss Medical Weekly, 143. doi: 10.4414/smw.2013.13786

Bulagang, A. F., Mountstephens, J., & Teo, J. (2021). Multiclass emotion prediction using heart rate and virtual reality stimuli. Journal of Big Data, 12. doi: 10.1186/s40537-020-00401-x

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. 1st Conference on Fairness, Accountability and Transparency, 77-91.

Caetano, L. A. O., Silva, F. S., & Silveira, C. A. B. (2017). Alzheimer, sintomas e Grupos: Uma Revisão Integrativa. Revista do NESME.

Cambria, E. (2016). Affective computing and sentiment analysis. IEEE Intelligent Systems, 31(2), 102–107.

Cameirão, M. S., Pereira, F., & i. Badia, S. B. (2017). Virtual reality with customized positive stimuli in a cognitive-motor rehabilitation task. In 2017 international conference on virtual rehabilitation (icvr) (p. 1-7). doi: 10.1109/ICVR.2017.8007543

Cavallo, F., Rovini, E., Dolciotti, C., Radi, L., Ragione, R. D., Bongioanni, P., & Fiorini, L. (2020). Physiological response to Vibro-Acoustic stimulation in healthy subjects: a preliminary study*. In 2020 42nd annual international conference of the ieee engineering in medicine biology society (embc) (p. 5921-5924). doi: 10.1109/EMBC44109.2020.9175848

Chang, H.-Y., Huang, S.-C., & Wu, J.-H. (2017). A personalized music recommendation system based on electroencephalography feedback. Multimedia Tools and Applications, 76, 19523—19542. doi: 10.1007/s11042-015-3202-4

Chapaneri, S., & Jayaswal, D. (2018). Deep Gaussian Processes for Estimating Music Mood. In 2018 15th ieee india council international conference (indicon) (p. 1-5). doi: 10.1109/INDICON45594.2018.8987036

Chatziagapi, A., Paraskevopoulos, G., Sgouropoulos, D., Pantazopoulos, G., Nikandrou, M., Giannakopoulos, T., ... Narayanan, S. (2019). Data Augmentation using GANs for Speech Emotion Recognition. INTERSPEECH 2019.

Chavan, D. R., Kumbhar, M. S., & Chavan, R. R. (2016). The human stress recognition of brain, using music therapy. In 2016 international conference on computation of power, energy information and commuincation (iccpeic) (p. 200-203). doi: 10.1109/ICCPEIC.2016.7557197

Chen, J., Pan, F., Zhong, P., He, T., Qi, L., Lu, J., ... Zheng, Y. (2020). An Automatic Method to Develop Music With Music Segment and Long Short Term Memory for Tinnitus Music Therapy. IEEE Access, 8, 141860-141871. doi: 10.1109/ ACCESS.2020.3013339

Chen, O. T., Chang, S., Ma, Y., Zhang, Y. C., & Lee, Y. L. (2020). Time capsule gift with affective awareness of event memories via near field communication. In 2020 ieee/sice international symposium on system integration (sii) (p. 585-589). doi: 10.1109/SII46433.2020.9026183

Chennafi, M., Khan, M. A., Li, G., Lian, Y., & Wang, G. (2018). Study of music effect on mental stress relief based on heart rate variability. In 2018 ieee asia pacific conference on circuits and systems (apccas) (p. 131-134). doi: 10.1109/ APCCAS.2018.8605674

Chin, Y.-H., Wang, J.-C., Wang, J.-C., & Yang, Y.-H. (2018). Predicting the Probability Density Function of Music Emotion Using Emotion Space Mapping. IEEE Transactions on Affective Computing, 9(4), 541-549. doi: 10.1109/TAFFC.2016.2628794

Colombo, R., Raglio, A., Panigazzi, M., Mazzone, A., Bazzini, G., Imarisio, C., ... Imbriani, M. (2019). The sonichand protocol for rehabilitation of hand motor function: A validation and feasibility study. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 27(4), 664-672. doi: 10.1109/TNSRE.2019.2905076

Crespo, A. B., Idrovo, G. G., Rodrigues, N., & Pereira, A. (2016). A virtual reality UAV simulation with body area networks to promote the elders life quality. In 2016 1st international conference on technology and innovation in sports, health and wellbeing (tishw) (p. 1-7). doi: 10.1109/TISHW.2016.7847780

Daly, I., Williams, D., Kirke, A., Miranda, E. R., & Nasuto, S. J. (2019). Electroencephalography reflects the activity of subcortical brain regions during approach-withdrawal behaviour while listening to music. Scientific Reports, 9. doi: 10.1038/ s41598-019-45105-2

Daly, I., Williams, D., Kirke, A., Weaver, J., Malik, A., Hwang, F., ... Nasuto, S. J. (2016). Affective brain-computer music interfacing. Journal of Neural Engineering, 13. doi: 10.1088/1741-2560/13/4/046022

Daly, I., Williams, D., Malik, A., Weaver, J., Kirke, A., Hwang, F., ... Nasuto, S. J. (2020). Personalised, Multi-Modal, Affective State Detection for Hybrid Brain-Computer Music Interfacing. IEEE Transactions on Affective Computing, 11(1), 111-124. doi: 10.1109/TAFFC.2018.2801811

Dantcheva, A., Bilinski, P., Nguyen, H. T., Broutart, J.-C., & Bremond, F. (2017). Expression recognition for severely demented patients in music reminiscence-therapy. In 2017 25th european signal processing conference (eusipco) (p. 783-787). doi: 10.23919/EUSIPCO.2017.8081314

Dechenaud, M., Laidig, D., Seel, T., Gilbert, H. B., & Kuznetsov, N. A. (2019). Development of Adapted Guitar to Improve Motor Function After Stroke: Feasibility Study in Young Adults. In Annual international conference of the iee engineering in medicine and biology society (pp. 5488–5493). doi: 10.1109/EMBC.2019.8856651

Delmastro, F., Martino, F. D., & Dolciotti, C. (2018). Physiological Impact of Vibro-Acoustic Therapy on Stress and Emotions through Wearable Sensors. In 2018 ieee international conference on pervasive computing and communications workshops (percom workshops) (p. 621-626). doi: 10.1109/PERCOMW.2018.8480170

Desai, B., Chen, B., Sirocchi, S., & McMullen, K. A. (2018). Mindtrack: Using brain-computer interface to translate emotions into music. In 2018 international conference on digital arts, media and technology (icdamt) (p. 33-37). doi: 10.1109/ ICDAMT.2018.8376491

Deshmukh, R. S., Jagtap, V., & Paygude, S. (2017). Facial emotion recognition system through machine learning approach. In 2017 international conference on intelligent computing and control systems (iciccs) (p. 272-277). doi: 10.1109/ICCONS.2017 .8250725

Souza, M. C., da Rocha Alves, A. B., de Lima, D. S., de Oliveira, L. R. F. A., da Silva, J. K. B., de Oliveira Ribeiro, E. C., ... de Oliveira, e. a., I. D. (2017). The treatment of Alzheimer in the context of musicotherapy. International Archives of Medicine, 10.

Dorneles, S. O., Barbosa, D. N. F., & Barbosa, J. L. V. (2020). Sensibilidade ao contexto na identificação de estados afetivos aplicados à educação: um mapeamento sistemático. Revista Novas Tecnologias na Educação - RENOTE, 18.

Dragulin, S., Constantin, F. A., & Rucsanda, I. (2019). The Use of Music Therapy for Adults’ Anxiety Relief. In˘ 2019 5th experiment international conference (exp.at’19) (p. 490-493). doi: 10.1109/EXPAT.2019.8876558

Dutta, E., Bothra, A., Chaspari, T., Ioerger, T., & Mortazavi, B. J. (2020). Reinforcement learning using eeg signals for therapeutic use of music in emotion management. In 2020 42nd annual international conference of the ieee engineering in medicine biology society (embc) (p. 5553-5556). doi: 10.1109/EMBC44109.2020.9175586

EC. (2020). The 2021 Ageing Report: Underlying Assumptions and Projection Methodologies [Computer software manual]. Retrieved from https://ec.europa.eu/info/sites/default/files/economy-finance/ip142_en .pdf (Last accessed: 2021 Jul. 05)

Ehrlich, S., Guan, C., & Cheng, G. (2017). A closed-loop brain-computer music interface for continuous affective interaction. In 2017 international conference on orange technologies (icot) (p. 176-179). doi: 10.1109/ICOT.2017.8336116

Ehrlich, S. K., Agres, K. R., Guan, C., & Cheng, G. (2019, 03). A closed-loop, music-based brain-computer interface for emotion mediation. PLOS ONE, 14(3), 1-24. Retrieved from https://doi.org/10.1371/journal.pone.0213516 doi:10.1371/journal.pone.0213516

Ekman, P., & Friesen, V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 124-–129.

English, B. A., & Howard, A. (2017a). The effects of adjusting task difficulty on learning motor and cognitive aspects of a multitasking task. In 2017 ieee symposium series on computational intelligence (ssci) (p. 1-7). doi: 10.1109/SSCI.2017.8285396

English, B. A., & Howard, A. M. (2017b). The effects of musical cues on motor learning using a robotic wrist rehabilitation system — a healthy pilot study. In 2017 ieee workshop on advanced robotics and its social impacts (arso) (p. 1-6). doi: 10.1109/ARSO.2017.8025208

Fang, R., Ye, S., Huangfu, J., & Calimag, D. P. (2017). Music therapy is a potential intervention for cognition of Alzheimer’s disease: a mini-review. Translational Neurodegeneration, 6(1), 2.

Fernandes, C. M., Migotina, D., & Rosa, A. C. (2021). Brain’s Night Symphony (BraiNSy): A Methodology for EEG Sonification. IEEE Transactions on Affective Computing, 12(1), 103-112. doi: 10.1109/TAFFC.2018.2850008

Fonteles, J. H., Serpa, Y. R., Barbosa, R. G., Rodrigues, M. A. F., & Alves, M. S. P. L. (2018). Gesture-controlled interactive musical game to practice hand therapy exercises and learn rhythm and melodic structures. In 2018 ieee 6th international conference on serious games and applications for health (segah) (p. 1-8). doi: 10.1109/SeGAH.2018.8401367

Gallego, M. G., & Garcia, J. G. (2017). Music therapy and Alzheimer’s disease: Cognitive, psychological, and behavioural effects. Neurología (English Edition), 32(5), 300–308. doi: 10.1155/2014/908915

Geethanjali, B., Adalarasu, K., Jagannath, M., & Guhan Seshadri, N. P. (2019). Music-Induced Brain Functional Connectivity Using EEG Sensors: A Study on Indian Music. IEEE Sensors Journal, 19(4), 1499-1507. doi: 10.1109/JSEN.2018.2873402

Geraets, C. N., Stouwe, E. C. d., Pot-Kolder, R., & Veling, W. (2021). Advances in immersive virtual reality interventions for mental disorders: A new reality? Current Opinion in Psychology, 41, 40–45. doi: 10.1016/j.copsyc.2021.02.004

Gilda, S., Zafar, H., Soni, C., & Waghurdekar, K. (2017). Smart music player integrating facial emotion recognition and music mood recommendation. In 2017 international conference on wireless communications, signal processing and networking (wispnet) (p. 154-158). doi: 10.1109/WiSPNET.2017.8299738

González, E. J. S., & McMullen, K. (2020). The design of an algorithmic modal music platform for eliciting and detecting emotion. In 2020 8th international winter conference on brain-computer interface (bci) (p. 1-3). doi: 10.1109/BCI48061 .2020.9061664

Goudbeek, M., Goldman, J. P., & Scherer, K. R. (2009). Emotion dimensions and formant position. INTERSPEECH 2009Annual Conference of the International Speech Communication Association.

Goyal, A., Kumar, N., Guha, T., & Narayanan, S. S. (2016). A multimodal mixture-of-experts model for dynamic emotion prediction in movies. In 2016 ieee international conference on acoustics, speech and signal processing (icassp) (p. 28222826). doi: 10.1109/ICASSP.2016.7472192

Greer, T., Mundnich, K., Sachs, M., & Narayanan, S. (2020). The Role of Annotation Fusion Methods in the Study of HumanReported Emotion Experience During Music Listening. In Icassp 2020 - 2020 ieee international conference on acoustics, speech and signal processing (icassp) (p. 776-780). doi: 10.1109/ICASSP40776.2020.9054329

Gu, F., Niu, J., Das, S. K., & He, Z. (2018). RunnerPal: A Runner Monitoring and Advisory System Based on Smart Devices. IEEE Transactions on Services Computing, 11(2), 262-276. doi: 10.1109/TSC.2016.2626372

Hamdan, A. C. (2008). Avaliação neuropsicológica na doença de alzheimer e no comprometimento cognitivo leve. Psicol. argum, 183–192.

Han, W., & Chan, C. (2006). An efficient MFCC extraction method in speech recognition. International Symposium on Circuits and Systems.

Han, Y., Nishio, Y., Yi-Hsiang, M., Oshiyama, C., Lin, J.-Y., Takanishi, A., & Cosentino, S. (2018). A humanrobot interface to improve facial expression recognition in subjects with autism spectrum disorder. In 2018 9th international conference on awareness science and technology (icast) (p. 179-184). doi: 10.1109/ICAwST.2018.8517228

Hasan, S. M. S., Siddiquee, M. R., Marquez, J. S., & Bai, O. (2020). Enhancement of Movement Intention Detection Using EEG Signals Responsive to Emotional Music Stimulus. IEEE Transactions on Affective Computing, 1-1. doi: 10.1109/ TAFFC.2020.3025004

Herremans, D., & Chew, E. (2019). MorpheuS: Generating Structured Music with Constrained Patterns and Tension. IEEE Transactions on Affective Computing, 10(4), 510-523. doi: 10.1109/TAFFC.2017.2737984

Hossan, A., & Chowdhury, A. M. M. (2016). Real time EEG based automatic brainwave regulation by music. In 2016 5th international conference on informatics, electronics and vision (iciev) (p. 780-784). doi: 10.1109/ICIEV.2016.7760107

Hsu, Y.-L., Wang, J.-S., Chiang, W.-C., & Hung, C.-H. (2020). Automatic ECG-Based Emotion Recognition in Music Listening. IEEE Transactions on Affective Computing, 11(1), 85-99. doi: 10.1109/TAFFC.2017.2781732

Huang, W., & Benjamin Knapp, R. (2017). An exploratory study of population differences based on massive database of physiological responses to music. In 2017 seventh international conference on affective computing and intelligent interaction (acii) (p. 524-530). doi: 10.1109/ACII.2017.8273649

Ibrahim, I. A., Ting, H.-N., & Moghavvemi, M. (2019). Formulation of a Novel Classification Indices for Classification of Human Hearing Abilities According to Cortical Auditory Event Potential signals. Arabian Journal for Science and Engineering, 44, 7133–7147. doi: 10.1007/s13369-019-03835-5

Ingale, A. B., & Chaudhari, D. S. (2012). Speech Emotion Recognition. International Journal of Soft Computing and Engineering, 2.

Izard, C. E. (1977). Human Emotions. New York: Springer.

Jagiello, R., Pomper, U., Yoneya, M., Zhao, S., & Chait, M. (2019). Rapid Brain Responses to Familiar vs. Unfamiliar Music – an EEG and Pupillometry study. Scientifics Reports, 9. doi: 10.1038/s41598-019-51759-9

Jeong, M., & Ko, B. C. (2018). Driver’s Facial Expression Recognition in Real-Time for Safe Driving. Sensors. doi: 10.3390/ s18124270

Kanehira, R., Ito, Y., Suzuki, M., & Hideo, F. (2018). Enhanced relaxation effect of music therapy with VR. In 2018 14th international conference on natural computation, fuzzy systems and knowledge discovery (icnc-fskd) (p. 1374-1378). doi: 10.1109/FSKD.2018.8686951

Kikuchi, T., Nagata, T., Sato, C., Abe, I., Inoue, A., Kugimiya, S., ... Hatabe, S. (2018). Sensibility Assessment For User Interface and Training Program an Upper-Limb Rehabilitation Robot, D-SEMUL. In Annual international conference of the ieee engineering in medicine and biology society (pp. 3028–3031). doi: 10.1109/EMBC.2018.8513074

King, J., Jones, K., Goldberg, E., Rollins, M., MacNamee, K., Moffit, C., ... Amaro, e. a., J. (2019). Increased functional connectivity after listening to favored music in adults with Alzheimer dementia. The Journal of Prevention of Alzheimer’s Disease, 6(1), 56–62.

Kirana, M. C., Lubis, M. Z., & Amalia, E. D. (2018). The effect of sound manipulation to know response rate in autism children using fft. In 2018 international conference on applied engineering (icae) (p. 1-5). doi: 10.1109/INCAE.2018.8579418

Kobayashi, A., & Fujishiro, I. (2016). An Affective Video Generation System Supporting Impromptu Musical Performance. In 2016 international conference on cyberworlds (cw) (p. 17-24). doi: 10.1109/CW.2016.11

Konno, M., Suzuki, K., & Sakamoto, M. (2018). Sentence Generation System Using Affective Image. In 2018 joint 10th international conference on soft computing and intelligent systems (scis) and 19th international symposium on advanced intelligent systems (isis) (p. 678-682). doi: 10.1109/SCIS-ISIS.2018.00114

Krüger, C., Kojic, T., Meier, L., Möller, S., & Voigt-Antons, J.-N. (2020). Development and validation of pictographic scales´ for rapid assessment of affective states in virtual reality. In 2020 twelfth international conference on quality of multimedia experience (qomex) (p. 1-6). doi: 10.1109/QoMEX48832.2020.9123100

Kumar, N., Guha, T., Huang, C., Vaz, C., & Narayanan, S. S. (2016). Novel affective features for multiscale prediction of emotion in music. In 2016 ieee 18th international workshop on multimedia signal processing (mmsp) (p. 1-5). doi: 10.1109/MMSP.2016.7813377

Kyong, J.-S., Noh, T.-S., Park, M., Oh, S.-H., Lee, J., & Suh, M.-W. (2019, 06). Phantom Perception of Sound and the Abnormal Cortical Inhibition System: An Electroencephalography (EEG) Study. Annals of Otology, Rhinology & Laryngology, 128, 84S-95S. doi: 10.1177/0003489419837990

Le, D., & Provost, E. M. (2013). Emotion recognition from spontaneous speech using hidden markov models with deep belief networks. IEEE Workshop on Automatic Speech Recognition and Understanding, 216—221.

Leslie, G., Ghandeharioum, A., Zhou, D., & Picard, R. W. (2019). Engineering Music to Slow Breathing and Invite Relaxd Physiology. In 2019 8th international conference on affective computing and intelligent interaction (acii) (Vol. abs/1907.08844, pp. 276–282).

Li, Q., Wang, X., Wang, S., Xie, Y., Xie, Y., & Li, S. (2020). More flexible integration of functional systems after musical training in young adults. IEEE Transactions on Neural Systems an Heabilitation Engineering, 28.

Li, X., Zhao, Z., Song, D., Zhang, Y., Pan, J., Wu, L., ... Wang, D. (2020). Latent factor decoding of multi-channel eeg for emotion recognition through autoencoder-like neural networks. Frontiers in Neuroscience, 14, 87. Retrieved from https:// www.frontiersin.org/article/10.3389/fnins.2020.00087 doi: 10.3389/fnins.2020.00087

Liao, C.-Y., Chen, R.-C., Tai, S.-K., & Hendry. (2017). Using single point brain wave instrument to explore and verification of music frequency. In 2017 international conference on innovative and creative information technology (icitech) (p. 1-6). doi: 10.1109/INNOCIT.2017.8319142

Lin, X., Mahmud, S., Jones, E., Shaker, A., Miskinis, A., Kanan, S., & Kim, J.-H. (2020). Virtual Reality-Based Musical Therapy for Mental Health Management. In 2020 10th annual computing and communication workshop and conference (ccwc) (p. 0948-0952). doi: 10.1109/CCWC47524.2020.9031157

LingHu, Y.-f., & Shu, H. (2018). ARM-Based Feedback System For Athletes’ Psychological Adjustment. In 2018 17th international symposium on distributed computing and applications for business engineering and science (dcabes) (p. 72-75). doi: 10.1109/DCABES.2018.00028

Liu, W., Zhang, C., Wang, X., Xu, J., Chang, Y., Ristaniemi, T., & Cong, F. (2020). Functional connectivity of major depression disorder using ongoing eeg during music perception. Clinical Neurophysiology, 131(10), 2413-2422. doi: https://doi.org/ 10.1016/j.clinph.2020.06.031

Lopes, P., Liapis, A., & Yannakakis, G. N. (2019). Modelling Affect for Horror Soundscapes. IEEE Transactions on Affective Computing, 10(2), 209-222. doi: 10.1109/TAFFC.2017.2695460

Lourinho, B. B. A. S., & Ramos, W. F. (2019). O envelhecimento, o cuidado com o idoso e a doença de alzheimer. enciclopédia biosfera, 723.

Lubetzky, A. V., Kelly, J., Wang, Z., TaghaviDilamani, M., Gospodarek, M., Fu, G., ... Hujsak, B. (2019). Head Mounted Display Application for Contextual Sensory Integration Training: Design, Implementation, Challenges and Patient Outcomes. In 2019 international conference on virtual rehabilitation (icvr) (p. 1-7). doi: 10.1109/ICVR46560.2019.8994437

Lui, J. H., Samani, H., & Tien, K.-Y. (2017). An affective mood booster robot based on emotional processing unit. In 2017 international automatic control conference (cacs) (p. 1-6). doi: 10.1109/CACS.2017.8284239

Lui, S., & Grunberg, D. (2017). Using skin conductance to evaluate the effect of music silence to relieve and intensify arousal. In 2017 international conference on orange technologies (icot) (p. 91-94). doi: 10.1109/ICOT.2017.8336096

Lv, C., Li, S., & Huang, L. (2018). Music Emotions Recognition Based on Feature Analysis. In 2018 11th international congress on image and signal processing, biomedical engineering and informatics (cisp-bmei) (p. 1-5). doi: 10.1109/CISP-BMEI.2018.8633223

Lyu, M.-J., & Yuan, S.-M. (2020). Cloud-Based Smart Dog Music Therapy and Pneumonia Detection System for Reducing the Difficulty of Caring for Patients With Dementia. IEEE Access, 8, 20977-20990. doi: 10.1109/ACCESS.2020.2969482

Malheiro, R., Panda, R., Gomes, P., & Paiva, R. P. (2018). Emotionally-relevant features for classification and regression of music lyrics. IEEE Transactions on Affective Computing, 9(2), 240-254. doi: 10.1109/TAFFC.2016.2598569

Marimpis, A. D., Dimitriadis, S. I., & Goebel, R. (2020). A multiplex connectivity map of valence-arousal emotional model. IEEE Access, 8, 170928-170938. doi: 10.1109/ACCESS.2020.3025370

Marosi-Holczberger, E., Prieto-Corona, D. M. B., Yáñez-Téllez, M. G., Rodríguez-Camacho, M. A., Rodríguez-Camacho, H., & Guerrero-Juarez, V. (2013). Quantitative Spectral EEG Assessments During Affective States Evoked By The Presentation Of The International Affective Pictures. Journal of Behavior, Health & Social Issues, 14.

Matsumoto, K., & Sasayama, M. (2018). Lyric Emotion Estimation Using Word Embedding Learned from Lyric Corpus. In 2018 ieee 4th international conference on computer and communications (iccc) (p. 2295-2301). doi: 10.1109/CompComm.2018.8780811

Mehta, D., Siddiqui, M. F. H., & Javaid, A. (2019, 04). Recognition of emotion intensities using machine learning algorithms: A comparative study. Sensors, 19. doi: 10.3390/s19081897

Meska, M. H. G., Mano, L. Y., Silva, J. P., Junior, G. A. P., & Mazzo, A. (2020). Reconhecimento de emoções para ambiente clínico simulado com uso de odores desagradáveis: estudo quase experimental. Revista Latino-Americana de Enfermagem, 8.

Mideska, K. G., Singh, A., Hoogenboom, N., Hellriegel, H., Krause, H., Schnitzler, A., ... Muthuraman, M. (2016). Comparison of imaging modalities and source-localization algorithms in locating the induced activity during deep brain stimulation of the STN. In Annual international conference of the ieee engineering in medicine and biology society (pp. 105–108). doi: 10.1109/EMBC.2016.7590651

Mitrpanont, J., Phandhu-fung, J., Klubdee, N., Ratanalaor, S., Pratiphakorn, P., Damrongvanakul, K., ... Mitrpanont, T. (2017). iCare-Stress: Caring system for stress. In 2017 6th ict international student project conference (ict-ispc) (p. 1-4). doi: 10.1109/ICT-ISPC.2017.8075319

Mo, S., & Niu, J. (2019). A novel method based on ompgw method for feature extraction in automatic music mood classification. IEEE Transactions on Affective Computing, 10(3), 313-324. doi: 10.1109/TAFFC.2017.2724515

Mulla, F., Eya, E., Ibrahim, E., Alhaddad, A., Qahwaji, R., & Abd-Alhameed, R. (2017). Neurological assessment of music therapy on the brain using Emotiv Epoc. In 2017 internet technologies and applications (ita) (p. 259-263). doi: 10.1109/ITECHA.2017.8101950

Murad, D., Ye, F., Barone, M., & Wang, Y. (2017). Motion initiated music ensemble with sensors for motor rehabilitation. In 2017 international conference on orange technologies (icot) (p. 87-90). doi: 10.1109/ICOT.2017.8336095

Nalepa, G. J., Kutt, K., & Bobek, S. (2019). Mobile platform for affective context-aware systems. Future Generation Computer Systems, 92, 490-503. Retrieved from https://www.sciencedirect.com/science/article/pii/S0167739X17312207 doi: 10.1016/j.future.2018.02.033

Navarathna, R., Carr, P., Lucey, P., & Matthews, I. (2019). Estimating audience engagement to predict movie ratings. IEEE Transactions on Affective Computing, 10(1), 48-59. doi: 10.1109/TAFFC.2017.2723011

Navarro-Tuch, S. A., Solís-Torres, R., Bustamante-Bello, R., López-Aguilar, A. A., González-Archundia, G., & Hernández-González, O. (2018). Variation of facial expression produced by acoustic stimuli. In 2018 international conference on mechatronics, electronics and automotive engineering (icmeae) (p. 60-64). doi: 10.1109/ICMEAE.2018.00018

Nawa, N. E., Callan, D. E., Mokhtari, P., Ando, H., & Iversen, J. (2018). Decoding music-induced experienced emotions using functional magnetic resonance imaging - Preliminary results. In 2018 international joint conference on neural networks (ijcnn) (p. 1-7). doi: 10.1109/IJCNN.2018.8489752

Nemati, S., & Naghsh-Nilchi, A. R. (2017). Exploiting evidential theory in the fusion of textual, audio, and visual modalities for affective music video retrieval. In 2017 3rd international conference on pattern recognition and image analysis (ipria) (p. 222-228). doi: 10.1109/PRIA.2017.7983051

Nichols, E., Szoeke, C. E., Vollset, S., Abbasi, N., Abd-Allah, F., Abdela, J., ... Barker-Collo, S., et al. (2019). Global, regional, and national burden of Alzheimer's disease and other dementias, 1990–2016: a systematic analysis for the Global Burden of Disease Study 2016. The Lancet Neurology, 18, 88–106. doi: 10.1016/S1474-4422(18)30403-4

NRC. (2001). Preparing for an Aging World: The Case for Cross-National Research [Computer software manual]. Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK98375/ (Last accessed: 2021 Jul. 05)

Oliveira, E., & Jaques, P. A. (2013). Classificação de emoções básicas através de imagens capturadas em vídeos de baixa resolução. Revista Brasileira de Computação Aplicada, 5, 40–54.

Ortegon-Sarmiento, T., Penuela, L., & Uribe-Quevedo, A. (2020). Low Back Pain Attenuation Employing Virtual Reality Physiotherapy. In 2020 22nd symposium on virtual and augmented reality (svr) (p. 169-173). doi: 10.1109/SVR51698.2020.00037

O’Toole, P., Glowinski, D., & Mancini, M. (2019). Understanding chromaesthesia by strengthening auditory -visual-emotional associations. In 2019 8th international conference on affective computing and intelligent interaction (acii) (p. 1-7). doi: 10.1109/ACII.2019.8925465

Pais, M., Martinez, L., Ribeiro, O., Loureiro, J., Fernandez, R., Valiengo, L., ... Forlenza, O. (2020). Early diagnosis and treatment of Alzheimer’s disease: new definitions and challenges. Braz J Psychiatry, 431–441.

Panda, R., Malheiro, R., & Paiva, R. P. (2020). Novel audio features for music emotion recognition. IEEE Transactions on Affective Computing, 11(4), 614-626. doi: 10.1109/TAFFC.2018.2820691

Parra, F., Scherer, S., Benezeth, Y., Tsvetanova, P., & Tereno, S. (2019). Development and cross-cultural evaluation of a scoring algorithm for the biometric attachment test: Overcoming the challenges of multimodal fusion with "small data" (revised May 2019). IEEE Transactions on Affective Computing, 1-1. doi: 10.1109/TAFFC.2019.2921311

Paxiuba, C. M., & Lima, C. P. (2020). An Experimental Methodological Approach Working Emotions and Learning Using Facial Expressions Recognition. Brazilian Journal of Computers in Education, 28, 92–114.

Picard, R. W. (1997). Affective Computing. Cambridge, Massachusetts, USA: MIT Press.

Plut, C., & Pasquier, P. (2019). Music Matters: An empirical study on the effects of adaptive music on experienced and perceived player affect. In 2019 ieee conference on games (cog) (p. 1-8). doi: 10.1109/CIG.2019.8847951

Poria, S., Cambria, E., Bajpai, R., & Hussain, A. (2017). A review of affective computing: From unimodal analysis to multimodal fusion. Information Fusion, 37, 98–125.

Qin, Y., Zhang, H., Wang, Y., Mao, M., & Chen, F. (2020). 3d music impact on autonomic nervous system response and its therapeutic potential. In 2020 ieee conference on multimedia information processing and retrieval (mipr) (p. 364-369). doi: 10.1109/MIPR49039.2020.00080

Rahman, J. S., Gedeon, T., Caldwell, S., & Jones, R. (2020). Brain Melody Informatics: Analysing Effects of Music on Brainwave Patterns. In 2020 international joint conference on neural networks (ijcnn) (p. 1-8). doi: 10.1109/IJCNN48605.2020.9207392

Rahman, J. S., Gedeon, T., Caldwell, S., Jones, R., Hossain, M. Z., & Zhu, X. (2019). Melodious Micro-frissons: Detecting Music Genres From Skin Response. In 2019 international joint conference on neural networks (ijcnn) (p. 1-8). doi: 10.1109/IJCNN.2019.8852318

Ramdinmawii, E., & Mittal, V. K. (2017). Effect of Different Music Genre: Attention vs. Meditation. In 2017 Seventh International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW) (pp. 135–140). doi: 10.1109/ACIIW.2017.8272603

Ramírez, A. V., Hornero, G., Royo, D., Aguilar, A., & Casas, O. (2020). Assessment of Emotional States Through Physiological Signals and Its Application in Music Therapy for Disabled People. IEEE Access, 8, 127659-127671. doi: 10.1109/ACCESS.2020.3008269

Ranjkar, E., Rafatnejad, R., Nobaveh, A. A., Meghdari, A., & Alemi, M. (2019). Design, Fabrication, and Evaluation of the “Maya” Social Robot. In 2019 7th international conference on robotics and mechatronics (icrom) (p. 52-62). doi: 10.1109/ICRoM48714.2019.9071795

Ricci, G. (2019). Social Aspects of Dementia Prevention from a Worldwide to National Perspective: A Review on the International Situation and the Example of Italy. Behavioural neurology, 2019, 8720904. doi: 10.1155/2019/8720904

Rizos, G., & Schuller, B. (2019). Modelling Sample Informativeness for Deep Affective Computing. In Icassp 2019 - 2019 ieee international conference on acoustics, speech and signal processing (icassp) (p. 3482-3486). doi: 10.1109/ICASSP.2019.8683729

Rizzi, L., Rosset, I., & Roriz-Cruz, M. (2014). Global epidemiology of dementia: Alzheimer’s and vascular types. BioMed research international, 2014, 908915. doi: 10.1155/2014/908915

Rushambwa, M. C., & Mythili, A. (2017). Impact assessment of mental subliminal activities on the human brain through neuro feedback analysis. In 2017 third international conference on biosignals, images and instrumentation (icbsii) (p. 1-6). doi: 10.1109/ICBSII.2017.8082299

Saha, D. P., Bortz, B. C., Huang, W., Martin, T. L., & Knapp, R. B. (2016). Affect-Aware Intelligent Environment Using Musical Cues as an Emotion Learning Framework. In 2016 12th international conference on intelligent environments (ie) (p. 178-181). doi: 10.1109/IE.2016.39

Sanders, Q., Chan, V., Augsburger, R., Cramer, S. C., Reinkensmeyer, D. J., & Do, A. H. (2020). Feasibility of Wearable Sensing for In-Home Finger Rehabilitation Early After Stroke. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 28(6), 1363-1372. doi: 10.1109/TNSRE.2020.2988177

Santhosh, A. K., Sangilirajan, M., Nizar, N., Radhamani, R., Kumar, D., Bodda, S., & Diwakar, S. (2020). Computational exploration of neural dynamics underlying music cues among trained and amateur subjects. Procedia Computer Science, 171, 1839-. Retrieved from https://www.sciencedirect.com/science/article/pii/S1877050920311789 (Third International Conference on Computing and Network Communications (CoCoNet’19)) doi: 10.1016/j.procs.2020.04.197

Savery, R., Rose, R., & Weinberg, G. (2019). Establishing Human-Robot Trust through Music-Driven Robotic Emotion Prosody and Gesture. In 2019 28th ieee international conference on robot and human interactive communication (ro-man) (p. 1-7). doi: 10.1109/RO-MAN46459.2019.8956386

Sawata, R., Ogawa, T., & Haseyama, M. (2019). Novel Audio Feature Projection Using KDLPCCA-Based Correlation with EEG Features for Favorite Music Classification. IEEE Transactions on Affective Computing, 10(3), 430-444. doi: 10.1109/TAFFC.2017.2729540

Scardua, D. A., & Marques, K. (2018). Estudo da Identificação de Emoções Através da Inteligência Artificial. Multivix Edu.

Scott, L. (2017). Creating Opera for Mobile Media: Artistic Opportunities and Technical Limitations. In 2017 14th international symposium on pervasive systems, algorithms and networks 2017 11th international conference on frontier of computer science and technology 2017 third international symposium of creative computing (ispan-fcst-iscc) (p. 477-484). doi: 10.1109/ISPAN-FCST-ISCC.2017.86

Seanglidet, Y., Lee, B. S., & Yeo, C. K. (2016). Mood prediction from facial video with music “therapy” on a smartphone. In 2016 wireless telecommunications symposium (wts) (p. 1-5). doi: 10.1109/WTS.2016.7482034

Shan, Y., Chen, T., Yao, L., Wu, Z., Wen, W., & Liu, G. (2018). Remote detection and classification of human stress using a depth sensing technique. In 2018 first asian conference on affective computing and intelligent interaction (acii asia) (p. 1-6). doi: 10.1109/ACIIAsia.2018.8470364

Sheffield, J., Karcher, N., & Barch, D. (2018). Cognitive deficits in psychotic disorders: a lifespan perspective. Neuropsychol Rev, 509–533.

Shen, X., Hu, X., Liu, S., Song, S., & Zhang, D. (2020). Exploring EEG microstates for affective computing: decoding valence and arousal experiences during video watching*. In 2020 42nd annual international conference of the ieee engineering in medicine biology society (embc) (p. 841-846). doi: 10.1109/EMBC44109.2020.9175482

Shukla, J., Cristiano, J., Oliver, J., & Puig, D. (2019). Robot Assisted Interventions for Individuals with Intellectual Disabilities: Impact on Users and Caregivers. International Journal of Social Robotics, 11, 631–649. doi: 10.1007/s12369-019-00527-w

Sonawane, A., Inamdar, M. U., & Bhangale, K. B. (2017). Sound based human emotion recognition using MFCC and multiple SVM. International Conference on Information, Communication, Instrumentation and Control (ICICIC).

Sondhi, M. (1968). New Methods of Pitch Extraction. IEEE Trans. Audio and Electroacoustics, 262–266.

Song, M., Yang, Z., Baird, A., Parada-Cabaleiro, E., Zhang, Z., Zhao, Z., & Schuller, B. (2019). Audiovisual Analysis for Recognising Frustration during Game-Play: Introducing the Multimodal Game Frustration Database. In 2019 8th international conference on affective computing and intelligent interaction (acii) (p. 517-523). doi: 10.1109/ACII.2019.8925464

Soysal, O. M., Kiran, F., & Chen, J. (2020). Quantifying Brain Activity State: EEG analysis of Background Music in A Serious Game on Attention of Children. In 2020 4th international symposium on multidisciplinary studies and innovative technologies (ismsit) (p. 1-7). doi: 10.1109/ISMSIT50672.2020.9255308

Sra, M., Vijayaraghavan, P., Rudovic, O., Maes, P., & Roy, D. (2017). DeepSpace: Mood-Based Image Texture Generation for Virtual Reality from Music. In 2017 ieee conference on computer vision and pattern recognition workshops (cvprw) (p. 2289-2298). doi: 10.1109/CVPRW.2017.283

Stappen, L., Karas, V., Cummins, N., Ringeval, F., Scherer, K., & Schuller, B. (2019). From Speech to Facial Activity: Towards Cross-modal Sequence-to-Sequence Attention Networks. In 2019 ieee 21st international workshop on multimedia signal processing (mmsp) (p. 1-6). doi: 10.1109/MMSP.2019.8901779

Su, J. H., Liao, Y. W., Wu, H. Y., & Zhao, Y. W. (2020). Ubiquitous music retrieval by context-brain awareness techniques. In 2020 ieee international conference on systems, man, and cybernetics (smc) (p. 4140-4145). doi: 10.1109/SMC42975.2020.9282963

Subramaniam, G., Verma, J., Chandrasekhar, N., C., N. K., & George, K. (2018). Generating Playlists on the Basis of Emotion. In 2018 ieee symposium series on computational intelligence (ssci) (p. 366-373). doi: 10.1109/SSCI.2018.8628673

Suhaimi, N. S., Yuan, C. T. B., Teo, J., & Mountstephens, J. (2018). Modeling the affective space of 360 virtual reality videos based on arousal and valence for wearable EEG-based VR emotion classification. In 2018 ieee 14th international colloquium on signal processing its applications (cspa) (p. 167-172). doi: 10.1109/CSPA.2018.8368706

Syed, S., Chagani, S., Hafeez, M., Timothy, S., & Zahid, H. (2018, 10). Sign recognition system for differently abled people. In Proceedings of tencon 2018 (p. 1148-1153). doi: 10.1109/TENCON.2018.8650365

Tarvainen, J., Laaksonen, J., & Takala, T. (2017). Computational and Perceptual Determinants of Film Mood in Different Types of Scenes. In 2017 ieee international symposium on multimedia (ism) (p. 185-192). doi: 10.1109/ISM.2017.10

Teo, J., & Chia, J. T. (2018). Deep Neural Classifiers for EEG-Based Emotion Recognition in Immersive Environments. In 2018 international conference on smart computing and electronic enterprise (icscee) (p. 1-6). doi: 10.1109/ICSCEE.2018.8538382

Thammasan, N., Hagad, J. L., Fukui, K.-i., & Numao, M. (2017). Multimodal stability-sensitive emotion recognition based on brainwave and physiological signals. In 2017 seventh international conference on affective computing and intelligent interaction workshops and demos (aciiw) (p. 44-49). doi: 10.1109/ACIIW.2017.8272584

Tiple, B. S., Joshi, P. P., & Patwardhan, M. (2016). An efficient framework for recommendation of Hindustani Art Music. In 2016 international conference on computing communication control and automation (iccubea) (p. 1-5). doi: 10.1109/ICCUBEA.2016.7860008

Tiwari, A., & Tiwari, R. (2017). Design of a brain computer interface for stress removal using yoga a smartphone application. In 2017 international conference on computing, communication and automation (iccca) (p. 992-996). doi: 10.1109/CCAA.2017.8229939

UN. (2020). World Population Ageing 2020: Highlights [Computer software manual]. Retrieved from https://www.un.org/development/desa/pd/sites/www.un.org.development.desa.pd/files/undesa_pd-2020_world_population_ageing_highlights.pdf (Last accessed: 2021 Jul. 05)

Verma, G., Dhekane, E. G., & Guha, T. (2019). Learning affective correspondence between music and image. In Icassp 2019 - 2019 ieee international conference on acoustics, speech and signal processing (icassp) (p. 3975-3979). doi: 10.1109/ICASSP.2019.8683133

Vicencio-Martínez, A. A., Tovar-Corona, B., & Garay-Jiménez, L. I. (2019). Emotion Recognition System Based on Electroencephalography. International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), 11–13.

Vinayagasundaram, B., Mallik, R., Aravind, M., Aarthi, R. J., & Senthilrhaj, S. (2016). Building a generative model for affective content of music. In 2016 international conference on recent trends in information technology (icrtit) (p. 1-6). doi: 10.1109/ICRTIT.2016.7569588

Wang, K., Wen, W., & Liu, G. (2016). The autonomic nervous mechanism of music therapy for dental anxiety. In 2016 13th international computer conference on wavelet active media technology and information processing (iccwamtip) (p. 289-292). doi: 10.1109/ICCWAMTIP.2016.8079858

Wang, X., Wenya, L., Toiviainen, P., Tapani, R., & Cong, F. (2020). Group analysis of ongoing EEG data based on fast double-coupled nonnegative tensor decomposition. Journal of Neuroscience Methods, 330, 108502. doi: 10.1016/j.jneumeth.2019.108502

Wang, Y., & Kosinski, M. (2018). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology, 246–257. doi: 10.1037/pspa0000098

Wardana, A. Y., Ramadijanti, N., & Basuki, A. (2018). Facial Expression Recognition System for Analysis of Facial Expression Changes when Singing. In 2018 international electronics symposium on knowledge creation and intelligent computing (ieskcic) (p. 98-104). doi: 10.1109/KCIC.2018.8628578

WHO. (2017). Global action plan on the public health response to dementia (2017-2025) [Computer software manual]. https://apps.who.int/iris/bitstream/handle/10665/259615/9789241513487-eng.pdf;jsessionid=4DA480FA93471AC53988E52B35F416D8?sequence=1 (Last accessed: 2021 Jul. 05)

WHO. (2019). Risk reduction of cognitive decline and dementia: WHO Guidelines [Computer software manual]. https://www.who.int/publications/i/item/risk-reduction-of-cognitive-decline-and-dementia (Last accessed: 2021 Jul. 05)

Wingerden, E. v., Barakova, E., Lourens, T., & Sterkenburg, P. S. (2020). Robot-mediated therapy to reduce worrying in persons with visual and intellectual disabilities. Journal of Applied Research in Intellectual Disabilities, 39, 229–238. doi: 10.1111/jar.12801

Wolfe, H., Peljhan, M., & Visell, Y. (2020). Singing robots: How embodiment affects emotional responses to non-linguistic utterances. IEEE Transactions on Affective Computing, 11(2), 284-295. doi: 10.1109/TAFFC.2017.2774815

Wu, C.-H., Huang, Y.-M., & Hwang, J.-P. (2016). Review of affective computing in education/learning: Trends and challenges. British Journal of Educational Technology, 47(6), 1304–1323.

Wu, Y.-C., & Chen, H. H. (2016). Generation of Affective Accompaniment in Accordance With Emotion Flow. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(12), 2277-2287. doi: 10.1109/TASLP.2016.2603006

Xing, B., Zhang, K., Zhang, L., Wu, X., Dou, J., & Sun, S. (2019). Image–music synesthesia-aware learning based on emotional similarity recognition. IEEE Access, 7, 136378-136390. doi: 10.1109/ACCESS.2019.2942073

Xu, T., Yin, R., Shu, L., & Xu, X. (2019). Emotion Recognition Using Frontal EEG in VR Affective Scenes. In 2019 ieee mtt-s international microwave biomedical conference (imbioc) (Vol. 1, p. 1-4). doi: 10.1109/IMBIOC.2019.8777843

Xu, X., Deng, J., Coutinho, E., Wu, C., Zhao, L., & Schuller, B. W. (2019). Connecting Subspace Learning and Extreme Learning Machine in Speech Emotion Recognition. IEEE Transactions on Multimedia, 21(3), 795-808. doi: 10.1109/TMM.2018.2865834

Yamada, Y., & Ono, Y. (2019). Detection of Music Preferences using Cerebral Blood Flow Signals. In Annual international conference of the ieee engineering in medicine and biology society (pp. 490–493). doi: 10.1109/EMBC.2019.8856351

Yi-Hsiang, M., Han, Y., Lin, J.-Y., Cosentino, S., Nishio, Y., Oshiyama, C., & Takanishi, A. (2018). A Synchronization Feedback System to Improve Interaction Correlation in Subjects With Autism Spectrum Disorder. In 2018 9th international conference on awareness science and technology (icast) (p. 285-290). doi: 10.1109/ICAwST.2018.8517233

Zhang, Z., Han, J., Coutinho, E., & Schuller, B. (2019). Dynamic difficulty awareness training for continuous emotion prediction. IEEE Transactions on Multimedia, 21(5), 1289-1301. doi: 10.1109/TMM.2018.2871949

Zhang, Z., Han, J., Deng, J., Xu, X., Ringeval, F., & Schuller, B. (2018). Leveraging Unlabeled Data for Emotion Recognition With Enhanced Collaborative Semi-Supervised Learning. IEEE Access, 6, 22196-22209. doi: 10.1109/ACCESS.2018.2821192

Zhao, J., Mao, X., & Chen, L. (2019). Speech emotion recognition using deep 1D and 2D CNN LSTM networks. Biomedical Signal Processing and Control, 47, 312–323.

Zhu, L., Tian, X., Xu, X., & Shu, L. (2019). Design and Evaluation of the Mental Relaxation VR Scenes Using Forehead EEG Features. In 2019 ieee mtt-s international microwave biomedical conference (imbioc) (Vol. 1, p. 1-4). doi: 10.1109/IMBIOC.2019.8777812

Published

28/11/2021

How to Cite

SANTANA, M. A. de; LIMA, C. L. de; TORCATE, A. S.; FONSECA, F. S.; SANTOS, W. P. dos. Computação afetiva no contexto da musicoterapia: uma revisão sistemática. Research, Society and Development, [S. l.], v. 10, n. 15, p. e392101522844, 2021. DOI: 10.33448/rsd-v10i15.22844. Available at: https://rsdjournal.org/index.php/rsd/article/view/22844. Accessed: 22 Dec. 2024.

Issue

Section

Review Articles