Request access
To request access to the database:
- Please print the End User License Agreement (EULA, download here).
- Sign it, and scan it.
- Send an email to accede@liris.cnrs.fr with the signed EULA attached as a PDF.
Please note that any requests from free email addresses (hotmail, yahoo, gmail, etc.) will be refused. Once your request has been submitted, it may take up to a week for you to receive the download link (you will be notified by email).
Introduction
In contrast to existing datasets, which contain very few video resources and have limited accessibility due to copyright constraints, LIRIS-ACCEDE consists of videos with large content diversity annotated along affective dimensions. All excerpts are shared under Creative Commons licenses and can thus be freely distributed without copyright issues. The dataset (video clips, annotations, features and protocols) is publicly available.
LIRIS-ACCEDE is composed of six collections:
- Discrete LIRIS-ACCEDE - Induced valence and arousal rankings for 9800 short video excerpts extracted from 160 movies. Estimated affective scores are also available.
- Continuous LIRIS-ACCEDE - Continuous induced valence and arousal self-assessments for 30 movies. Raw and post-processed GSR measurements are also available.
- MediaEval 2015 Affective Impact of Movies task downloads - Violence annotations and affective classes for the 9800 excerpts of the discrete LIRIS-ACCEDE part, plus 1100 additional excerpts used to extend the test set for the MediaEval 2015 Affective Impact of Movies task.
- MediaEval 2016 Emotional Impact of Movies task downloads - Test set for the MediaEval 2016 Emotional Impact of Movies task: 1200 additional video excerpts for the Global Annotation subtask and 10 additional movies for the Continuous Annotation subtask.
- MediaEval 2017 Emotional Impact of Movies task downloads - Valence/arousal and fear annotations for the development and test sets of the MediaEval 2017 Emotional Impact of Movies Task. Visual and audio features are also provided.
- MediaEval 2018 Emotional Impact of Movies task downloads - Valence/arousal and fear annotations for the development and test sets of the MediaEval 2018 Emotional Impact of Movies Task. Visual and audio features are also provided.
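As an illustration, the sketch below shows how the per-excerpt annotations of the discrete collection might be read once downloaded. The file name ACCEDEranking.txt, the tab-separated layout and the column names (name, valenceValue, arousalValue) are assumptions for illustration only; please refer to the documentation shipped with the archive for the actual format.

```python
import csv

# Assumed annotation file from the discrete collection (file name and layout
# are illustrative only; check the documentation shipped with the download).
ANNOTATION_FILE = "ACCEDEranking.txt"

with open(ANNOTATION_FILE, newline="", encoding="utf-8") as f:
    # Assumed to be a tab-separated file with a header row.
    reader = csv.DictReader(f, delimiter="\t")
    excerpts = list(reader)

print(f"{len(excerpts)} annotated excerpts loaded")

# Show the first few excerpts with their (assumed) valence/arousal columns.
for row in excerpts[:5]:
    print(row.get("name"), row.get("valenceValue"), row.get("arousalValue"))
```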
A general presentation of the LIRIS-ACCEDE dataset is available here:
- E. Dellandréa, M. Huigsloot, L. Chen, Y. Baveye, Z. Xiao and M. Sjöberg, “Predicting the Emotional Impact of Movies,” in ACM SIGMM Records, Issue 4, 2018. http://records.mlab.no/2018/12/18/predicting-the-emotional-impact-of-movies/
A complete description of the discrete collection of the dataset can be found in the following journal paper:
- Y. Baveye, E. Dellandréa, C. Chamaret, and L. Chen, “LIRIS-ACCEDE: A Video Database for Affective Content Analysis,” in IEEE Transactions on Affective Computing, 2015.
Continuous annotations are described in the following publication:
- Y. Baveye, E. Dellandréa, C. Chamaret, and L. Chen, “Deep Learning vs. Kernel Methods: Performance for Emotion Prediction in Videos,” in 2015 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII), 2015.
The collection for the MediaEval 2015 Affective Impact of Movies task is introduced in the following publication:
- M. Sjöberg, Y. Baveye, H. Wang, V. L. Quang, B. Ionescu, E. Dellandréa, M. Schedl, C.-H. Demarty, and L. Chen, “The MediaEval 2015 Affective Impact of Movies Task,” in MediaEval 2015 Workshop, 2015.
The collection for the MediaEval 2016 Emotional Impact of Movies task is introduced in the following publication:
- E. Dellandréa, L. Chen, Y. Baveye, M. Sjöberg and C. Chamaret, “The MediaEval 2016 Emotional Impact of Movies Task,” in Working Notes Proceedings of the MediaEval 2016 Workshop, Hilversum, The Netherlands, October 20-21, 2016.
The collection for the MediaEval 2017 Emotional Impact of Movies task is introduced in the following publication:
- E. Dellandréa, M. Huigsloot, L. Chen, Y. Baveye and M. Sjöberg, “The MediaEval 2017 Emotional Impact of Movies Task,” in Working Notes Proceedings of the MediaEval 2017 Workshop, Dublin, Ireland, September 13-15, 2017.
The collection for the MediaEval 2018 Emotional Impact of Movies task is introduced in the following publication:
- E. Dellandréa, M. Huigsloot, L. Chen, Y. Baveye, Z. Xiao and M. Sjöberg, “The MediaEval 2018 Emotional Impact of Movies Task,” in Working Notes Proceedings of the MediaEval 2018 Workshop, Sophia Antipolis, France, October 29-31, 2018.
Other related publications are listed here.
Credits
The discrete and continuous collections of LIRIS-ACCEDE were created by a French team of researchers:
- Yoann Baveye, Technicolor & Ecole Centrale de Lyon, LIRIS, France
- Emmanuel Dellandréa, Ecole Centrale de Lyon, LIRIS, France
- Christel Chamaret, Technicolor, France
- Liming Chen, Ecole Centrale de Lyon, LIRIS, France
- Jean-Noël Bettinelli, Ecole Centrale de Lyon, LIRIS, France
- Ting Li, Technicolor, LIRIS
Finally, we want to thank Léo Perrin, who created the program that generates the comparisons and collects the data from CrowdFlower; Xingxian Li, for his help with modifying the GTrace program; and Ting Li, who worked on the correlation between continuous affective ratings and physiological measurements. Of course, we also thank all film-makers who shared their work under Creative Commons licenses.
The data for the MediaEval 2015 Affective Impact of Movies task was collected by:
- Mats Sjöberg, Helsinki Institute for Information Technology HIIT, University of Helsinki, Finland
- Yoann Baveye, Technicolor & Ecole Centrale de Lyon, LIRIS, France
- Hanli Wang, Tongji University, China
- Vu Lam Quang, University of Science, VNU-HCMC, Vietnam
- Bogdan Ionescu, University Politehnica of Bucharest, Romania
- Emmanuel Dellandréa, Ecole Centrale de Lyon, LIRIS, France
- Markus Schedl, Johannes Kepler University, Linz, Austria
- Claire-Hélène Demarty, Technicolor, France
- Liming Chen, Ecole Centrale de Lyon, LIRIS, France
Violence and affective classes could not have been collected without the efforts of all the task organizers. Special thanks go to Bogdan Ionescu's team (University Politehnica of Bucharest, Romania), Hanli Wang's team (Tongji University, China), Vu Lam Quang's team (University of Science, VNU-HCMC, Vietnam), and Markus Schedl's team (Johannes Kepler University, Linz, Austria), who contributed greatly to the annotations, and of course Mats Sjöberg, who organized... almost everything!