The multimodal recognition of eating condition (whether a person is eating or not, and if so, which type of food) is a new research domain in the area of speech and video processing. It has many promising applications for future multimodal interfaces, such as adapting speech recognition or lip-reading systems to different eating conditions (e.g., dictation systems), health monitoring (e.g., ingestive behaviour), or security monitoring (e.g., where eating is not allowed).
We therefore invite participation in the first open audio-visual challenge under strictly comparable conditions: the audio-visual classification of eating condition, leveraging the audio-visual iHEARu-EAT database [1, 2].
This database contains around 1.4 k utterances (2.9 hours of audio/video recordings) from 30 German speakers eating six types of food (Apple, Nectarine, Banana, Haribo Smurfs, Biscuit, and Crisps). The data comprises both read and spontaneous speech, together with transcriptions.
We define three Sub-Challenges based on classification tasks in which participants are encouraged to use speech and/or video recordings:
Last update: 19 July 2018
19 July 2018: Notifications have been sent to authors.
17 July 2018: Notifications will be slightly delayed.
11 June 2018: Paper submission is now open.
25 May 2018: Paper submission deadline is extended to the 15th of June.
16 May 2018: Release of the test data.
12 April 2018: The preliminary baseline paper is released.
11 April 2018: The second baseline approach is now available.
04 April 2018: Training data and the first baseline approach are released.
22 February 2018: Challenge website is online.
4th April: Release of training data and first evaluation script
11th April: Release of the second evaluation script and preliminary baseline paper
16th May: Release of the test data
14th June: Closing of the competition
15th June: Paper submission deadline
29th July: Final result submission
31st July: Camera-ready deadline
Simone Hantke, Machine Intelligence & Signal Processing Group, Technische Universität München, Germany
Maximilian Schmitt, Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Germany
Panagiotis Tzirakis, Group on Language, Audio, and Music, Imperial College London, U.K.
Björn Schuller, audEERING GmbH, Germany
A baseline paper will be provided, including details about the data, performance measures, baseline features, and prediction results. Please note that the content of this paper may change up until the camera-ready deadline.
Please contact the organisers by email to register your team. Use the subject line "Registration_TeamName" and include the following:
Please obtain the License Agreement here to get a password and further instructions for downloading the dataset: fill it out, print, sign, scan, and email it accordingly. The agreement has to be signed by a permanent member of scientific or industrial staff (e.g., full professor, associate/assistant professor, reader, lecturer).
The list of team members must match the one provided on the EULA, and the data must not be shared with others, including lab mates. Upon registering your team, you will be given a link and a password to access the files.
After downloading the data, you can directly start your experiments with the training set. You can send your best results to the organisers (email@example.com) with the subject line "Results_TeamName". We will evaluate up to five result submissions per team and report your performance back to you. See below for more information on the submission process and the way performance is measured.
The organisers provide the following data:
The introductory paper on the Challenge (available within the baseline packages) provides extensive descriptions and baseline results. Participants are asked not to repeat descriptions of the challenge itself, the data, or the features in their submissions; of course, they should briefly describe the essentials of the databases dealt with, and cite both the baseline paper (coming soon) and the dataset.
Participants may take part in all Sub-Challenges simultaneously. A partitioning into training and development sets allows participants to run tests and report results in addition to their results on the official test set.
Each participant has up to five submission attempts per Sub-Challenge. Results can be submitted until the final results deadline, which precedes the camera-ready deadline. Your best result in each Sub-Challenge will be used to determine the winner of the challenge. Please send submissions by email to the organisers (firstname.lastname@example.org) with the subject line "Results_TeamName".
Important: the organisers reserve the right to ask the top performer of the challenge to submit their code so that the results can be verified on the original test set.
Participants' results should be sent as a single zip file to the organisers by email. The file name should include your team name, the Sub-Challenge, and the attempt number, e.g., Results_TUM_Likability_1.zip.
The results files themselves should be formatted in the same way as the training gold-standard label files, that is, one ARFF file per subject, containing the instance name and the prediction as string values. The file names should also be formatted in exactly the same way. We might display results obtained by participants on our website.
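For illustration, a per-subject prediction file in this style could be generated as sketched below. The attribute names, class labels, and output file name are hypothetical; the exact schema is defined by the released gold-standard label files, which take precedence.

```python
def write_predictions_arff(path, predictions, classes):
    """Write one ARFF prediction file per subject.

    predictions: list of (instance_name, predicted_label) tuples.
    classes: the set of possible class labels (hypothetical names here).
    """
    lines = [
        "@relation predictions",
        "",
        "@attribute instance_name string",
        "@attribute prediction {" + ",".join(classes) + "}",
        "",
        "@data",
    ]
    # One data row per utterance: quoted instance name, then the label.
    lines += ["'%s',%s" % (name, label) for name, label in predictions]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Example usage with made-up instance names and labels:
write_predictions_arff(
    "subject_01.arff",
    [("subject_01_utt_001", "Apple"), ("subject_01_utt_002", "Crisps")],
    ["Apple", "Nectarine", "Banana", "Smurfs", "Biscuit", "Crisps"],
)
```

The per-subject files can then be packaged into a single zip archive named as described above.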
Please be reminded that a paper submission and at least one result upload on the test set are mandatory for participation in the Challenge. However, paper contributions within the scope are also welcome if the authors do not intend to participate in the Challenge itself.
In any case, please submit your paper by 15th June 2018 (final results by 29th July 2018, the camera-ready paper by 31st July 2018), using the standard style info and respecting the length limits. Unused result upload attempts can be kept for new Challenge results until 29th July 2018.
The papers will undergo the normal review process. Papers should refer to the baseline paper for details about the dataset and the baseline results; this makes for a more readable set of papers than each challenge paper repeating the same information. Please cite the introductory paper, which will be available soon.
All papers must be formatted according to the ICMI proceedings style and should be no more than 4 pages in the two-column ACM conference format (excluding references). Papers should be submitted through the ICMI EAT EasyChair submission site (the submission link will be made available soon). Reviewing will be double-blind, so submissions should be anonymous: do not include the authors' names, affiliations, or any clearly identifiable information in the paper. LaTeX and Word templates for this format can be downloaded from the main website.
Please register via the main conference website, using its standard registration services.
Eduardo Coutinho, University of Liverpool, U.K.
Anna Esposito, Seconda Università di Napoli (SUN) and IIASS, Italy
Robert Istepanian, Imperial College London, U.K.
Dongmei Jiang, Northwestern Polytechnical University, China
Erik Marchi, Apple inc., USA
Shri Narayanan, University of Southern California, USA
Fabien Ringeval, Université Grenoble Alpes, France
Stefan Steidl, Friedrich-Alexander University Erlangen-Nuremberg, Germany
Dan Zecha, University of Augsburg, Germany
This Challenge is mainly sponsored by the industrial partner audEERING GmbH and has received funding from the European Community's Seventh Framework Programme under grant agreement No. 338164 (ERC Starting Grant iHEARu).