A multimodal psychological, physiological and behavioural dataset for human emotions in driving tasks


Ethics statement

This study was carried out in accordance with the Declaration of Helsinki and its later amendments. The content and procedures of this study were reviewed and approved by the Ethics Committee of Chongqing University Cancer Hospital (Approval number: 2019223).

Written informed consent was given by all participants before they joined the study. Participants were informed that the results of this study might be published in academic journals or books. Before the experiments, participants were told about their rights, including the right to withdraw at any time during the experiments.

Permission to make the processed data records publicly available was obtained from all participants at the end of the study. Since PPB-Emo is intended for open public access, separate consent was obtained for the disclosure of data containing personally identifiable information, namely the facial expressions of participants during the driving tasks. An additional permission form informed participants about the data types that would be shared publicly and the potential risks of re-identification arising from sharing the date and time of the processed data records. All participants in this study granted these sharing permissions.

Experiment I: in-depth interview to collect drivers’ viewpoints

Experiment I focused on the investigation of drivers’ viewpoints on driving scenarios that induce different emotions in humans.

Participants

In-depth interviews were conducted with 27 participants, including 6 females (22.22%) and 21 males (77.78%). The participants' ages ranged from 19 to 55 years, with an average age of 36.81 years (standard deviation (SD) = 9.27). Their driving experience ranged from 1 to 25 years, with an average of 8.93 years (SD = 6.49). The occupations of the participants included workers, teachers, students, farmers, office staff, civil servants, and professional drivers.

Procedure

The aim of the in-depth interviews was to obtain real-life scenario information that induces different emotions in human drivers and to use the results to develop the questionnaires. The scenario information collection procedure consisted of semi-structured interviews with human drivers, based on the interview guide method31. All participants first completed a demographic questionnaire, which collected personal and demographic information including age, gender, driving experience, and occupation. Then, in the interviews, the participants answered a set of open-ended questions (e.g., the question "Could you share an experience that you felt scared while driving or even when you recalled it?"). During the answering process, the interviewer guided the participants to use their own words to recall and describe driving scenarios that trigger different emotions, including roads, weather and lighting conditions; other road users' behaviours; events; and other contributing factors (e.g., the answer "One time when I was driving on a mountain road at night, there was no one on the road. I felt very sleepy. My eyes closed a little uncontrollably. When I opened my eyes, I found that I was in a sharp bend. I stepped on the brakes. It made me feel scared."). Each participant answered seven driving scenario questions corresponding to the different emotions. The interview time for each participant was about 30 minutes and the process was recorded.

Results of collected drivers’ viewpoints

All audio recordings and on-site notes of the in-depth interviews were transcribed verbatim and analyzed in Excel files. First, the original transcripts of the 27 interviewees were broken into complete sentences. Next, two researchers (1 male and 1 female) with expert knowledge and rich experience in driver emotion analysis sorted the sentences separately, and the main scenario information corresponding to the seven emotions was determined by their consensus. After summarizing, there were eleven kinds of scenarios that induce anger in human drivers, sixteen that induce happiness, ten that induce fear, eleven that induce disgust, and ten that induce surprise; relatively few scenarios trigger sadness and a neutral state, with five and six kinds respectively. Table 2 summarizes the top five driving scenarios that induce each emotion according to the number of participants.

Table 2 Description of the top five driving scenarios that induce each emotion according to the number of participants.

Experiment II: online questionnaire for stimulus selection

Experiment II focused on obtaining, through a questionnaire survey, the seven driving scenarios that most effectively induce the corresponding emotions in human drivers, as the basis for selecting the video-audio stimulus materials.

Participants

A total of 409 Chinese participants, including 146 women (35.61%) and 263 men (64.39%), were recruited from four countries: China, the United States, Canada, and Singapore. They were asked to complete an online questionnaire. The participants' ages ranged from 18 to 71 years, with an average age of 31.34 years (SD = 10.64). Their driving experience ranged from 1 to 41 years, with an average of 5.87 years (SD = 6.69).

Procedure

Online surveys avoid geographical restrictions on data collection, and previous studies have verified the effectiveness of online tools in assessing driving behaviour32,33. Therefore, an online survey was conducted to collect the data in Experiment II. Based on the outcomes of Experiment I, the online questionnaire consisted of two parts with a total of ten questions. The first part covered demographic background with three questions: gender, age, and driving experience. The second part, developed from the results of Experiment I, consisted of seven questions on driving scenarios that induce different emotions in human drivers. These questions correspond to the seven emotions to be investigated. Each question describes five different driving scenarios, derived from the five scenarios most frequently mentioned in Experiment I. Participants were asked to select the scenarios most likely to induce the corresponding emotion and could select more than one scenario (up to five) if they wished. Completing the questionnaire took about 10 minutes.

The professional online survey platform Sojump (www.sojump.com) was used to design and post the questionnaire. Participants' answers, region, and answering time were automatically recorded. The survey was distributed in chat groups on social software (WeChat and QQ). To increase participation, each participant received a reward of five RMB after completing the survey.

Results of stimulus selection

Participants reported the scenarios that most easily induce seven kinds of emotional states (anger, fear, disgust, sadness, surprise, happiness and neutral) during driving. Table 3 presents the frequency and percentage of the scenarios that most easily induce the seven kinds of emotions among the 409 participants. Among them, a total of 344 participants (84.11%) thought that the scenario "Others keep the high beam on while meeting the car, which affects the vision." was most likely to induce their anger. 310 participants (75.79%) mentioned that "Driving on a mountain road with high cliff beside." would make them feel fear. 351 participants (85.82%) felt disgusted at the scenario "The driver in front keeps throwing garbage, water bottles, and spitting out." A total of 271 participants (66.26%) thought that the scenario "Witnessing an accident while driving." was the most likely to make them sad. 307 participants (75.06%) reported that "Seeing some pedestrians walking on the highway." would surprise them. Regarding happiness, 299 participants (73.11%) reported that "Noticing interesting things happened on the road and the scenery outside is very beautiful." was the most likely to make them happy. The corresponding frequencies of the scenarios are shown in Fig. 2. In addition, 273 participants (66.75%) felt neutral when driving while listening to soft music.
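For clarity, the percentages above follow directly from the raw selection counts over the 409 respondents. The short Python sketch below re-derives them; the counts are taken from the text, and the scenario labels are abbreviated for illustration only.

```python
# Re-deriving the reported percentages from the raw selection counts (N = 409).
N = 409
counts = {
    "anger: high beam kept on while meeting the car": 344,
    "fear: mountain road with a high cliff beside": 310,
    "disgust: driver in front throwing garbage and spitting": 351,
    "sadness: witnessing an accident while driving": 271,
    "surprise: pedestrians walking on the highway": 307,
    "happiness: interesting things and beautiful scenery": 299,
    "neutral: driving while listening to soft music": 273,
}
for scenario, n in counts.items():
    print(f"{scenario}: {n}/{N} = {100 * n / N:.2f}%")  # e.g. 344/409 = 84.11%
```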

Table 3 Results of the online questionnaire survey for 409 participants.
Fig. 2

Frequency of the corresponding scenarios that easily induce the six basic emotions. The x-axis represents the driving scenarios that trigger a specific emotion; for example, anger-1 denotes that others keep the high beam on while meeting the car, which affects the driver's vision. Table 3 describes the content of each scenario that triggers a specific emotion. The y-axis shows the frequency of the 409 participants' scenario selections in the online questionnaire; each participant could choose up to 5 scenarios.

The emotions of human drivers need to be induced by appropriate stimuli to collect emotion data. Video-audio clips have been proven to reliably trigger the emotions of human drivers6,34,35. Based on the results of the questionnaire survey, we manually selected from the Bilibili website (https://www.bilibili.com/) the seven video-audio clips corresponding to the most effective scenarios (those selected by the highest percentage of participants for each emotion) to induce the corresponding emotions of the human drivers in Experiment III. Bilibili is a Chinese video-sharing site where users can upload videos of their lives, and viewers can tag or comment on videos through a scrolling commenting system nicknamed "bullet-screen comments", which helped us evaluate the emotional feelings induced in viewers by the video-audio clips.

To select the most effective video-audio clips based on the results of the online survey, two research experts (1 male and 1 female) with rich experience in driver emotion analysis evaluated more than 100 video-audio clips. The choice of video-audio clips was determined by the consensus of the two experts, and finally, 7 videos were selected for Experiment III. Notably, to make the driver feel more immersed and to induce the intended emotion in Experiment III, all the video-audio clips selected in Experiment II were filmed from the first-person perspective of a human driver. Table 4 describes the contents of these seven clips.

Table 4 Content description of the seven selected video-audio stimuli for human driver emotion induction.

Experiment III: multi-modal human emotion data collection in driving tasks

The aim of Experiment III is to collect the multimodal psychological, physiological and behavioural dataset for human emotions in driving tasks.

Participants

A total of 41 drivers from Chongqing were recruited for this data collection experiment. Among these participants, the data of participant 1 were found to be incomplete and invalid after the collection process, possibly due to unexpected technical problems. Therefore, the data of 40 participants (age range = 19–58 years old, average age = 28.10 years old, SD = 9.47) were valid in this experiment, including 31 males and 9 females. All participants had a valid driver's license and at least one year of driving experience (driving experience range = 1–32 years, average driving experience = 5.58 years, SD = 6.02). All participants had normal or corrected-to-normal vision and hearing. Their health status was reported before the start of the experiment. Participants were asked to keep a regular 24-hour schedule and to take no stimulating drugs or alcohol before the experiment. Each participant received a reward of 200 RMB after the experiment.

Experiment setup

The multi-modal data collection system used in this experiment mainly includes the psychological data collection module, physiological data collection module, behavioural data collection module, driver emotion induction module, driving scenarios, and data synchronization. Figure 3 shows the setup of the overall multi-modal data collection experiment. The contents of the specific modules are as follows:

Fig. 3

Experimental setup of human driver multi-modal emotional data collection. (A) EEG data collection, (B) video data collection, (C) driving behaviour data collection, (D) experiment setup, (E) driver’s emotion induction, (F) psychological data collection. The use of the relevant portraits in Fig. 3 has been authorized by the participants, and the identifiable information has been anonymized with the knowledge of the participants.

Psychological data collection module

In this experiment, three self-report scales were used to collect psychological data: the self-assessment manikin (SAM), the differential emotion scale (DES), and the Eysenck personality questionnaire (EPQ). The SAM36 was used by participants to subjectively annotate their dimensional emotions. The SAM uses non-verbal graphical representations to evaluate the level of three dimensions (arousal, valence, and dominance). A 9-point SAM scale (1 = "not at all", 9 = "extremely") was used for assessment in the experiment. The DES37 was used by participants to subjectively annotate their discrete emotions. The DES is a multidimensional self-report scale for assessing human emotions, covering ten fundamental emotions: sadness, anger, contempt, fear, shame, interest, joy, surprise, disgust, and guilt. In the experiment, a 9-point DES scale (1 = "not at all", 9 = "extremely") was chosen to evaluate the intensity of the self-reported emotions in each dimension. The EPQ38, with a total of 88 questions, was used to assess the personality traits of the participants. The EPQ is a multi-dimensional psychological measurement38 of personality traits, including P-Psychoticism/Socialisation, E-Extraversion/Introversion, N-Neuroticism/Stability, and L-Lie/Social Desirability. An iPad (Apple, Cupertino, USA) was used to collect the participants' self-reported emotions during the driving tasks.
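As a rough illustration of how one trial's self-report can be organized, the minimal Python sketch below combines the three 9-point SAM dimensions and the ten 9-point DES items in a single record. The field names and example values are illustrative assumptions and do not reflect the schema of the released files.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class SelfReport:
    """One per-trial self-report record (field names are illustrative only)."""
    participant_id: int
    target_emotion: str                    # emotion the video-audio clip aimed to induce
    sam_valence: int                       # SAM, 9-point scale (1 = not at all, 9 = extremely)
    sam_arousal: int
    sam_dominance: int
    des: Dict[str, int] = field(default_factory=dict)  # ten DES items, each rated 1-9

# Hypothetical example of a single trial's annotation.
report = SelfReport(participant_id=2, target_emotion="fear",
                    sam_valence=3, sam_arousal=7, sam_dominance=4,
                    des={"fear": 8, "surprise": 5, "interest": 2})
```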

Physiological data collection module

An EnobioNE (Neuroelectrics, Barcelona, Spain) was used in the experiment to collect participants' EEG physiological data. The EnobioNE is a 32-channel wireless EEG device that uses a neoprene cap to fix each channel at the desired scalp location. The electrical activity of the brain was recorded using the EnobioNE-32 system. Dry copper electrodes (coated with a silver layer) fixed on the cap were used to guarantee good contact with the participant's scalp. The amplitude resolution of the EnobioNE was 24 bits (0.05 μV), the sampling rate was 500 Hz, and the band-pass filter was between 2 and 40 Hz. The signal was captured directly by the NIC2 software, which contains programs for acquiring and processing signals. During the experiment, the software simultaneously filtered out electrooculogram (EOG), electromyography (EMG) and electrocardiographic (ECG) signals. In addition, the NIC2 software dynamically associated the channels with the corresponding positions in the international 10–10 positioning system. The alpha, beta, gamma, delta and theta waves at these positions were output directly to the computer through the NIC2 software.
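To make the reported acquisition parameters concrete, the following Python sketch band-pass filters a 500 Hz, 32-channel recording between 2 and 40 Hz and estimates the average power in the delta, theta, alpha, beta and gamma bands. The exact band boundaries and the use of SciPy here are assumptions for illustration; in the experiment itself, the NIC2 software output these band signals directly.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 500  # sampling rate of the EnobioNE-32 recordings (Hz)

# Assumed band boundaries (Hz); the paper does not list the exact cut-offs used by NIC2.
BANDS = {"delta": (2, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 40)}

def bandpass(eeg, low=2.0, high=40.0, fs=FS, order=4):
    """Zero-phase band-pass filter matching the 2-40 Hz range reported in the paper."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def band_powers(eeg, fs=FS):
    """Average spectral power per band and channel, from Welch's PSD estimate."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)
    return {name: psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
            for name, (lo, hi) in BANDS.items()}

# Example with synthetic data standing in for one 60 s trial (32 channels).
eeg = np.random.randn(32, 60 * FS)
powers = band_powers(bandpass(eeg))      # dict of 32-element arrays, one per band
```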

Before the experiment, the researcher suggested that the participants wash their hair in advance to avoid poor contact of the EEG cap electrodes. After the participants put on the device correctly, the contact status of all electrodes in the EnobioNE system was checked and adjusted until a good fit was reached. In addition, a common-mode sensing electrode clamped on the right earlobe was used as a ground reference.

Behavioural data collection module

The behavioural data collection module consists of driving behaviour data collection and video data collection. Driving behaviour data were obtained using a fixed-base driving simulator (Realtime Technologies, Ann Arbor, USA). The simulator consists of a half-cab platform with an automatic transmission and provides a 270° field of view. It is equipped with a rear-view mirror with a simulated projection, allowing the driver to monitor the traffic behind. Furthermore, engine and ambient sounds are emitted through two speakers, and a woofer under the driver's seat simulates the vibration of the vehicle. In addition, the simulator dashboard was an LCD screen (resolution 1920×720, 60 Hz) used to display the speedometer, tachometer and gear position. The driver behaviour, road information and vehicle posture data generated while operating the driving simulator were synchronized and recorded in real time in the background of the main control computer.
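As a simple example of working with the recorded driving behaviour, the sketch below loads a hypothetical per-trial log and summarizes how closely the driver held the instructed 80 km/h target. The file name and column names are assumptions; the actual log schema is defined by the simulator export.

```python
import pandas as pd

# Hypothetical file and column names; the actual schema is defined by the simulator export.
log = pd.read_csv("driving_behaviour_trial.csv")
speed_kmh = log["speed_mps"] * 3.6        # convert m/s to km/h if speed is logged in m/s
target = 80.0                             # instructed cruise speed (km/h)
deviation = (speed_kmh - target).abs()
print(f"mean speed {speed_kmh.mean():.1f} km/h, "
      f"mean absolute deviation {deviation.mean():.1f} km/h")
```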

The video data collection consisted of six high-definition cameras. Five RGB cameras and one infrared camera were used in this experiment to collect the driver's facial expressions, body gestures and road scenario data. The RGB camera we used was the Pro Webcam C920 (Logitech, Newark, USA) with a resolution of 1920×1080 pixels, collecting data at a frame rate of 30 fps. The infrared camera we used was an industrial-grade camera with a resolution of 1080×720 pixels, a lens focal length of 2.9 mm and a distortion-free shooting angle of 90 degrees, also collecting data at 30 fps. The six cameras were arranged in the cockpit of the driving simulator: three RGB cameras were located in front of the participant's face and at 40° on the left and right sides to collect facial expression data, one RGB camera was mounted on the front pillar of the driving simulator to collect the participants' driving posture data, and one RGB camera was placed at the position of the rear-view mirror to collect road scenario information during driving. The infrared camera was placed directly in front of the participant's face and was also used to collect facial expression data. In addition, the cameras also recorded the participants' voices during emotional driving. The LiveView software (EVtech, Changsha, China) was used to record video from the six high-definition cameras simultaneously.
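The sketch below shows one way a user of the dataset might inspect a recorded clip and confirm the 30 fps frame rate using OpenCV. The file name is a placeholder; the recordings themselves were captured with the LiveView software.

```python
import cv2

# Placeholder file name; the recordings were captured with the LiveView software.
cap = cv2.VideoCapture("participant_02_face_front.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)                    # expected to be ~30 fps
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print(f"{fps:.1f} fps, {n_frames} frames, {n_frames / fps:.1f} s")

ok, frame = cap.read()                             # RGB streams are 1920x1080 BGR arrays
if ok:
    print("frame shape:", frame.shape)
cap.release()
```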

Driver’s emotion induction module

A 20-inch simulator central display (resolution 1280×1024, 60 Hz) was used in the experiment to show the video-audio stimulus materials. Stereo Bluetooth speakers (Xiaomi, Shenzhen, China) were used to play the audio, which was set to a relatively high volume. The volume was adjusted before the experiment, and each participant was asked whether it was comfortable and clearly audible. The video-audio stimulus materials selected in Experiment II were used in Experiment III. To ensure that there was no human intervention in the emotion induction of participants during the experiment, the emotion induction system mainly consisted of a master computer, a remote display and a remote Bluetooth audio playback device.

Driving scenarios

In this experiment, two simulated driving scenarios were designed: a formal experimental scenario and a practice scenario. The practice scenario was designed to improve the participants' control of, and familiarity with, the driving simulator before the formal experiment. The practice scenario was an 8 km straight highway section with four bidirectional traffic lanes. The formal experimental scenario was a two-way, two-lane straight section with a total length of 3 km. These two scenarios were chosen to minimize the demands of complex driving conditions on the driver's performance, so that the real multimodal responses elicited by driver emotion could be expressed to the greatest extent39. Participants were asked to drive in the right lane throughout the experiment, keeping the speed at about 80 km/h. The specific configuration parameters of the two experimental scenarios are shown in Table 5. The driving scenarios were built with the SimVista and SimCreator software.

Table 5 Driving scenarios details of Experiment III.

Data synchronization

To collect and store all data synchronously, this experiment used the D-Lab data collection and synchronization platform (Ergoneers, Gewerbering, Germany) to acquire data from multiple channels; the EEG, driving behaviour and video data were recorded synchronously on a common time axis to enable subsequent synchronous analysis. In addition, D-Lab was also used to manage and control the experiment.
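Conceptually, synchronization places every stream on one experiment clock and resamples it to a common time axis. The Python sketch below illustrates this idea with three toy streams (500 Hz EEG, a driving log at an assumed 60 Hz, and 30 fps video); it does not reproduce the D-Lab export format, and the column names are assumptions.

```python
import numpy as np
import pandas as pd

# Toy streams standing in for the exported channels; each carries a timestamp (s)
# on the shared experiment clock.
eeg = pd.DataFrame({"t": np.arange(5000) / 500,        # 500 Hz EEG feature stream
                    "alpha_Fz": np.random.randn(5000)})
drive = pd.DataFrame({"t": np.arange(600) / 60,        # driving log, assumed 60 Hz
                      "speed_kmh": 80 + np.random.randn(600)})
video = pd.DataFrame({"t": np.arange(300) / 30,        # 30 fps camera frames
                      "frame_idx": np.arange(300)})

# Align everything on the video clock: take the most recent sample of each
# faster stream at every frame timestamp.
merged = pd.merge_asof(video, drive, on="t", direction="backward")
merged = pd.merge_asof(merged, eeg, on="t", direction="backward")
```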

Experiment procedure

The whole experiment process is divided into three parts: preparation, emotional driving experiments and post-experiment interviews. The overall process is shown in Fig. 4.

Fig. 4

Experimental procedure and tasks of Experiment III. (A) Experiment preparation, (B) Multimodal human emotion data collection. (C) Post-experiment interview.

Experiment preparation

1. Experiment introduction: after the participants arrive in the waiting room, the purpose, duration and research significance of this experiment will be explained to them. At the same time, the participants will be informed that the data collection apparatus of this experiment is non-invasive and radiation-free and will not have any impact on or harm to their health, and their voluntary participation will be confirmed.

2. Sign the participant informed consent form: participants are instructed to read the "Participant Informed Consent Form", after which the researchers will number the participants and register their basic information.

3. Complete the health form for experiment participants: this checks the participants' health in their daily lives and whether they have taken psychotropic drugs, cold or allergy drugs, or alcohol in the past 12 hours. The researchers will evaluate the participants' condition and decide whether it is suitable for them to participate in the experiment.

4. Wear the testing apparatus: the researchers help the participants put on the EEG cap. After the cap is worn, the researchers will adjust it for comfort and observe whether the electrodes fit and whether the signal collection is normal.

5. Simulator practice driving: the researchers will lead the participants to sit in the cockpit and adjust the seat to a suitable position. Then, the researchers will help the participants adapt to the speed control of the driving simulator and remind them to drive according to the speed signs. During the practice driving, the researchers, seated in the co-pilot position, will explain the formal experiment process, steps and precautions to the participants.

6. Fill in the driving simulator sickness questionnaire: check whether the participant has experienced any physical discomfort during the simulated driving.

Multimodal human emotion data collection

In the formal experiment, participants were asked to complete the driving tasks in seven emotional states (anger, sadness, fear, disgust, surprise, happiness, and neutral), with the order of emotion induction randomly selected. After each experiment, a 3-minute emotional cooling period was set up to allow participants to calm down from the emotions of the previous period.

1. Emotion induction: the researcher loads the preset driving scenario program into the driving simulator and, at the same time, plays a randomly selected video-audio clip to the participant for emotion induction. The participant watches the video-audio stimulus material and tries to maintain the induced emotion while driving.

2. Emotional driving: after finishing the emotion induction material, the participant starts emotional driving in D (Drive) gear, and the experimental platform starts recording data simultaneously. Participants were told to keep the speed at around 80 km/h during the emotional driving phase.

3. Self-reported emotion: after completing each emotional driving session, participants were required to recall their emotional state during the driving scenario by completing the SAM and DES self-assessment questionnaires.

4. Repeat the above steps until the participant has completed all seven emotional driving sessions. After the participant completes the corresponding SAM and DES scales, the researcher records the experiment process.

Post-experiment interview

After completing all the emotional driving experiments, the researcher helps the participants remove the experimental apparatus and then guides them to complete the EPQ questionnaire.
