Understanding the configuration of channels in the 10-20 EEG montage

In a typical 10-20 system there are 21 electrodes placed on the scalp. However, this does not mean there are 21 distinct "channels" or voltage sources.

I've heard that some of these electrodes are "reference" electrodes, while others are "active" electrodes. Sometimes electrodes can be set up in a "bipolar" or "differential" fashion.

  • How many actual channels (that is, distinct input sources) will there be?
  • What's the difference between reference and active channels?
  • How does this bipolar/differential setup work differently than a "normal" setup?


Here is my updated understanding based on @Christiaan's answer below:

time (t) | F2 voltage | F3 voltage | F2 - F3 | F3 - F2
---------|------------|------------|---------|--------
1        | 2          | 1          | 1       | -1
2        | 3          | 6          | -3      | 3
3        | 5          | 3          | 2       | -2

So, if my understanding is correct, then at time t = 1, the voltage of the F2 electrode might be, say, 2 (units; volts, microvolts, whatever), and the voltage of the F3 electrode might be 1. If F2 is the arbitrary active electrode, then the potential difference between F2 and F3 is 2 - 1 = 1. But if F3 were the active electrode (and F2 the reference), then the potential difference would be the inverse (that is, 1 - 2 = -1).
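To make the sign flip concrete, here is a minimal sketch in NumPy using the example voltages from the table above (the array names are mine, not part of the original answer):

```python
import numpy as np

# Example electrode voltages from the table above (arbitrary units)
f2 = np.array([2, 3, 5])
f3 = np.array([1, 6, 3])

# With F2 as the "active" electrode and F3 as the reference:
f2_minus_f3 = f2 - f3      # -> [ 1, -3,  2]

# Swapping the roles only flips the sign (polarity) of the channel:
f3_minus_f2 = f3 - f2      # -> [-1,  3, -2]

assert np.array_equal(f2_minus_f3, -f3_minus_f2)
```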

EEG 10-20 system. source: Wikipedia

  • How many actual channels are there?

21 in the figure, i.e., the number of active electrodes.

  • What's the difference between reference and active channels?

The active electrode is the electrode under investigation. Basically, this is arbitrary. Consider electrodes F2 and F3. When you measure the potential difference between them with F2 as the active electrode and F3 as the reference, the signal will be exactly the same as in the reverse situation; only the polarity (the sign of the voltage) is reversed. That's all there is to it. The active electrode is typically the electrode at the location you wish to record; the reference is elsewhere.

  • What is the difference between a bipolar and a normal (unipolar/monopolar) setup?

In a regular unipolar setup, each electrode is measured either against a distant reference (e.g. the earlobe) or against the aggregate of all the electrodes. Either way, the reference is distant. This means that the signal will be high, but artifacts will be large too, because artifacts picked up at both the active and the reference electrode are merged into the signal. Consider a reference on the neck: neck musculature activity will add artifacts (EMG) to the signal. Likewise, a reference in the chest area will add ECG to your signal.

In a bipolar setup, adjacent electrodes are recorded against each other. This means that large artifacts, such as those from eye blinks, are picked up by both electrodes and, because they enter the difference with opposite polarity, they largely cancel, leaving a much cleaner signal. The other side of the coin is that commonalities in the signal of interest, which grow larger the closer the electrodes are, are subtracted as well, so your signal decreases accordingly.
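A small illustrative sketch (synthetic NumPy signals under assumed values, not from the original answer) of how a shared artifact cancels in a bipolar derivation while part of the common signal of interest is lost too:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                   # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)

artifact = 50 * np.exp(-((t - 0.5) ** 2) / 0.001)   # large "blink" seen by both sites
brain_a = np.sin(2 * np.pi * 10 * t)                 # local activity at site A
brain_b = 0.8 * np.sin(2 * np.pi * 10 * t + 0.3)     # similar activity at nearby site B

site_a = brain_a + artifact + rng.normal(0, 0.1, t.size)
site_b = brain_b + artifact + rng.normal(0, 0.1, t.size)

unipolar = site_a            # measured against a distant, "quiet" reference
bipolar = site_a - site_b    # adjacent-electrode (bipolar) derivation

# The shared artifact dominates the unipolar trace but cancels in the bipolar one;
# note the bipolar trace also loses part of the common 10 Hz activity.
print(np.ptp(unipolar), np.ptp(bipolar))
```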

I have used bipolar setups when measuring eERGs and eCAPs, i.e., electrically evoked activity. The electrical stimuli generated artifacts so large that their amplitudes were approximately the same at the active and reference electrodes. The reduction in artifact far outweighed the loss in signal. It depends on the situation you are in.

An authoritative work in this field is listed below.

- Pivik et al., Psychophysiology (1993); 30: 547-58

How do reference montage and electrodes setup affect the measured scalp EEG potentials?

Objective. Human scalp electroencephalography (EEG) is widely applied in cognitive neuroscience and clinical studies due to its non-invasiveness and ultra-high time resolution. However, how representative the measured EEG potentials are of the underlying neural activity is still a matter of debate. This study aims to investigate systematically how both reference montage and electrode setup affect the accuracy of EEG potentials. Approach. First, the standard EEG potentials are generated by forward calculation with a single dipole in the neural source space, for eleven channel counts (10, 16, 21, 32, 64, 85, 96, 128, 129, 257, 335). Here, the reference is the ideal infinity reference implicitly determined by forward theory. Then, the standard EEG potentials are transformed to recordings with different references, including five mono-polar references (left earlobe, Fz, Pz, Oz, Cz) and three re-references (linked mastoids (LM), average reference (AR), and the reference electrode standardization technique (REST)). Finally, the relative errors between the standard EEG potentials and the transformed ones are evaluated in terms of channel number, scalp region, electrode layout, dipole source position and orientation, as well as sensor noise and head model. Main results. Mono-polar reference recordings usually show large distortions; thus, a re-reference after online mono-polar recording should generally be adopted to mitigate this effect. Among the three re-references, REST is generally superior to AR for all factors compared, and LM performs worst. REST is insensitive to head model perturbation. AR is sensitive to electrode coverage and dipole orientation but shows no close relation with channel number. Significance. These results indicate that REST should be the first choice of re-reference, and AR may be an alternative option for cases with high sensor noise. Our findings may provide helpful suggestions on how to obtain EEG potentials as accurately as possible for cognitive neuroscientists and clinicians.


We will use these event markers as the input to our SSP cleaning method. This technique works well if each artifact is defined precisely and as independently as possible from the other artifacts. This means that we should try to avoid having two different artifacts marked at the same time.

Because the heart beats about once per second, there is a high chance that when the subject blinks there is a heartbeat not far away in the recordings. We cannot remove all the blinks that are contaminated with a heartbeat, because we would have no data left. But we have a lot of heartbeats, so we can do the opposite: remove the "cardiac" markers that occur during a blink.

Minimum delay between events: 250ms

After executing this process, the number of "cardiac" events goes from 465 to 456. The deleted heartbeats were all less than 250ms away from a blink.
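The logic of this pruning step can be sketched in a few lines of plain Python; the event times below are hypothetical, and Brainstorm performs this step through its GUI rather than through code:

```python
# Hypothetical event times in seconds (for illustration only)
cardiac = [0.8, 1.7, 2.6, 3.5, 4.4]
blinks = [1.6, 4.3]

MIN_DELAY = 0.250   # minimum delay between events (250 ms)

# Keep a cardiac marker only if no blink falls within 250 ms of it
cardiac_clean = [c for c in cardiac
                 if all(abs(c - b) >= MIN_DELAY for b in blinks)]

print(len(cardiac), "->", len(cardiac_clean))   # 5 -> 3 in this toy example
```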


Sergio Romero received his Industrial Engineering M.Sc. degree from the Technical University of Catalonia (UPC) in 2000 and is currently pursuing his Ph.D. degree in Biomedical Engineering at UPC. He is an Assistant Professor in the Department of Automatic Control and Systems Engineering (ESAII) at the same university. His current research interest is biomedical signal processing, focused on spectral estimation, time-frequency representation, blind source separation, and adaptive algorithms.

Miguel Angel Mañanas received his Telecommunications Engineering degree and his Ph.D. in Biomedical Engineering from the Technical University of Catalonia (UPC) in 1993 and 1999, respectively. He is currently an Associate Professor and Vice-director for Research at the Department of Automatic Control and Systems Engineering (ESAII) at the same university. He is a member of the Biomedical Engineering Research Center (CREB, UPC) and of the Spanish Committee of the International Federation of Automatic Control (CEA). His active research areas include biomedical signal processing, statistical analysis, modeling, and simulation. His expertise is specifically in spectral estimation, adaptive algorithms, time-frequency representations, the respiratory control system, independent component analysis, and nonlinear techniques applied to EMG, MMG, EEG, and respiratory signals.

Manuel José Barbanoj is the head of the Drug Research Center (CIM) of the Research Institute of Sant Pau Hospital (Barcelona). He is currently an Associate Professor at the Department of Pharmacology and Therapeutics at the Autonomous University of Barcelona (UAB). He has wide experience in carrying out Phase I clinical studies in the psychopharmacological field involving neurophysiological measures such as quantitative pharmaco-EEG, evoked potentials, and polygraphic sleep recordings. Special focus is placed on PK-PD modeling (combining drugs, pharmacokinetics, and pharmacodynamics in order to enlarge pathophysiological knowledge).

Access the recordings

Link the recordings

Prepare the channel file

  • 29 EEG electrodes
  • EOG1, EOG2: Electrooculograms
  • EMG, ECG: Electromyogram and electrocardiogram
  • SP1, SP2: Sphenoidal electrodes
  • RS: Electrode on the right shoulder
  • PHO: Photo stimulation channel
  • DELR, DELL, QR, QL: Not used
  • Note that the EOG, EMG and ECG channels have their type correctly detected.
  • All the other non-EEG channels were set to "EEG_NO_LOC" when we imported the channel locations: SP1, SP2, RS, PHO, DELR, DELL, QR, QL

For this particular study, we can use the channel file as it is configured now; just close the figure and discard any modifications you may have made.

Register electrodes with MRI

Click on the button "Project electrodes on scalp surface", to ensure all the electrodes touch the skin surface. Then click on "OK" and agree to save the modifications.

To see or edit the positions of the electrodes in the MRI Viewer: right-click on the channel file > Display sensors > EEG (MRI Viewer). Select the menu "MIP: Functional" to see all the electrodes. To edit the channel file: right-click > Electrodes > Set electrode position.


Cyton Getting Started Guide

This guide will walk you through setting up your computer to use the Cyton and USB Dongle, using the OpenBCI_GUI Application, and how to get EEG/EMG/EKG from your own body! Please review this guide in its entirety before starting and consult the Cyton Biosensing Tutorial Video for extra guidance. Have fun!

  1. OpenBCI Cyton Board
  2. OpenBCI Dongle
  3. OpenBCI Gold Cup Electrodes and Ten20 Paste
  4. 6V AA battery pack & (x4) AA batteries (batteries not included)
  5. (x4) plastic feet for board stabilization

This tutorial can be followed if you are working with any Cyton board (8-bit, Cyton, or Cyton with Daisy). I'll be working with the 8-bit board.

2. Your OpenBCI USB Dongle

The OpenBCI USB Dongle has an integrated RFDuino that communicates with the RFDuino on the Cyton board. The dongle connects to your computer through its on-board FTDI chip, which exposes it as a serial port. The serial port is called /dev/tty* (if you're using Linux or Mac) or COM* (if you're using Windows). You'll connect to this serial port from the OpenBCI GUI or whatever other software you want to use to interface with your Cyton board.
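If you want to find candidate ports from a script rather than from the GUI, a quick sketch using the third-party pyserial package (not part of the OpenBCI GUI) might look like this:

```python
from serial.tools import list_ports

# List every serial port the OS currently exposes; the dongle typically shows up
# as /dev/ttyUSB* or /dev/tty.usbserial-* on Linux/Mac, or COM* on Windows.
for port in list_ports.comports():
    print(port.device, "-", port.description)
```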

3. OpenBCI Gold Cup Electrodes and Electrode Paste

If you ordered the OpenBCI Gold Cup Electrodes and Ten20 Paste, you should have:

  • 10 passive, gold cup electrodes on a color-coded ribbon cable
  • 3 2oz Jars of Ten20 conductive electrode paste

If you plan to work with your own electrodes, the touch-proof adapter may come in handy:

It will convert any electrode that terminates in the industry-standard touch-proof design to an electrode that can be plugged into any OpenBCI Board!

4. Your 6V AA Battery Pack & 4 AA Batteries

Cyton boards have specific input voltage ranges. These input voltage ranges can be found on the back-side of the board, next to the power supply. BE VERY CAREFUL to not supply your board with voltages above these ranges, or else you will damage your board's power supply. For this reason, we recommend that you always use the battery pack that came with your OpenBCI kit.

Your Cyton kit comes with 4 plastic feet that can be snapped into the holes of your board to provide extra stability while working.

II. Download/Install/Run the OpenBCI GUI

Please follow the step by step guide to install the OpenBCI_GUI as a standalone application. Keep an eye out for specific Cyton requirements such as installing the FTDI VCP driver.

Come back to this guide when your GUI is running!

III. Prepare your OpenBCI Hardware

1. Plug in your OpenBCI USB Dongle

Plug this in (facing upwards!) and you should see a blue LED light up.

Note: make sure your USB Dongle is switched to GPIO 6 and not RESET. The switch should be set closer to your computer as seen in the picture to the right.

2. Plug in your 6V AA battery pack (with batteries)

Cyton boards have specific input voltage ranges. These input voltage ranges can be found on the back-side of the board, next to the power supply. BE VERY CAREFUL to not supply your board with voltages above these ranges, or else you will damage your board's power supply. For this reason, we recommend that you always use the battery pack that came with your OpenBCI kit. There's a good reason we put this notice in here twice!

3. Switch your Cyton board to PC (not OFF or BLE)

Make sure to move the small switch on the right side of the board from "OFF" to "PC". As soon as you do, you should see a blue LED turn on. If you don't, press the reset (RST) button just to the left of the switch. If the LED still does not turn on, make sure you have full battery. If you're sure your batteries are fully charged, consult the hardware section of our Forum.

Note: it's important to plug in your Dongle before you turn on your Cyton board. Sometimes, if the data stream seems broken, you may need to unplug your USB Dongle and power down your Cyton board. Make sure to plug your USB Dongle in first, then power up your board afterwards.

IV. Connect to your Cyton board from the GUI

In order to connect to your Cyton, you must specify the data source to be LIVE (from Cyton) in the first section of the System Control Panel. Before hitting the START SYSTEM button, you need to configure your Cyton board (follow the steps below).

2. Select Serial Transfer Protocol

Next, select Serial (from Dongle). If you want to use the WiFi Shield, please see the WiFi Getting Started Guide.

3. Find your USB Dongle's Serial/COM port

In the first section of the LIVE (from Cyton) sub-panel, find your Dongle's Serial/COM port name. On Mac or Linux, its name will follow the /dev/tty* format; on Windows, it will appear as COM*.

Your USB Dongle's port name will likely be at the top of the list. If you don't see it:

  1. Make sure your dongle is plugged in and switched to GPIO 6 (not RESET)
  2. Click the REFRESH LIST button in the SERIAL/COM PORT section of the sub-panel

If you're still having trouble finding your USB Dongle's port name, refer to the Forum about debugging your hardware connection.

4. Select your channel count (8 or 16)

The CHANNEL COUNT setting is defaulted to 8. If you are working with an OpenBCI Daisy Module and Cyton board (16-channel) system, be sure to click the 16 CHANNELS button before starting your system.

Check Status or Change Radio Channel

There is a Radio Configuration tab that you can use to check the status of your Cyton system and change the radio channel. Click on the > arrow to open up the options panel. Here you will find tools for configuring your Cyton Radio connection. Let's walk through the functions of each button.

Click on the STATUS button to check the status of your Cyton system. This may take a few seconds to report, as it reaches out to your Dongle and Cyton board to verify that they are talking to each other. If they are, you will see the message Success: System is Up. If not, you will see Failure: System is Down.

Click the GET CHANNEL button to find out which channel your Cyton system is communicating on. If the system is up, you will get the message Success: Host and Device on Channel number: X. If the system is down, you will get the message Failure: Host on Channel number: X.
NOTE the Host radio is on the Dongle, and the Device radio is on the Cyton board.

Click on the CHANGE CHANNEL button to change the channel that your Cyton system is communicating on. This can be really useful if you have multiple Cyton systems in the same space. When you click the button, a menu will open up with the channels. When you click on the channel you want, it will take just a second, and you should get the message Success: Host and Device on Channel number: X .
IMPORTANT Make sure that there are no other Cytons active in the neighborhood when you change the channel!

Click on the OVERRIDE DONGLE button to change the channel of the OpenBCI Dongle only. When you click the button, a menu will open up with the channels. For the purpose of this Tutorial, go ahead and change the Dongle channel to Channel 15 . When you click on the channel number, it will take just a second, and you should get the message Success: Host override - Channel number: 15

Since you have just changed the channel of the Dongle only, when you click on the STATUS button, you will get a failure message. Similarly, when you press the GET CHANNEL button, you will also get a failure message. But don't worry! We can use the Autoscan function to get your Cyton board and Dongle back on the same track!

Now, click the AUTOSCAN button. It may take a few seconds for the Dongle to scan through every channel until it connects to your Cyton, but it will, and you will get the message Success: System is Up Autoscan!

Edit the Playback file name

In the DATA LOG FILE section of the LIVE (from Cyton) sub-panel you can specify the name of your playback file. This file name defaults to:

Documents/OpenBCI_GUI/OpenBCI-RAW- + date/time

You can edit the name of this file by clicking in the "File Name" text field.

Playback files and user data are stored in /Documents/OpenBCI_GUI/ on all OS. OpenBCI Playback Files use CSV formatting and plain text.

After creating a Playback file, it can be replayed by running Playback File data source mode. As a result, you can easily share recorded OpenBCI Playback files with your friends and colleagues.
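If you later want to analyze a recording outside the GUI, a rough sketch for loading a playback file with pandas is shown below; the file name is hypothetical, and it assumes the file begins with '%'-prefixed header comments followed by comma-separated samples, so check your own file's header:

```python
import pandas as pd

# Hypothetical file name; adjust to one of your own recordings
path = "Documents/OpenBCI_GUI/OpenBCI-RAW-example.txt"

# Assumes '%'-prefixed header comments followed by comma-separated samples
df = pd.read_csv(path, comment="%")
print(df.shape)
print(df.columns.tolist())
```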

If you want to log data to a MicroSD inserted into the Cyton Board, in the WRITE TO SD (Y/N)? sub-panel section you can select the maximum recording time of the file. This setting is defaulted to "Do not write to SD…" and will automatically switch to this if you do not have a MicroSD card properly inserted into your Cyton board.

Note: be sure to select a file size that is larger than your planned recording time. The Cyton writes to the local SD in a way that enables us to write lots of data very quickly. As a result, however, we must specify how large the file will be before we begin. The technique is known as block writing.

Now you're ready to start the system! Press the START SYSTEM button and wait for the OpenBCI GUI to establish a connection with your Cyton board. This usually takes a few seconds.

During this time, the help line at the bottom of the OpenBCI GUI should be blinking the words: "Attempting to establish a connection with your OpenBCI Board. "


If the initialization fails, try the following steps in order:

  1. Make sure you've selected the correct serial/COM port.
  2. Power down your Cyton board and unplug your USB Dongle. Then plug your USB Dongle back in and power up your Cyton board, in that order. Then try restarting the system by pressing the START SYSTEM button again.
  3. If this does not work, try relaunching the OpenBCI GUI application and redo step 2 above. Then reconfigure the SYSTEM CONTROL PANEL settings, and retry START SYSTEM.
  4. Make sure that your batteries are fully charged and then retry the steps above.
  5. If the channel number is not being displayed, select "AUTOSCAN" from the RADIO CONFIGURATION settings.
  6. If you are still having troubles connecting to your Cyton board, refer to the Forum for extra troubleshooting advice.

Now that the OpenBCI_GUI is connected to your Cyton you may press Start Data Stream in the upper left hand corner.

You should see data streaming into the GUI, try running your fingers along the electrode pins at the top of your board.

You should see the 8 (or 16 if you're using a Daisy module) channels on the Time Series widget behave chaotically in response to you touching the pins and all the traces of the FFT graph on the upper right should instantly shift upwards.

If this is the case, congratulations, you are now connected to your Cyton board. It's time to see some brain waves!

V. Connect yourself to OpenBCI

In this quick demo, we'll be showing you how to set up 3 channels of electrophysiological data that reveal your heart activity (EKG or ECG), muscle activity (EMG), and brain activity (EEG)!

For more information on these three signals, refer to wikipedia:

  • Ten20 conductive electrode paste (or other conductive electrode gel)
  • Your Cyton board, USB Dongle, battery pack, and x4 AA batteries
  • x6 gold cup electrodes (from your OpenBCI electrode starter kit or other). If you are using an OpenBCI electrode starter kit, use the following electrodes so as to be consistent with the GUI's color-coding protocol:
    1. Black
    2. White
    3. Purple
    4. Green
    5. Blue
    6. Red
  • Paper towels for cleaning excess Ten20 paste
  • Medical tape (or other tape) for adding extra stability to electrodes
  • Ear swabs for cleaning paste from electrodes, once you're finished

2. Connect your electrodes to OpenBCI

Connect the electrode wires to your Cyton board as shown below. The proper wire connections are shown in table form as well.

Electrode Wire Color | Cyton Board Pin
---------------------|----------------------
white                | SRB2 (bottom SRB pin)
black                | BIAS (bottom BIAS pin)
purple               | 2N (bottom N2P pin)
green                | 4N (bottom N4P pin)
blue                 | 4P (top N4P pin)
red                  | 7N (bottom N7P pin)

The white and black electrodes must always connect to the SRB2 pin and the bottom BIAS pin. Also, the green and blue wires must be connected to two pins of the same channel (like 4N and 4P). But the purple, red, and green/blue wires can be connected to any of the N1P through N8P channels. We decided to use channels 2, 4, and 7 for this tutorial.

How Cyton Board Pins are Connected (Optional)

Below is a perspective view of the electrode inputs that we are working with in this tutorial:

The bottom pins are (N) inputs, and the top pins are (P) inputs. The default board settings look at all N channels in reference to SRB2 (the bottom SRB pin). SRB1 (the top SRB pin) can also be used as a reference, but when it is activated, it is activated for ALL channels. If using SRB1 as the reference electrode, P inputs (the top pin inputs) must be used as the other input of the potential difference measurement. On the contrary, individual channels can be removed from SRB2. If a channel is removed from SRB2, it can be examined as a unique voltage potential, between the N and P pins of that channel. We will be doing this for the heart measurement in this tutorial, while examining 2 EEG channels in reference to SRB2, using the channel 2 and 7 N pins. For more information on this, refer to page 16 of the ADS1299 datasheet. The ADS1299 chip is the analog front-end at the core of the Cyton board.

3. Connect your electrodes to your head and body

a) We're going to start with the electrodes on your head. Begin by scooping Ten20 electrode paste into your white gold cup electrode. This is going to be your reference (or SRB2) electrode for the other electrodes on your head. Fill the electrode so there is a little extra electrode paste spilling over the top of the gold cup, as seen in the picture to the right.

Note: Use a paper towel or napkin to remove excess electrode paste as you are applying your electrodes.

b) Now apply this electrode to either one of your earlobes (either A1 or A2 as seen on the 10-20 system image below). You can use some medical tape (or electric tape!) to give this electrode some extra stability, ensuring that it does not fall off. This electrode is the reference that all of the EEG electrodes on your head will be measured in comparison to. The uV reading that will appear in the GUI's EEG DATA montage is a measure of the potential difference between each electrode and this reference electrode (SRB2). SRB1 (the top SRB pin) can also be used as a reference pin, but we won't discuss that here. Check out the other docs on how to maximize the usage of the other pins!

c) Follow the same procedure for the purple electrode and apply it to your forehead 1 inch above your left eyebrow (as if you were looking at yourself) and an inch to the left of your forehead's centerline.

This electrode location is Fp2 on the 10-20 System, the international standard for electrode placement in EEG. Fp indicates a "frontal polar" site.

d) Now follow the same procedure for the red electrode and place it on the back of your head, 1 inch above the inion (as seen on the 10-20 system) and 1 inch to the left. This electrode location is O1 on the 10-20 system. The 'O' stands for occipital, meaning it sits above your occipital lobe (or visual cortex).

Note: to do this, pull your hair aside and make sure the electrode is nested as deeply as possible, with the electrode paste making a definitive conductive connection between your scalp and the gold cup.

e) Now follow the same procedure as step 2 above to apply the black electrode to your other earlobe (either A1 or A2 from the 10-20 system). The black electrode is connected to the BIAS pin, which is used for noise cancelling. It is similar to a GROUND pin, which establishes a common ground between the Cyton board and your body, but it has some extra destructive interference noise cancelling techniques built in!

You're now done connecting electrodes to your noggin! I like to use a cheap cotton hairband to add extra stability to all of the electrodes connected to my head, by placing it gently on top of all of the electrodes.

f) Now connect the green electrode to your right forearm, somewhere on top of a muscle that you can flex easily. With this electrode we will be looking at both heart activity and muscle activity. I also like to use tape to hold this electrode in place. That's going to hurt a little bit to take off. Hopefully your arms aren't as hairy as mine.

g) Finally, connect the blue electrode to your wrist on the arm opposite the green electrode. The blue electrode will serve as the reference for the green electrode. As you may have noticed, the blue electrode is on the pin directly above the green electrode's pin. We will be removing channel 4 from SRB2 so that it is not included in the same reference signal being used to measure brain waves. The main reason for this is that the microvolt (uV) values produced by your heart and muscles are much stronger than the signals we can detect from your brain, so we don't want these signals to interfere. I'll go into more detail about this later on, when it comes time to adjust the channel settings in the GUI.

4. Launch the GUI and adjust your channel settings

a) If your OpenBCI GUI is not already running, relaunch it and configure the DATA SOURCE mode to LIVE (from Cyton) and Serial (from Dongle). Select your Cyton board from the list of devices, set the Channel Count to 8, and click START SYSTEM. Refer to section IV of this guide for more information on this process.

If you're using the Daisy Cyton board, still set the Channel Count to 8, even though the Daisy has 16 channels. Nothing will go wrong if you start the system with 16 channels, except the Time Series display will be unnecessarily cluttered.

b) Click START DATA STREAM to begin streaming data from your board. You should see live data from your body (and the unattached channels) streaming into the Time Series montage on the left side of the GUI.

c) Now we are going to power down the channels we aren't using. Do this by clicking the circular channel number buttons outside of the left side of the Time Series montage. Each time you power down a channel, the channel will show a burst of signal and then settle at 0 mV.

We are only using channels 2, 4, and 7, so power down every other channel. You can also power down the channels with keyboard shortcuts (1-8). Power them back up with [SHIFT] + 1-8. If you are working with a daisy module, channels 9-16 can be powered down with q, w, e, r, t, y, u, i, respectively. You can power those channels back up with [SHIFT] + the same key.

Don't bother with the ohm symbols to the right of the numbered buttons; they are used for impedance measurement, but we won't go into that now.

e) Now it's time to optimize your Cyton board's channel settings for this setup. Click the Hardware Settings button above the data oscilloscope display and an array of buttons should appear in place of the Time Series montage:

These buttons indicate the current settings of the ADS1299 registers on your Cyton board. For more information on these settings, refer to pages 39-47 of the ADS1299 datasheet.

We have simplified the interface through the OpenBCI firmware and OpenBCI GUI to allow easy, real-time interaction with these registers. For more information on this, please refer to our doc page regarding the ADS1299 interface.

By deactivating channels 1, 3, 5, 6, and 8, those channels were automatically removed from the BIAS and SRB2, so as not to interfere with the signal. The only thing left to do is update channel 4, the input we are using for EMG and EKG. Begin by clicking the PGA Gain button for channel 4 until it is set to x8. Then remove it from the BIAS and SRB2. The reason we do this is because the uV values for EMG and EKG are much bigger (and easier to pick up) than the EEG signals on channels 2 and 7. As a result, we want to prevent channel 4 from influencing the common mode noise rejection of the BIAS, as well as remove it from the EEG reference channel (SRB2).

f) After updating these settings, click the Time Series button again, and your Time Series montage should now appear similar to the image below:

Notice that you no longer see the heart beat artifacts in channels 2 and 7. Additionally, the heart beat signal in channel 4 should be more steady, looking more like a typical EKG signal.

So there's a good chance your current setup isn't showing clean data like the screenshots above. There are a number of possible reasons for this. We'll go through troubleshooting them here.

Get rid of 60 Hz noise (or 50 Hz if you're in Europe or any other country that operates on a 50 Hz power grid). The OpenBCI GUI has a built-in notch filter that does a decent job of eliminating 60 Hz noise. You can switch the notch filter to 50 Hz by clicking the "Notch 60 Hz" button. Additionally, if your Cyton board is on a table with any power cords or devices that are plugged into a wall outlet, move it to a location away from any electronic devices plugged into the wall. This will drastically reduce the alternating current (AC) interference in your signal.
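Independently of the GUI's built-in filter, if you process exported data yourself, a notch filter of this kind can be sketched with SciPy as follows (the sampling rate and quality factor are assumptions; adjust to your setup):

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 250.0     # Cyton sampling rate (Hz)
f0 = 60.0      # mains frequency to remove (use 50.0 on a 50 Hz grid)
Q = 30.0       # quality factor; higher = narrower notch

b, a = iirnotch(f0, Q, fs=fs)

# x: one channel of samples; here just a synthetic 10 Hz signal plus 60 Hz hum
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
x_clean = filtfilt(b, a, x)
```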

Stabilize your electrodes

Make sure your electrode cables are steady. If you shake the electrodes that are dangling from your head/body, you'll notice that it severely affects the signals. This movement noise is something that could be greatly improved with "active" electrodes, but when using the "passive" electrodes that come with the OpenBCI electrode starter kit, you have to be very careful to remain steady while using the system, in order to produce the best signal. Sometimes, I'll bind all of the electrode cables together with a piece of electric tape to secure them and minimize cable movement. If you do this, don't worry about including the blue and green electrodes in the bundle, since movement noise doesn't affect the EMG/EKG signal as significantly.

Ensure that your electrodes are securely connected

Ensure that your electrodes are connected securely (especially your reference)!

Make sure your OpenBCI hardware is streaming data properly

Every so often, an error will occur with the wireless communication between your OpenBCI Dongle and board. If you've followed all of the steps above, and the data that you are seeing in the GUI interface is still illegible, try the following:

Power down your Cyton board and unplug your USB Dongle. Then plug your USB Dongle back in and power up your Cyton board, in that order. Then try restarting the system by pressing the START SYSTEM button again.

If you're still having issues, refer to the Forum for further troubleshooting techniques.

VI. Check out your body's electrical signals!

Congratulations! If you've made it this far, it's finally time to check out your body's electrophysiological signals!

1. Check out your heart activity (EKG)

Channel 4 in the GUI should now be showing a nice steady succession of uV spikes. This is your heart beating! Try taking slow, deep breaths and watch how this influences your heart rate. If you look carefully, you may notice your heart beat more rapidly as you're inhaling and more slowly as you're exhaling.

For more information on how to analyze an electrocardiography (EKG) signal, or on how to set up a full EKG (with 10 electrodes), check out the wikipedia page on EKG. The image to the right (pulled from the Wikipedia page) shows the various segments of a single heart beat.

2. Watch your muscles flex (EMG)

Now, try flexing your forearm or whatever muscle you placed the green electrode on top of. You should see a high-amplitude, high-frequency signal introduced into channel 4. This is the electric potential created by you activating your muscle!

If you relax your muscle again, you should see the channel 4 signal return to your heart beat (just EKG). The picture on the right shows this transition. When you're flexing your muscle, the electrode is picking up EMG and EKG at the same time. After you relax your muscle, the high-frequency signal disappears, and you're able to see just EKG.

3. Eye blinks and jaw clenching (more EMG)

Now blink your eyes a few times. Each time you blink you should see a strong spike on the EEG DATA montage. It should be most visible in channel 2, the channel for the electrode directly above your eye! This uV spike is a result of the muscles in your forehead that make your eyes blink.

Now try clenching your jaw. You should see a big uV spike in both channels 2 and 7. Each time you clench your jaw, you are introducing a strong EMG artifact into any electrodes on your scalp. If you put your fingers on the side of your head (above your ear) and clench your teeth, you should be able to feel the muscles in your head flexing.

In the photo above, you can see what these signals look like: the green highlighted region shows a single eye blink. The two blue sections show an extended period of jaw clenching.

It's interesting to note that these signals are not picked up in channel 4. This is because channel 4 is only looking at the potential difference across your body—from your right forearm to your left wrist. As a result the EMG/EEG artifacts being produced on your head (in reference to SRB2) are not visible in this channel.

Now, for what we've all been waiting for: let's check out some brain waves!

Firstly, deactivate channel 4 so that you are only looking at the EEG channels (2 and 7).

It's best to do this portion of the tutorial with a friend. You'll understand why in a second. It just so happens that the easiest way to consciously produce brain waves is by closing your eyes. When you do this, your occipital lobe (the part of your brain responsible for processing visual information) enters into an alpha wave state at a frequency between 7.5-12.5 Hz. Alpha brain waves are the strongest EEG brain signal! Historically, they are thought to represent the activity of the visual cortex in an idle state. An alpha-like variant called mu (μ) can be found over the motor cortex (central scalp) that is reduced with movement, or the intention to move [Wikipedia].

For more information on Alpha waves check out Wikipedia and Chip's EEG Hacker blog post about detecting alpha waves with OpenBCI V3.

Once you've closed your eyes, have your friend press the 'm' key on your keyboard to take screenshots. Tell him or her to wait until a strong alpha spike emerges on the Fast Fourier Transform (FFT) graph, the graph in the lower right of the GUI. The spike should be somewhere between 7.5 and 12.5 on the x-axis of the FFT graph, indicating that there is a strong presence of waves in that frequency range.

After you've taken a few good screenshots, open up the .JPGs and take a look. Note: the screenshots are located in the root directory of your application, or in the OpenBCI_GUI directory if you are working from Processing.

You'll notice that the strongest alpha wave signals should appear in channel 7, the O2 ('O' standing for occipital) electrode on the back of your head. Count the number of waves in a single 1-second time period on channel 7 of the EEG DATA montage. The number of waves should correspond to the x-axis position of the spike on the FFT graph. If you've identified your alpha waves, congratulations! You've now seen your first brain waves with OpenBCI!
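If you would rather check for that alpha peak programmatically on exported data, a rough sketch using SciPy's Welch estimator is shown below; the signal here is synthetic, and the band limits follow the 7.5-12.5 Hz range mentioned above:

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # Cyton sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Synthetic "eyes closed" channel: 10 Hz alpha plus noise
x = 20 * np.sin(2 * np.pi * 10 * t) + np.random.normal(0, 5, t.size)

freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))

alpha = (freqs >= 7.5) & (freqs <= 12.5)
peak_freq = freqs[alpha][np.argmax(psd[alpha])]
print(f"Alpha peak at ~{peak_freq:.1f} Hz")
```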

For more ideas on what to do next, check out the OpenBCI Community Page and the other OpenBCI Docs pages.

Also, if you have a great follow-up tutorial to this getting started guide, or something else you want to share, feel free to create your own by following the format we have in the Docs repo of our GitHub. It's really easy to create your own Docs page with a Markdown editor like Mou or MacDown. If you do so, send us a pull request on GitHub and we'll add your tutorial to the Docs!

H.V. contributed to the conception and design of the study. H.V., D.C., B.S.L., C.S., V.P., L.M., O.P., and S.M. contributed to the acquisition and analysis of the data. H.V., D.C., B.L., and P.-Y.F. contributed to drafting the text and preparing the figure.

H.V., B.S.L., D.C., S.M., and P.-Y.F. are employees of BioSerenity, the company that performed the EEG recordings and collected the EEG recording data at the Paris hospital units. There are no other conflicts of interest to report.

Supplementary Table S1: Clinical Profile of Patients with No Evidence of EEG Periodic Discharges


EEG-Based Emotion Recognition: A State-of-the-Art Review of Current Trends and Opportunities

Emotions are fundamental for human beings and play an important role in human cognition. Emotion is commonly associated with logical decision making, perception, human interaction, and, to a certain extent, human intelligence itself. With the growing interest of the research community in establishing meaningful "emotional" interactions between humans and computers, reliable and deployable solutions for identifying human emotional states are required. Recent developments in using electroencephalography (EEG) for emotion recognition have garnered strong interest from the research community, as the latest developments in consumer-grade wearable EEG solutions can provide a cheap, portable, and simple means of identifying emotions. Since the last comprehensive review covered the years 2009 to 2016, this paper updates on the progress of emotion recognition using EEG signals from 2016 to 2019. This state-of-the-art review focuses on emotion stimulus type and presentation approach, study size, EEG hardware, machine learning classifiers, and classification approach. From this review, we suggest several future research opportunities, including a different approach to presenting the stimuli in the form of virtual reality (VR). To this end, an additional section devoted specifically to reviewing only VR studies within this research domain is presented as the motivation for this proposed new approach using VR as the stimulus presentation device. This review paper is intended to be useful for the research community working on emotion recognition using EEG signals as well as for those who are venturing into this field of research.

1. Introduction

Although human emotional experience plays a central part in our daily lives, our scientific knowledge of human emotions is still very limited. Progress in the affective sciences is crucial for the development of human psychology and for the benefit of society. When machines are integrated into a system to help recognize these emotions, productivity could improve and expenditure could be reduced in many ways [1]. For example, in education, a student's mental state could be observed to detect whether the teaching material is engaging or not. Medical doctors would be able to assess their patients' mental conditions and provide better constructive feedback to improve their health. The military would be able to train their trainees in simulated environments with the ability to assess the trainees' mental condition in combat situations.

A person's emotional state may become apparent through subjective experiences and internal and external expressions. Self-evaluation reports such as the Self-Assessment Manikin (SAM) [2] are commonly used for evaluating a person's mental state by measuring three independent and bipolar dimensions [3], presented visually to the person as images reflecting pleasure-displeasure, degree of arousal, and dominance-submissiveness. This method provides an alternative to the sometimes more difficult psychological evaluation of a patient performed by a medical professional, which requires thorough training and experience to understand the patient's mental health condition. However, the validity and corroboration of the information a patient provides through the SAM report are unreliable, given that many people have difficulty expressing themselves honestly or lack insight into their own mental state. SAM is also not feasible for young children or the elderly due to limitations in literacy skills [4]. The physiological signals transported throughout the human body can therefore provide health information directly from patients to medical professionals and allow their conditions to be evaluated almost immediately. The human brain produces an immense volume of neural signals that manage all functions of the body, and it stores the emotional experiences gathered throughout a lifetime. By tapping directly into the brainwave signals, we can examine the emotional responses of a person when exposed to certain environments. This information can help establish whether a person is well or may be suffering from mental illness.

The architectural design and cost of EEG headsets differ widely. The type of electrodes used to collect the brainwave signals affects the signal quality as well as the setup time [5–7]. Headsets also differ in the number of electrodes placed across the human scalp, and their resolution depends on build quality and technological accessibility [8–10]. Due to the sensitivity of the electrodes, users are often required to remain very still once the brainwave collection procedure has started; small body or head movements may accidentally detach electrodes from the scalp, requiring them to be reattached, which wastes time and materials. Hair strands at the electrode sites have to be moved aside to obtain a proper connection for the brainwave signals, so people with large hair volumes face difficulty as the hair needs to be shifted or removed. Artefacts are noise produced by muscle movements such as eye blinking, jaw clenching, and muscle twitches, which are picked up by the electrodes [11–14]. Furthermore, external interference such as audio noise or the sense of touch may also introduce artefacts into the brainwave signals during collection, and these artefacts need to be removed with filtering algorithms [15–20]. Finally, the brainwave signals need to be transformed from the time domain to the frequency domain using the fast Fourier transform (FFT) [21] to assess and evaluate the specific brainwave bands for emotion recognition with machine learning algorithms.
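As a concrete illustration of that last step, a common approach, sketched here with SciPy using conventional band limits that are assumptions rather than values from this paper, is to estimate power in the standard EEG bands from the frequency-domain representation:

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(x, fs):
    """Average spectral power per conventional EEG band for one channel."""
    freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Synthetic one-channel example at an assumed 128 Hz sampling rate
fs = 128.0
t = np.arange(0, 8, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
print(band_powers(x, fs))
```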

Since the last comprehensive review for emotion recognition was published by Alarcao and Fonseca [22], this review paper will serve as an update on the previously reviewed paper. The paper is organized as follows: Section 2 includes the methodology of reviewing this paper by using specific keywords search. Section 3 will cover the definition of what emotion is, EEG, brainwave bands, general positions of EEG electrodes, comparison between clinical and low-cost wearable EEG headset, emotions in the brain, and virtual reality (VR). Section 4 will review past studies of emotion classification by comparing the types of stimulus, emotion classes, dataset availability, common EEG headset used for emotion recognition, common algorithms and performances of machine learning in emotion recognition, and participants involved. Section 5 provides discussion, and finally, Section 6 concludes the study.

2. Methodology

The approach adopted in this state-of-the-art review first performs queries on the three most commonly accessed scholarly search engines and databases, namely Google Scholar, IEEE Xplore, and ScienceDirect, to collect papers for the review using the keywords "Electroencephalography" or "EEG" + "Emotion" + "Recognition" or "Classification" or "Detection", with the publication year ranging only from 2016 to 2019. The papers resulting from this search were then carefully vetted and reviewed so that works that were similar and incremental from the same author were removed, leaving only distinctly significant novel contributions to EEG-based emotion recognition.

2.1. State of the Art

In the following paragraphs, the paper will introduce the definitions and representations of emotions as well as some characteristics of the EEG signals to give some background context for the reader to understand the field of EEG-based emotion recognition.

3. Emotions

Affective neuroscience aims to elucidate the neural networks underlying emotional processes and their consequences on physiology, cognition, and behavior [23–25]. The field has historically centered on defining the universal human emotions and their somatic markers [26], clarifying the cause of the emotional process, and determining the role of the body and interoception in feelings and emotions [27]. In affective neuroscience, the concept of emotion can be differentiated from related constructs such as feelings, moods, and affects. Feelings can be viewed as a personal experience associated with an emotion. Moods are diffuse affective states that generally last longer than emotions and are less intense. Lastly, affect is an encompassing term that covers emotions, feelings, and moods altogether [22].

Emotions play an adaptive, social, or motivational role in the life of human beings, as they produce different characteristics indicative of human behavior [28]. Emotions affect decision making, perception, human interaction, and human intelligence. They also affect the physiological and psychological state of humans [29]. Emotions can be expressed through positive and negative representations, and they can affect human health as well as work efficiency [30].

Three components influence the psychological behavior of a human: personal experience, physiological response, and behavioral or expressive response [31, 32]. Emotions can be described as discrete and consistent responses to events of significance for the organism [33], which are brief in duration and correspond to a coordinated set of responses.

To better grasp the kinds of emotions that are expressed daily, emotions can be viewed from a categorical perspective or a dimensional perspective. The categorical perspective revolves around the idea of basic emotions that are imprinted in our human physiology. Ekman [34] states that basic emotions have certain characteristics: (1) humans are born with emotions that are not learned; (2) humans exhibit the same emotions in the same situation; (3) humans express these emotions in a similar way; and (4) humans show similar physiological patterns when expressing the same emotions. Through these characteristics, Ekman summarized six basic emotions: happiness, sadness, anger, fear, surprise, and disgust, and he viewed the remaining emotions as byproducts of reactions to and combinations of the basic emotions. Plutchik [35] proposes that there are eight basic emotions, described in a wheel model: joy, trust, fear, surprise, sadness, disgust, anger, and anticipation. Izard (Izard, 2007; Izard, 2009) describes that (1) basic emotions were formed in the course of human evolution and (2) each basic emotion corresponds to a simple brain circuit with no complex cognitive component involved. He then proposed ten basic emotions: interest, joy, surprise, sadness, fear, shyness, guilt, anger, disgust, and contempt. On the other hand, from the dimensional perspective, emotions are mapped onto valence, arousal, and dominance. Valence is measured from positive to negative feelings, arousal is measured from high to low, and similarly, dominance is measured from high to low [38, 39].

Understanding emotional signals in everyday life environments is an important aspect of people's communication through verbal and nonverbal behavior [40]. One example of such emotional signals is facial expression, which is known to be one of the most immediate means by which human beings communicate their emotions and intentions [41]. With the advancement of brain-computer interface and neuroimaging technologies, it is now feasible to capture brainwave signals nonintrusively and to measure or control devices virtually [42] or physically, such as wheelchairs [43], mobile phone interfaces [44], or prosthetic arms [45, 46], with a wearable EEG headset. Currently, artificial intelligence and machine learning are being actively developed and researched for newer applications. Such applications include the field of neuroinformatics, which studies emotion classification by collecting brainwave signals and classifying them using machine learning algorithms. This would help improve human-computer interactions to meet human needs [47].

3.1. The Importance of EEG for Use in Emotion Classification

EEG is considered a physiological cue that reflects the electrical activity of neural cell clusters across the human cerebral cortex. EEG is used to record such activity and is reliable for emotion recognition due to its relatively objective evaluation of emotion compared to nonphysiological cues (facial expression, gesture, etc.) [48, 49]. Studies describe that EEG contains comprehensive features, such as the power spectral bands, that can be utilized for basic emotion classification [50]. There are three structures in the limbic system, shown in Figure 1, that are heavily implicated in emotion and memory: the hypothalamus, amygdala, and hippocampus. The hypothalamus handles the emotional reaction, while the amygdala handles external stimuli and processes the emotional information arising from the recognition of situations as well as the analysis of potential threats. Studies have suggested that the amygdala is the biological basis of emotions that store fear and anxiety [51–53]. Finally, the hippocampus integrates emotional experience with cognition.

3.2. Electrode Positions for EEG

To be able to replicate and record the EEG readings, there is a standardized procedure for the placement of electrodes across the skull, and these placements usually conform to the 10–20 international system [54, 55]. The "10" and "20" refer to the distances between adjacent electrodes being either 10% or 20% of the total front-to-back or right-to-left distance of the skull. Additional electrodes can be placed at any of the existing empty locations. Figure 2 shows the electrode positions placed according to the 10–20 international system.

Depending on the architectural design of the EEG headset, the positions of the EEG electrodes may differ slightly from the standard 10–20 international system. However, these low-cost EEG headsets will usually have electrodes positioned at the frontal lobe, as can be seen in Figures 3 and 4. EEG headsets with a higher number of channels then add electrodes over the temporal, parietal, and occipital lobes, such as the 14-channel Emotiv EPOC+ and the Ultracortex Mark IV. Both of these EEG headsets have wireless data transmission and therefore no lengthy wires dangling around the body, which makes them portable and easy to set up. Furthermore, companies such as OpenBCI provide 3D-printable designs and hardware configurations for their EEG headsets, which allows unlimited customization of the headset configuration.

3.3. Clinical-Grade EEG Headset vs. Wearable Low-Cost EEG Headset

Previously, invasive electrodes were used to record brain signals by penetrating through the skin and into the brain, but technological improvements have made it possible to record the electrical activity of the brain using noninvasive electrodes placed along the scalp. EEG devices focus on event-related (stimulus-onset) potentials or on the spectral content (neural oscillations) of EEG. They can be used to diagnose epilepsy, sleep disorders, encephalopathies (brain damage or malfunction), and other brain disorders such as brain death, stroke, or brain tumors. EEG diagnostics can help doctors identify medical conditions and appropriate injury treatments to mitigate long-term effects.

EEG has advantages over other techniques: it is easy to deploy for immediate medical care in high-traffic hospitals and has lower hardware costs compared to magnetoencephalography. In addition, EEG does not aggravate claustrophobia, can be used with patients who cannot respond or cannot make a motor response or attend to a stimulus, and can elucidate stages of processing rather than just final end results.

Medical-grade EEG devices have channels ranging between 16 and 32 per headset, or more depending on the manufacturer [58], and they have amplifier modules connected to the electrodes to amplify the brainwave signals, as can be seen in Figure 5. The EEG devices used in clinics help to diagnose and characterize symptoms obtained from the patient, and these data are then interpreted by a registered medical officer for medical interventions [60, 61]. In a study conducted by Obeid and Picone [62], clinical EEG data stored in secure archives were collected and made publicly available, which also helps establish best practice for the curation and publication of clinical signal data. Table 1 shows the current EEG market and the pricing of products available for purchase. However, the cost of EEG headsets in the middle price range is not disclosed, most likely due to the sensitivity of the market price, or because clients are required to order according to their own specifications, unlike the low-cost EEG headsets, whose costs are disclosed.

A low-cost, consumer-grade wearable EEG device typically has between 2 and 14 channels [58]. As seen in Figure 6, the ease of setup when wearing a low-cost, consumer-grade wearable EEG headset provides comfort and reduces the complexity of setting up the device on the user's scalp, which is important for both researchers and users [63]. Even with the lower performance of wearable low-cost EEG devices, they are much more affordable than standard clinical-grade EEG amplifiers [64]. Interestingly, a supposedly lower-performance EEG headset with fewer electrodes can outperform a medical-grade EEG system [65]. Lower-cost wearable EEG systems can also detect artefacts such as eye blinking, jaw clenches, muscle movements, and power-line noise, which can be filtered out during preprocessing [66]. Wireless portable EEG headsets can also capture brain activity reflecting imagined directional inputs or hand movements from a user, and in some comparisons they have been shown to perform better than medical-grade EEG headsets [67–70].

3.4. Emotions in the Brain

In recent developments, a large number of neurophysiological studies have reported correlations between EEG signals and emotions. The two main areas of the brain correlated with emotional activity are the amygdala and the frontal lobe. Studies showed that the frontal region exhibits more emotion-related activation than other regions of the brain such as the temporal, parietal, and occipital regions [71].

In a study using music video excerpts, it was observed that higher-frequency bands such as gamma were detected more prominently when subjects were listening to unfamiliar songs [72]. Other studies have observed that higher-frequency bands such as alpha, beta, and gamma are more effective for classifying emotions in both the valence and arousal dimensions [71, 73] (Table 2).
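To make the idea of band-based features concrete, the sketch below summarizes a single EEG channel as its average power in the conventional delta-to-gamma bands using Welch's method; the band edges and sampling rate are common but assumed values, and such per-channel band powers are the kind of feature vector that the classifiers discussed later consume.

import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}      # assumed band edges in Hz

def band_powers(signal: np.ndarray, fs: float = 128.0) -> dict:
    # Average power spectral density within each band for one channel.
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in BANDS.items()}

# Usage: band_powers(epoch[channel_index], fs) -> {'delta': ..., ..., 'gamma': ...}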

Previous studies have suggested that men and women process emotional stimuli differently: men appear to evaluate current emotional experiences by recalling past emotional experiences, whereas women seem to engage directly with the present, immediate stimuli when evaluating current emotional experiences [74]. There is also some evidence that women share more similar EEG patterns with one another when emotions are evoked, while men show larger individual differences in their EEG patterns [75].

In summary, the frontal and parietal lobes seem to store the most information about emotional states, while alpha, gamma, and beta waves appear to be most discriminative.

3.5. What Is Virtual Reality (VR)?

VR is an emerging technology capable of creating realistic environments and of reproducing and capturing real-life scenarios. With its accessibility and flexibility, the technology can be adapted to many industries. For instance, VR has been used as a platform to train fresh graduates in the soft skills needed for job interviews, better preparing them for real-life situations [76]. There are also applications that track mood based on emotional responses while viewing movies, building databases for movie recommendation systems [77]. It is also possible to improve social skills in children with autism spectrum disorder (ASD) using virtual reality [78]. To track the emotional responses of each person, it is now feasible to use a low-cost, wireless, wearable EEG headset to record brainwave signals and evaluate the person's mental state from the acquired signals.

VR means different things to different people. Some refer to the technology as a collection of devices: a head-mounted display (HMD), a glove input device, and audio [79]. The first idea of a virtual world was presented by Ivan Sutherland in 1965, who proposed to “make that (virtual) world in the window look real, sound real, feel real and respond realistically to the viewer’s actions” [80]. Afterward, the first VR hardware was realized: the first HMD with appropriate head tracking and a stereo view that was updated correctly according to the user’s head position and orientation [81].

In a study by Milgram and Kishimo [82], mixed reality was described as a convergence of interaction between the real world and the virtual world. The term mixed reality is often used interchangeably with augmented reality (AR), and the latter is the more common term today. AR is the incorporation of virtual computer graphics objects into a real three-dimensional scene, or alternatively the inclusion of real-world elements into a virtual environment [83]. The rise of personal mobile devices [84], especially around 2010, accelerated the growth of AR applications in areas such as tourism, medicine, industry, and education, and the technology has been met with largely positive responses [84–87].

VR technology itself opens up many new possibilities for innovation in areas such as healthcare [88], the military [89, 90], and education [91].

4. Examining Previous Studies

In the following section, the papers published between 2016 and 2019 are analyzed and categorized according to their findings in tables. Each of the findings is discussed by comparing the stimulus types presented, the duration of stimulus presentation, the classes of emotions used for assessment and their frequency of use, the types of wearable EEG headsets used for brainwave collection and their costs, the popularity of machine learning algorithms, intra- versus intersubject variability assessments, and the number of participants in the emotion classification experiments.

4.1. Examining the Stimulus Presented

The papers collected from 2016 to 2019 show that the common approaches for stimulating the user's emotional experience were music, music videos, pictures, video clips, and VR. Of the five stimuli, VR (31.03%) was the most commonly used for emotion classification, followed by music (24.14%), music videos and video clips (both at 20.69%), and pictures (3.45%), as can be observed in Table 3.

The datasets the researchers used for their stimulation content are ranked as follows: first is self-designed at 43.75%, second is DEAP at 18.75%, third are SEED, AVRS, and IAPS at 6.25%, and lastly IADS, DREAMER, MediaEval, Quran Verse, DECAF, and NAPS, all at 3.13%. The most prominent source of music stimuli is the DEAP dataset [121], which is highly regarded and commonly referred to because of its open access for researchers. While IADS [122] and MediaEval [123] are both open-source music databases with labeled emotions, researchers do not seem to have utilized them much, or may be unaware of their availability. As for video-related content, SEED [124–126], DREAMER [127], and ASCERTAIN [107] provide their video databases either openly or upon request. Researchers who designed their own stimulus databases used two different stimuli, music and video clips; of these, self-designed music stimuli account for 42.86% and self-designed video clips for 57.14%. Table 3 provides the information for accessing the databases available for public use.

One of the studies was excluded from the clip-length averaging because it reported only the total stimulus length (247.55 seconds) rather than the length per clip. The rest of the papers in Table 4 explicitly stated the length per clip, or a range of lengths (in which case the maximum was taken), and these were used to compute the average clip length presented to the participants. Across pictures, music, video clips, and virtual reality, the average length per clip was 107 seconds, with the shortest at 15 seconds (picture) and the longest at 820 seconds (video clip). This average should be interpreted with caution, since some of the lengthier videos were reported in only one paper and the repeatedly used DEAP clips are 60 seconds long.

For VR-focused stimuli, researchers designed their own stimulus databases to fit their VR environments, since the currently available datasets were designed for viewing on a monitor and VR-ready alternatives are lacking. The Affective Virtual Reality System (AVRS) is a new database designed by Zhang et al. [114] which combines IAPS [128], IADS, and the China Affective Video System (CAVS) to produce virtual environments suited to VR headsets for emotion classification. However, the dataset has only been evaluated with the Self-Assessment Manikin (SAM) to assess how effectively the AVRS delivers emotion, and it is still not available for public access. The Nencki Affective Picture System (NAPS) developed by Marchewka et al. [129] uses high-quality, realistic picture databases to induce emotional states.

4.2. Emotion Classes Used for Classification

30 papers studying emotion classification were identified, and 29 of them are tabulated in Table 4 with the stimuli presented, the types of emotions assessed, the length of the stimuli, and the dataset used for stimulus presentation. Only 18 studies reported the emotional tags used for classification, while the remaining 11 papers used the two-dimensional emotional space; one paper did not report the emotion classes used but was based on the DEAP dataset, and it was therefore excluded from Table 4. Among the 18 investigations that reported their emotional tags, an average of 4.3 emotion classes were used, ranging from one to nine classes. In total, 73 emotional tags were used across these classes, the most common being happy (16.44%), sad (13.70%), and fear (12.33%), all of which appear in Ekman's [34] six basic emotions, whereas the other three basic emotions, angry (5.48%), surprise (1.37%), and disgust (5.48%), were not among the more commonly used tags. The remaining emotion classes (afraid, amusement, anger, anguish, boredom, calm, contentment, depression, distress, empathy, engagement, enjoyment, exciting, exuberance, frightened, frustration, horror, nervous, peaceful, pleasant, pleased, rage, relaxation, tenderness, workload, among others) were each used only between 1.37% and 5.48% of the time; these counts do not include the valence, arousal, dominance, and liking dimensions.

Emotional assessments using nonspecific classes such as valence, arousal, dominance, liking, positive, negative, and neutral were used 28 times in total. In the two-dimensional space, valence, which measures whether an emotion is positive or negative, accounted for about 32.14% of usage across these experiments, and arousal, which measures the user's level of engagement (passive or active), also accounted for 32.14%. The three-dimensional space, which adds dominance, accounted for only 7.14%; this may be due to the higher complexity of the emotional state, which requires participants to have a good understanding of their own mental state. The remaining nonspecific tags, such as positive, negative, neutral, and liking, each ranged between 3.57% and 10.71%.
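For readers unfamiliar with the two-dimensional model, the sketch below shows one common (assumed) way of turning SAM-style valence and arousal ratings on a 1–9 scale into the four quadrant labels that many of these papers classify; the midpoint of 5 is an assumption, and some studies use other thresholds.

def quadrant(valence: float, arousal: float, midpoint: float = 5.0) -> str:
    # Map a (valence, arousal) self-report to one of four quadrants.
    v = "high-valence" if valence >= midpoint else "low-valence"
    a = "high-arousal" if arousal >= midpoint else "low-arousal"
    return f"{v}/{a}"

# quadrant(7, 8) -> 'high-valence/high-arousal' (roughly excited or happy)
# quadrant(2, 3) -> 'low-valence/low-arousal'   (roughly sad or bored)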

Finally, four types of stimuli were used to evoke emotions in the test participants: music, music videos, video clips, and virtual reality, with one report combining music and pictures. Music stimuli ranged from everyday sounds such as rain, writing, laughter, or barking, as taken from the IAPS stimulus database, to musical excerpts collected from online repositories to induce emotions. Music videos combine rhythmic songs with dance videos. Video clips, such as Hollywood movie segments (DECAF) or Chinese films (SEED), were collected and edited according to the emotion they were intended to elicit. Virtual reality exploits the immersion of a virtual environment in which users can freely look around; some VR environments were built from horror films, while others kept the user at a static viewpoint while the surroundings changed colour and pattern to arouse the user's emotions. In terms of usage, virtual reality stimuli accounted for 31.03%, music for 24.14%, music videos and video clips for 20.69% each, and the single combination of music and pictures for 3.45%.

4.3. Common EEG Headset Used for Recordings

The usage of wearable EEG headsets is tabulated in Table 5. Six EEG recording devices were used in the reviewed studies: NeuroSky, Emotiv EPOC+, B-Alert X10, Ag electrodes, actiChamp, and Muse. Ranked by usage, the devices are BioSemi ActiveTwo (40.00%), Emotiv EPOC+ and NeuroSky MindWave (13.33% each), and actiChamp, Ag/AgCl sintered ring electrodes, AgCl electrode caps, B-Alert X10, and Muse (6.67% each). Among these devices, only the Ag electrodes require the researcher to place each electrode manually on the subject's scalp; the remaining devices are headsets with preset electrode positions that can simply be placed over the subject's head. To obtain better readings, the Emotiv EPOC+ and the Ag electrodes are supplied with an adhesive gel to improve signal acquisition quality, and the Muse, with its dry-electrode technology, only requires the skin to be wiped with a wet cloth, whereas the other devices (B-Alert X10, actiChamp, and NeuroSky) do not state whether any conductive medium is needed. All of these devices can record the delta, theta, alpha, beta, and gamma frequency bands, which means that the specific functions of these bands can be analyzed in depth for emotion classification, particularly over the frontal and temporal regions that process emotional experiences. With regard to brain regions, the Emotiv EPOC+ electrodes cover the frontal, temporal, parietal, and occipital regions; the B-Alert X10 and actiChamp place electrodes over the frontal and parietal regions; the Muse places electrodes over the frontal and temporal regions; and the NeuroSky places its electrode only on the frontal region. Ag electrodes have no limit on the number of electrodes, as this depends only on the researcher and the EEG recording device.

Based on Table 5, of the 15 research papers that disclosed the headset used, only 11 reported the EEG frequency bands collected: 9 papers collected all five bands (delta, theta, alpha, beta, and gamma), 2 did not collect the delta band, and 1 did not collect the delta, theta, and gamma bands. This suggests that in emotion classification studies, both the lower-frequency bands (delta and theta) and the higher-frequency bands (alpha, beta, and gamma) are considered important and are the preferred brainwave features among researchers.

4.4. Popular Algorithms Used for Emotion Classification

Recent developments in human-computer interaction (HCI) that allow computers to recognize the emotional state of the user provide a more integrated interaction between humans and computers. This platform propels the technology forward and creates vast opportunities for applications in fields such as education, healthcare, and the military [131]. Human emotions can be recognized through various means such as gestures, facial expressions, physiological signals, and neuroimaging.

According to previous research, over the last decade of work on emotion recognition using physiological signals, numerous classifiers have been deployed to distinguish the different emotional states [132]. Classifiers such as K-nearest neighbors (KNN) [133, 134], regression trees, Bayesian networks, support vector machines (SVM) [133, 135], canonical correlation analysis (CCA) [136], artificial neural networks (ANN) [137], linear discriminant analysis (LDA) [138], and Marquardt backpropagation (MBP) [139] have been used to classify the different emotions. However, the use of these different classifiers makes it difficult to port systems across training and testing datasets, which yield different learned features depending on how the emotional stimulation is presented to the user.
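As a deliberately simplified illustration of how two of these classifiers are applied to EEG features, the sketch below trains an SVM and a KNN on synthetic band-power vectors with scikit-learn; the feature dimensions, labels, and hyperparameters are invented for illustration and do not reproduce any of the reviewed pipelines.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 70))          # 200 trials x (14 channels * 5 bands)
y = rng.integers(0, 2, size=200)        # binary labels, e.g. high/low valence

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    pipe = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    scores = cross_val_score(pipe, X, y, cv=5)    # 5-fold cross-validation
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")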

Looking at developments in emotion classification between 2016 and 2019, many of the techniques described earlier were applied, together with some additional augmentation techniques. Table 6 shows the classifiers used and the performance achieved, with each classifier ranked by popularity: SVM (31.48%), KNN (11.11%), NB (7.41%), MLP, RF, and CNN (5.56% each), Fisherface (3.70%), and BP, Bayes, DGCNN, ELM, FKNN, GP, GBDT, Haar, IB, LDA, LFSM, neural network, neuro-fuzzy network, WPDAI-ICA, and HC (1.85% each), while one study used the Biotrace+ software (1.85%) to evaluate classification performance, and it was unclear which algorithm was actually applied to obtain the reported performance.

As can be seen, SVM and KNN were among the more popular methods for emotion classification, with the highest achieved performances of 97.33% (SVM) and 98.37% (KNN). However, other algorithms also performed very well, and the classifiers that crossed the 90% mark include CNN (97.69%), DGCNN (90.40%), Fisherface (91.00%), LFSM (92.23%), and RF (98.20%). This suggests that other classification techniques may also achieve good performance or improve classification results. These figures reflect only the best-performing cases rather than a general consensus: some of the algorithms worked on the generalized arousal and/or valence dimensions while others used very specific emotional tags, so it is difficult to directly compare classification performance across all the different classifiers.

4.5. Inter- and Intrasubject Classification in the Study of Emotion Classification

Intersubject variability refers to differences in brain anatomy and functionality across individuals, whereas intrasubject variability refers to differences in brain anatomy and functionality within an individual. Accordingly, intrasubject classification trains and tests the classifier on data from the same individual, whereas intersubject classification trains and tests on data drawn from many different individuals, so that testing can be done without retraining the classifier for the individual being tested. The latter is clearly the more challenging task, because the classifier is trained and tested on different individuals' EEG data. In recent years there has been an increasing number of studies that focus on appreciating rather than ignoring this variability. Through the lens of variability, researchers can gain insight into individual differences and cross-session variations, facilitating precision functional brain mapping and decoding based on individual variability and similarity. Applications such as neurophysiological biometrics rely on intersubject and intrasubject variability, raising the questions of how this variability can be observed, analyzed, and modeled, what researchers can learn from observing it, and how to deal with it in neuroimaging. Of the 30 papers identified, 28 indicated whether they conducted intrasubject classification, intersubject classification, or both.

The nonstationary EEG correlates of emotional responses differ between individuals; this intersubject variability is affected by intrinsic differences in personality, culture, gender, educational background, and living environment, and individuals may have distinct behavioral and/or neurophysiological responses even when perceiving the same event. Thus, individuals are unlikely to share common EEG distributions that correlate with the same emotional states. Researchers have highlighted the significant challenges posed by intersubject classification in affective computing [140, 142–147]. Lin notes that for subject-independent (intersubject) classification to work well, the class distributions of different individuals have to be similar to some extent; however, individuals in real life may have different behavioral or physiological responses to the same stimuli. Subject-independent (intersubject) classification was argued and shown to be the preferable emotion classification approach by Rinderknecht et al. [148]. Nonetheless, the difficulty is to develop and fit a generalized classifier that works well for all individuals, which currently remains a grand challenge in this research domain.
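The practical difference between the two evaluation schemes can be sketched as follows (synthetic data; the subject counts, features, and classifier are assumptions): intersubject (subject-independent) evaluation holds out whole subjects, so the classifier is tested on people it never saw, whereas intrasubject (subject-dependent) evaluation splits the trials of a single subject.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 10, 40, 70
X = rng.normal(size=(n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 2, size=len(X))
subjects = np.repeat(np.arange(n_subjects), trials_per_subject)

model = make_pipeline(StandardScaler(), SVC())

# Intersubject: leave-one-subject-out cross-validation.
inter = cross_val_score(model, X, y, groups=subjects, cv=LeaveOneGroupOut())

# Intrasubject: 5-fold cross-validation within a single subject's trials.
mask = subjects == 0
intra = cross_val_score(model, X[mask], y[mask], cv=StratifiedKFold(5))

print(f"intersubject: {inter.mean():.2f}, intrasubject: {intra.mean():.2f}")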

From Table 6, it can be observed that not all researchers indicated how their classification was set up with respect to subjects. Typically, descriptions such as subject-independent or across-subjects refer to intersubject classification, while subject-dependent or within-subjects refer to intrasubject classification. These descriptors are used interchangeably by researchers, as there are no specific guidelines on how they should be used when describing emotion classification experiments; the table therefore summarizes the papers according to these descriptors in a more objective manner. Of the 30 papers identified, only 18 (5 intrasubject and 13 intersubject) explicitly stated their type of subject classification. Of these, the best performance for intrasubject classification was achieved with RF (98.20%) by Kumaran et al. [93] on music stimuli, while the best for intersubject classification was achieved with DGCNN (90.40%) by Song et al. [110] using video stimuli from the SEED and DREAMER datasets. For VR stimuli, only Hidaka et al. [116] reported results, using SVM (81.33%), but with only five subjects, which is very low given that a minimum of around 30 subjects is expected for the results to be justifiable, as noted by Alarcao and Fonseca [22].

4.6. Participants

Of the 30 papers identified, only 26 reported the number of participants used for the emotion classification analysis, as summarized in Table 7, which is ordered from the highest to the lowest total number of participants. The number of participants ranges from 5 to 100; 23 reports stated the gender distribution, with more males (408) than females (342) overall, while another 3 reports stated only the number of participants without the gender distribution. 7.70% of the studies used fewer than 10 participants, 46.15% used between 10 and 30 participants, and 46.15% used more than 30 participants.

16 reports stated mean ages ranging between 15.29 and 30 years, with the youngest group (mean age 15.29) belonging to a study on ASD (autism spectrum disorder). Another 4 reports gave only an age range of 18 to 28 years [106, 120, 141, 150], 2 other studies reported only that their volunteers were university students [98, 115], and 1 report stated that 2 additional institutions volunteered in addition to their own university students [118].

The 2 studies with fewer than 10 participants [92, 119] justified their sample sizes: Horvat expressed interest in investigating the stability of affective EEG features by running multiple sessions on single subjects, rather than running a large number of subjects with a single EEG recording session each, as in DEAP. Lan conducted a pilot study combining VR based on the NAPS database with the Emotiv EPOC+ headset to investigate the effectiveness of both devices, and found that some ergonomic compromises on both devices were necessary to achieve a better immersion experience.

The participants who volunteered for these emotion classification experiments were all reported to be free of physical abnormalities or mental disorders, and thus fit and healthy for the experiments, apart from one study that was granted permission to work with ASD subjects [117]. Other reports evaluated the participants' understanding of emotion labels before any experiment, since most participants would need to rate their emotions with the Self-Assessment Manikin (SAM) after each trial. The studies also reported that the participants had sufficient educational background to justify their reported emotions when questioned on their current mental state. Many of the studies were conducted on university grounds, with permission, since the research was carried out by university-based academics; the participant population was therefore drawn mostly from university students.

Many of these studies focused only on feature extraction from the EEG recordings or on SAM evaluations of valence, arousal, and dominance, and presented their classification results at the end. Based on the current findings, no studies were found that specifically examined the differences between male and female emotional responses or classifications. To obtain reliable classification results, such studies should be conducted with at least 10 participants so that the results are statistically meaningful.

5. Discussion

One issue that emerged from this review is the lack of studies on virtual reality-based emotion classification, even though the immersive experience of virtual reality could evoke stronger emotional responses than traditional stimuli presented through computer monitors or speakers, since it combines sight, hearing, and the sense of “being there.” There is currently no openly available database of VR-based emotional stimuli that has been validated for eliciting emotional responses in virtual reality, so much of the research has had to design its own emotional stimuli. Furthermore, the duration of the stimuli presented to participants is inconsistent across studies, which matters especially in virtual reality, where emotion fluctuates greatly depending on the duration and content of the stimulus. Therefore, to keep emotional fluctuations to a minimum while still eliciting the intended emotional response, the length of the stimulus presented should be kept between 15 and 20 seconds. This duration gives participants enough time to explore the virtual environment and become sufficiently engaged for emotional responses to be elicited by the presented stimuli.

In recent developments in virtual reality, many products are available on the market for entertainment purposes, with the majority intended for gaming, such as the Oculus Rift, HTC Vive, PlayStation VR, and many other upcoming products. However, these products can be costly and come with heavy requirements, such as a workstation capable of rendering virtual reality environments or a console-specific device. Current smartphones have built-in inertial sensors such as gyroscopes and accelerometers to measure orientation and movement, and these small, compact devices have enough computational power to run virtual reality content when paired with a VR headset and a set of earphones. Virtual reality environments can be built with software development kits (SDKs) such as Unity3D, which can export to multiple platforms, making deployment across many devices straightforward.

With regard to versatility, various machine learning algorithms are currently available for different applications, and thanks to advances in computing and efficient algorithmic procedures these algorithms can perform complex calculations with minimal time wasted [151]. However, there is no evidence of a single algorithm that consistently outperforms the rest, which makes algorithm selection difficult when preparing an emotion classification task. Furthermore, a trained machine learning model is needed that can be used for commercial deployment or as a benchmark for future emotion classification. Therefore, intersubject classification (referred to as subject-independent, across-subjects, or leave-one-out classification in some studies) is the approach that should be followed, as it generalizes the emotion classification task over the overall population and has high practical value because the classification model does not need to be retrained for every new user.

The collection of brainwave signals varies depending on the quality and sensitivity of the electrodes, as well as on the number of electrodes and their placement on the scalp, which should conform to the international 10–20 EEG standard. A standardized measuring tool for EEG collection is needed, since the large variety of wearable EEG headsets produces varying results depending on how the user handles them. It is suggested that the collection of brainwave signals be standardized on a low-cost wearable EEG headset, since such devices are easily accessible to the research community. While previous studies have reported that emotional experiences are stored within the temporal region of the brain, current evidence suggests that emotional responses may also be influenced by other regions such as the frontal and parietal regions. Furthermore, combining brainwave bands from both the lower and higher frequencies can improve emotion classification accuracy. Additionally, the selection of electrodes as learning features should also be considered, since EEG devices differ in the number and placement of their electrodes; the number and choice of electrode positions should therefore be explored systematically to verify how they affect the emotion classification task.
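A minimal sketch of such a systematic exploration is given below, using scikit-learn's univariate feature scoring to rank channel-band features; the channel names, band names, and data are assumptions for illustration rather than a prescription.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

channels = ["AF3", "F3", "F4", "AF4", "T7", "T8", "P7", "P8"]   # assumed subset
bands = ["theta", "alpha", "beta", "gamma"]
feature_names = [f"{ch}-{b}" for ch in channels for b in bands]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(feature_names)))   # trials x channel-band features
y = rng.integers(0, 2, size=200)                 # emotion labels

selector = SelectKBest(score_func=f_classif, k=10).fit(X, y)
top = [feature_names[i] for i in np.argsort(selector.scores_)[::-1][:10]]
print("Top-ranked channel-band features:", top)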

6. Conclusions

In this review, we have presented an analysis of emotion classification studies from 2016 to 2019 that propose novel methods for emotion recognition using EEG signals. The review also suggests a different approach towards emotion classification, using VR as the emotional stimulus presentation platform, and highlights the need for a new database of VR-based stimuli. We hope that this paper provides a useful critical review of current research in EEG-based emotion classification and that the research opportunities identified here will serve as a platform for new researchers venturing into this area.

Data Availability

No data are made available for this work.

Conflicts of Interest

The authors declare that they have no competing interests.


Acknowledgments

This work was supported by a grant from the Ministry of Science, Technology, Innovation (MOSTI), Malaysia (ref. ICF0001-2018).


References

  1. A. Mert and A. Akan, “Emotion recognition from EEG signals by using multivariate empirical mode decomposition,” Pattern Analysis and Applications, vol. 21, no. 1, pp. 81–89, 2018. View at: Publisher Site | Google Scholar
  2. M. M. Bradley and P. J. Lang, “Measuring emotion: the self-assessment manikin and the semantic differential,” Journal of Behavior Therapy and Experimental Psychiatry, vol. 25, no. 1, pp. 49–59, 1994. View at: Publisher Site | Google Scholar
  3. J. Morris, “Observations: SAM: the Self-Assessment Manikin an efficient cross-cultural measurement of emotional response,” Journal of Advertising Research, vol. 35, no. 6, pp. 63–68, 1995. View at: Google Scholar
  4. E. C. S. Hayashi, J. E. G. Posada, V. R. M. L. Maike, and M. C. C. Baranauskas, “Exploring new formats of the Self-Assessment Manikin in the design with children,” in Proceedings of the 15th Brazilian Symposium on Human Factors in Computer Systems-IHC’16, São Paulo, Brazil, October 2016. View at: Publisher Site | Google Scholar
  5. A. J. Casson, “Wearable EEG and beyond,” Biomedical Engineering Letters, vol. 9, no. 1, pp. 53–71, 2019. View at: Publisher Site | Google Scholar
  6. Y.-H. Chen, M. de Beeck, L. Vanderheyden et al., “Soft, comfortable polymer dry electrodes for high quality ECG and EEG recording,” Sensors, vol. 14, no. 12, pp. 23758–23780, 2014. View at: Publisher Site | Google Scholar
  7. G. Boon, P. Aricò, G. Borghini, N. Sciaraffa, A. Di Florio, and F. Babiloni, “The dry revolution: evaluation of three different EEG dry electrode types in terms of signal spectral features, mental states classification and usability,” Sensors (Switzerland), vol. 19, no. 6, pp. 1–21, 2019. View at: Publisher Site | Google Scholar
  8. S. Jeon, J. Chien, C. Song, and J. Hong, “A preliminary study on precision image guidance for electrode placement in an EEG study,” Brain Topography, vol. 31, no. 2, pp. 174–185, 2018. View at: Publisher Site | Google Scholar
  9. Y. Kakisaka, R. Alkawadri, Z. I. Wang et al., “Sensitivity of scalp 10–20 EEG and magnetoencephalography,” Epileptic Disorders, vol. 15, no. 1, pp. 27–31, 2013. View at: Publisher Site | Google Scholar
  10. M. Burgess, A. Kumar, and V. M. J, “Analysis of EEG using 10:20 electrode system,” International Journal of Innovative Research in Science, Engineering and Technology, vol. 1, no. 2, pp. 2319–8753, 2012. View at: Google Scholar
  11. A. D. Bigirimana, N. Siddique, and D. Coyle, “A hybrid ICA-wavelet transform for automated artefact removal in EEG-based emotion recognition,” in IEEE International Conference on Systems, Man, and Cybernetics, SMC 2016-Conference Proceedings, pp. 4429–4434, Budapest, Hungary, October 2016. View at: Publisher Site | Google Scholar
  12. R. Bogacz, U. Markowska-Kaczmar, and A. Kozik, “Blinking artefact recognition in EEG signal using artificial neural network,” in Proceedings of the 4th Conference on Neural, Zakopane, Poland, June 1999. View at: Google Scholar
  13. S. O’Regan, S. Faul, and W. Marnane, “Automatic detection of EEG artefacts arising from head movements using EEG and gyroscope signals,” Medical Engineering and Physics, vol. 35, no. 7, pp. 867–874, 2013. View at: Publisher Site | Google Scholar
  14. R. Romo-Vazquez, R. Ranta, V. Louis-Dorr, and D. Maquin, “EEG ocular artefacts and noise removal,” in Annual International Conference of the IEEE Engineering in Medicine and Biology-Proceedings, pp. 5445–5448, Lyon, France, August 2007. View at: Publisher Site | Google Scholar
  15. M. K. Islam, A. Rastegarnia, and Z. Yang, “Methods for artifact detection and removal from scalp EEG: a review,” Neurophysiologie Clinique/Clinical Neurophysiology, vol. 46, no. 4-5, pp. 287–305, 2016. View at: Publisher Site | Google Scholar
  16. A. S. Janani, T. S. Grummett, T. W. Lewis et al., “Improved artefact removal from EEG using Canonical Correlation Analysis and spectral slope,” Journal of Neuroscience Methods, vol. 298, pp. 1–15, 2018. View at: Publisher Site | Google Scholar
  17. X. Pope, G. B. Bian, and Z. Tian, “Removal of artifacts from EEG signals: a review,” Sensors (Switzerland), vol. 19, no. 5, pp. 1–18, 2019. View at: Publisher Site | Google Scholar
  18. S. Suja Priyadharsini, S. Edward Rajan, and S. Femilin Sheniha, “A novel approach for the elimination of artefacts from EEG signals employing an improved Artificial Immune System algorithm,” Journal of Experimental & Theoretical Artificial Intelligence, vol. 28, no. 1-2, pp. 239–259, 2016. View at: Publisher Site | Google Scholar
  19. A. Szentkirályi, K. K. H. Wong, R. R. Grunstein, A. L. D'Rozario, and J. W. Kim, “Performance of an automated algorithm to process artefacts for quantitative EEG analysis during a simultaneous driving simulator performance task,” International Journal of Psychophysiology, vol. 121, no. August, pp. 12–17, 2017. View at: Publisher Site | Google Scholar
  20. A. Tandle, N. Jog, P. D'cunha, and M. Chheta, “Classification of artefacts in EEG signal recordings and EOG artefact removal using EOG subtraction,” Communications on Applied Electronics, vol. 4, no. 1, pp. 12–19, 2016. View at: Publisher Site | Google Scholar
  21. M. Murugappan and S. Murugappan, “Human emotion recognition through short time Electroencephalogram (EEG) signals using Fast Fourier Transform (FFT),” in Proceedings-2013 IEEE 9th International Colloquium on Signal Processing and its Applications, CSPA 2013, pp. 289–294, Kuala Lumpur, Malaysia, March 2013. View at: Publisher Site | Google Scholar
  22. S. M. Alarcao and M. J. Fonseca, “Emotions recognition using EEG signals: a survey,” IEEE Transactions on Affective Computing, vol. 10, pp. 1–20, 2019. View at: Publisher Site | Google Scholar
  23. J. Panksepp, Affective Neuroscience: The Foundations of Human and Animal Emotions, Oxford University Press, Oxford, UK, 2004.
  24. A. E. Penner and J. Stoddard, “Clinical affective neuroscience,” Journal of the American Academy of Child & Adolescent Psychiatry, vol. 57, no. 12, p. 906, 2018. View at: Publisher Site | Google Scholar
  25. L. Pessoa, “Understanding emotion with brain networks,” Current Opinion in Behavioral Sciences, vol. 19, pp. 19–25, 2018. View at: Publisher Site | Google Scholar
  26. P. Ekman and W. V. Friesen, “Constants across cultures in the face and emotion,” Journal of Personality and Social Psychology, vol. 17, no. 2, p. 124, 1971. View at: Publisher Site | Google Scholar
  27. B. De Gelder, “Why bodies? Twelve reasons for including bodily expressions in affective neuroscience,” Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 364, no. 1535, pp. 3475–3484, 2009. View at: Publisher Site | Google Scholar
  28. F. M. Plaza-del-Arco, M. T. Martín-Valdivia, L. A. Ureña-López, and R. Mitkov, “Improved emotion recognition in Spanish social media through incorporation of lexical knowledge,” Future Generation Computer Systems, vol. 110, 2020. View at: Publisher Site | Google Scholar
  29. J. Kumar and J. A. Kumar, “Machine learning approach to classify emotions using GSR,” Advanced Research in Electrical and Electronic Engineering, vol. 2, no. 12, pp. 72–76, 2015. View at: Google Scholar
  30. M. Ali, A. H. Mosa, F. Al Machot, and K. Kyamakya, “Emotion recognition involving physiological and speech signals: a comprehensive review,” in Recent Advances in Nonlinear Dynamics and Synchronization, pp. 287–302, Springer, Berlin, Germany, 2018. View at: Google Scholar
  31. D. H. Hockenbury and S. E. Hockenbury, Discovering Psychology, Macmillan, New York, NY, USA, 2010.
  32. I. B. Mauss and M. D. Robinson, “Measures of emotion: a review,” Cognition & Emotion, vol. 23, no. 2, pp. 209–237, 2009. View at: Publisher Site | Google Scholar
  33. E. Fox, Emotion Science Cognitive and Neuroscientific Approaches to Understanding Human Emotions, Macmillan, New York, NY, USA, 2008.
  34. P. Ekman, “Are there basic emotions?” Psychological Review, vol. 99, no. 3, pp. 550–553, 1992. View at: Publisher Site | Google Scholar
  35. R. Plutchik, “The nature of emotions,” American Scientist, vol. 89, no. 4, pp. 344–350, 2001. View at: Publisher Site | Google Scholar
  36. C. E. Izard, “Basic emotions, natural kinds, emotion schemas, and a new paradigm,” Perspectives on Psychological Science, vol. 2, no. 3, pp. 260–280, 2007. View at: Publisher Site | Google Scholar
  37. C. E. Izard, “Emotion theory and research: highlights, unanswered questions, and emerging issues,” Annual Review of Psychology, vol. 60, no. 1, pp. 1–25, 2009. View at: Publisher Site | Google Scholar
  38. P. J. Lang, “The emotion probe: studies of motivation and attention,” American Psychologist, vol. 50, no. 5, p. 372, 1995. View at: Publisher Site | Google Scholar
  39. A. Mehrabian, “Comparison of the PAD and PANAS as models for describing emotions and for differentiating anxiety from depression,” Journal of Psychopathology and Behavioral Assessment, vol. 19, no. 4, pp. 331–357, 1997. View at: Publisher Site | Google Scholar
  40. E. Osuna, L. Rodríguez, J. O. Gutierrez-garcia, A. Luis, E. Osuna, and L. Rodr, “Development of computational models of emotions: a software engineering perspective,” Cognitive Systems Research, vol. 60, 2020. View at: Publisher Site | Google Scholar
  41. A. Hassouneh, A. M. Mutawa, and M. Murugappan, “Development of a real-time emotion recognition system using facial expressions and EEG based on machine learning and deep neural network methods,” Informatics in Medicine Unlocked, vol. 20, p. 100372, 2020. View at: Publisher Site | Google Scholar
  42. F. Balducci, C. Grana, and R. Cucchiara, “Affective level design for a role-playing videogame evaluated by a brain-computer interface and machine learning methods,” The Visual Computer, vol. 33, no. 4, pp. 413–427, 2017. View at: Publisher Site | Google Scholar
  43. Z. Su, X. Xu, D. Jiawei, and W. Lu, “Intelligent wheelchair control system based on BCI and the image display of EEG,” in Proceedings of 2016 IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference, IMCEC 2016, pp. 1350–1354, Xi’an, China, October 2016. View at: Publisher Site | Google Scholar
  44. A. Campbell, T. Choudhury, S. Hu et al., “NeuroPhone: brain-mobile phone interface using a wireless EEG headset,” in Proceedings of the 2nd ACM SIGCOMM Workshop on Networking, Systems, and Applications on Mobile Handhelds, MobiHeld ’10, Co-located with SIGCOMM 2010, New Delhi, India, January 2010. View at: Publisher Site | Google Scholar
  45. D. Bright, A. Nair, D. Salvekar, and S. Bhisikar, “EEG-based brain controlled prosthetic arm,” in Proceedings of the Conference on Advances in Signal Processing, CASP 2016, pp. 479–483, Pune, India, June 2016. View at: Publisher Site | Google Scholar
  46. C. Demirel, H. Kandemir, and H. Kose, “Controlling a robot with extraocular muscles using EEG device,” in Proceedings of the 26th IEEE Signal Processing and Communications Applications Conference, SIU 2018, Izmir, Turkey, May 2018. View at: Publisher Site | Google Scholar
  47. Y. Liu, Y. Ding, C. Li et al., “Multi-channel EEG-based emotion recognition via a multi-level features guided capsule network,” Computers in Biology and Medicine, vol. 123, p. 103927, 2020. View at: Publisher Site | Google Scholar
  48. G. L. Ahern and G. E. Schwartz, “Differential lateralization for positive and negative emotion in the human brain: EEG spectral analysis,” Neuropsychologia, vol. 23, no. 6, pp. 745–755, 1985. View at: Publisher Site | Google Scholar
  49. H. Gunes and M. Piccardi, “Bi-modal emotion recognition from expressive face and body gestures,” Journal of Network and Computer Applications, vol. 30, no. 4, pp. 1334–1345, 2007. View at: Publisher Site | Google Scholar
  50. R. Jenke, A. Peer, M. Buss et al., “Feature extraction and selection for emotion recognition from EEG,” IEEE Transactions on Affective Computing, vol. 5, no. 3, pp. 327–339, 2014. View at: Publisher Site | Google Scholar
  51. J. U. Blackford and D. S. Pine, “Neural substrates of childhood anxiety disorders,” Child and Adolescent Psychiatric Clinics of North America, vol. 21, no. 3, pp. 501–525, 2012. View at: Publisher Site | Google Scholar
  52. K. A. Goosens and S. Maren, “Long-term potentiation as a substrate for memory: evidence from studies of amygdaloid plasticity and pavlovian fear conditioning,” Hippocampus, vol. 12, no. 5, pp. 592–599, 2002. View at: Publisher Site | Google Scholar
  53. M. R. Turner, S. Maren, K. L. Phan, and I. Liberzon, “The contextual brain: implications for fear conditioning, extinction and psychopathology,” Nature Reviews Neuroscience, vol. 14, no. 6, pp. 417–428, 2013. View at: Google Scholar
  54. U. Herwig, P. Satrapi, and C. Schönfeldt-Lecuona, “Using the international 10–20 EEG system for positioning of transcranial magnetic stimulation,” Brain Topography, vol. 16, no. 2, pp. 95–99, 2003. View at: Publisher Site | Google Scholar
  55. R. W. Homan, J. Herman, and P. Purdy, “Cerebral location of international 10–20 system electrode placement,” Electroencephalography and Clinical Neurophysiology, vol. 66, no. 4, pp. 376–382, 1987. View at: Publisher Site | Google Scholar
  56. G. M. Rojas, C. Alvarez, C. E. Montoya, M. de la Iglesia-Vayá, J. E. Cisternas, and M. Gálvez, “Study of resting-state functional connectivity networks using EEG electrodes position as seed,” Frontiers in Neuroscience, vol. 12, no. APR, pp. 1–12, 2018. View at: Publisher Site | Google Scholar
  57. J. A. Blanco, A. C. Vanleer, T. K. Calibo, and S. L. Firebaugh, “Single-trial cognitive stress classification using portable wireless electroencephalography,” Sensors (Switzerland), vol. 19, no. 3, pp. 1–16, 2019. View at: Publisher Site | Google Scholar
  58. M. Abujelala, A. Sharma, C. Abellanoza, and F. Makedon, “Brain-EE: brain enjoyment evaluation using commercial EEG headband,” in Proceedings of the ACM International Conference Proceeding Series, New York, NY, USA, September 2016. View at: Publisher Site | Google Scholar
  59. L. H. Chew, J. Teo, and J. Mountstephens, “Aesthetic preference recognition of 3D shapes using EEG,” Cognitive Neurodynamics, vol. 10, no. 2, pp. 165–173. View at: Publisher Site | Google Scholar
  60. G. Mountstephens and T. Yamada, “Pediatric clinical neurophysiology,” Atlas of Artifacts in Clinical Neurophysiology, vol. 41, 2018. View at: Google Scholar
  61. C. Miller, “Review of handbook of EEG interpretation,” The Neurodiagnostic Journal, vol. 55, no. 2, p. 136, 2015. View at: Google Scholar
  62. I. Obeid and J. Picone, “The temple university hospital EEG data corpus,” Frontiers in Neuroscience, vol. 10, no. MAY, 2016. View at: Publisher Site | Google Scholar
  63. A. Aldridge, E. Barnes, C. L. Bethel et al., “Accessible electroencephalograms (EEGs): A comparative review with openbci’s ultracortex mark IV headset,” in Proceedings of the 2019 29th International Conference Radioelektronika, pp. 1–6, Pardubice, Czech Republic, April 2019. View at: Publisher Site | Google Scholar
  64. P. Bialas and P. Milanowski, “A high frequency steady-state visually evoked potential based brain computer interface using consumer-grade EEG headset,” in Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2014, pp. 5442–5445, Chicago, IL, USA, August 2014. View at: Publisher Site | Google Scholar
  65. Y. Wang, Z. Wang, W. Clifford, C. Markham, T. E. Ward, and C. Deegan, “Validation of low-cost wireless EEG system for measuring event-related potentials,” in Proceedings of the 29th Irish Signals and Systems Conference, ISSC 2018, pp. 1–6, Belfast, UK, June 2018. View at: Publisher Site | Google Scholar
  66. S. Sridhar, U. Ramachandraiah, E. Sathish, G. Muthukumaran, and P. R. Prasad, “Identification of eye blink artifacts using wireless EEG headset for brain computer interface system,” in Proceedings of IEEE Sensors, Montreal, UK, October 2018. View at: Publisher Site | Google Scholar
  67. M. Ahmad and M. Aqil, “Implementation of nonlinear classifiers for adaptive autoregressive EEG features classification,” in Proceedings-2015 Symposium on Recent Advances in Electrical Engineering, RAEE 2015, Islamabad, Pakistan, October 2015. View at: Publisher Site | Google Scholar
  68. A. Mheich, J. Guilloton, and N. Houmani, “Monitoring visual sustained attention with a low-cost EEG headset,” in Proceedings of the International Conference on Advances in Biomedical Engineering, Beirut, Lebanon, October 2017. View at: Publisher Site | Google Scholar
  69. K. Tomonaga, S. Wakamizu, and J. Kobayashi, “Experiments on classification of electroencephalography (EEG) signals in imagination of direction using a wireless portable EEG headset,” in Proceedings of the ICCAS 2015-2015 15th International Conference On Control, Automation And Systems, Busan, South Korea, October 2015. View at: Publisher Site | Google Scholar
  70. S. Wakamizu, K. Tomonaga, and J. Kobayashi, “Experiments on neural networks with different configurations for electroencephalography (EEG) signal pattern classifications in imagination of direction,” in Proceedings-5th IEEE International Conference on Control System, Computing and Engineering, ICCSCE 2015, pp. 453–457, George Town, Malaysia, November 2015. View at: Publisher Site | Google Scholar
  71. R. Sarno, M. N. Munawar, and B. T. Nugraha, “Real-time electroencephalography-based emotion recognition system,” International Review on Computers and Software (IRECOS), vol. 11, no. 5, pp. 456–465, 2016. View at: Publisher Site | Google Scholar
  72. N. Thammasan, K. Moriyama, K.-i. Fukui, and M. Numao, “Familiarity effects in EEG-based emotion recognition,” Brain Informatics, vol. 4, no. 1, pp. 39–50, 2017. View at: Publisher Site | Google Scholar
  73. N. Zhuang, Y. Zeng, L. Tong, C. Zhang, H. Zhang, and B. Yan, “Emotion recognition from EEG signals using multidimensional information in EMD domain,” BioMed Research International, vol. 2017, Article ID 8317357, 9 pages, 2017. View at: Publisher Site | Google Scholar
  74. T. M. C. Lee, H.-L. Liu, C. C. H. Chan, S.-Y. Fang, and J.-H. Gao, “Neural activities associated with emotion recognition observed in men and women,” Molecular Psychiatry, vol. 10, no. 5, p. 450, 2005. View at: Publisher Site | Google Scholar
  75. J.-Y. Zhu, W.-L. Zheng, and B.-L. Lu, “Cross-subject and cross-gender emotion classification from EEG,” in World Congress on Medical Physics and Biomedical Engineering, pp. 1188–1191, Springer, Berlin, Germany, 2015. View at: Google Scholar
  76. I. Stanica, M. I. Dascalu, C. N. Bodea, and A. D. Bogdan Moldoveanu, “VR job interview simulator: where virtual reality meets artificial intelligence for education,” in Proceedings of the 2018 Zooming Innovation in Consumer Technologies Conference, Novi Sad, Serbia, May 2018. View at: Publisher Site | Google Scholar
  77. N. Malandrakis, A. Potamianos, G. Evangelopoulos, and A. Zlatintsi, A Supervised Approach To Movie Emotion Tracking, National Technical University of Athens, Athens, Greece, 2011.
  78. H. H. S. Ip, S. W. L. Wong, D. F. Y. Chan et al., “Enhance emotional and social adaptation skills for children with autism spectrum disorder: a virtual reality enabled approach,” Computers & Education, vol. 117, pp. 1–15, 2018. View at: Publisher Site | Google Scholar
  79. J. Wong, “What is virtual reality?” Virtual Reality Information Resources, American Library Association, Chicago, IL, USA, 1998. View at: Publisher Site | Google Scholar
  80. I. E. Sutherland, C. J. Fluke, and D. G. Barnes, “The ultimate display. Multimedia: from Wagner to virtual reality,” pp. 506–508, 1965. View at: Google Scholar
  81. R. G. Klein and I. E. Sutherland, “A head-mounted three dimensional display,” in Proceedings of the December 9–11, 1968, Fall Joint Computer Conference, Part I, pp. 757–764, New York, NY, USA, December 1968. View at: Publisher Site | Google Scholar
  82. P. Milgram and F. Kishimo, “A taxonomy of mixed reality,” IEICE Transactions on Information and Systems, vol. 77, no. 12, pp. 1321–1329, 1994. View at: Google Scholar
  83. Z. Pan, A. D. Cheok, H. Yang, J. Zhu, and J. Shi, “Virtual reality and mixed reality for virtual learning environments,” Computers & Graphics, vol. 30, no. 1, pp. 20–28, 2006. View at: Publisher Site | Google Scholar
  84. M. Mekni and A. Lemieux, “Augmented reality: applications, challenges and future trends,” Applied Computational Science, vol. 20, pp. 205–214, 2014. View at: Google Scholar
  85. M. Billinghurst, A. Clark, and G. Lee, “A survey of augmented reality foundations and trends R in human-computer interaction,” Human-Computer Interaction, vol. 8, no. 3, pp. 73–272, 2014. View at: Publisher Site | Google Scholar
  86. S. Martin, G. Diaz, E. Sancristobal, R. Gil, M. Castro, and J. Peire, “New technology trends in education: seven years of forecasts and convergence,” Computers & Education, vol. 57, no. 3, pp. 1893–1906, 2011. View at: Publisher Site | Google Scholar
  87. Y. Yang, Q. M. J. Wu, W.-L. Zheng, and B.-L. Lu, “EEG-based emotion recognition using hierarchical network with subnetwork nodes,” IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 2, pp. 408–419, 2018. View at: Publisher Site | Google Scholar
  88. T. T. Beemster, J. M. van Velzen, C. A. M. van Bennekom, M. F. Reneman, and M. H. W. Frings-Dresen, “Test-retest reliability, agreement and responsiveness of productivity loss (iPCQ-VR) and healthcare utilization (TiCP-VR) questionnaires for sick workers with chronic musculoskeletal pain,” Journal of Occupational Rehabilitation, vol. 29, no. 1, pp. 91–103, 2019. View at: Publisher Site | Google Scholar
  89. X. Liu, J. Zhang, G. Hou, and Z. Wang, “Virtual reality and its application in military,” IOP Conference Series: Earth and Environmental Science, vol. 170, no. 3, 2018. View at: Publisher Site | Google Scholar
  90. J. Mcintosh, M. Rodgers, B. Marques, and A. Cadle, The Use of VR for Creating Therapeutic Environments for the Health and Wellbeing of Military Personnel, Their Families and Their Communities, VDE VERLAG GMBH, Berlin, Germany, 2019.
  91. M. Johnson-Glenberg, “Immersive VR and education: embodied design principles that include gesture and hand controls,” Frontiers Robotics AI, vol. 5, pp. 1–19, 2018. View at: Publisher Site | Google Scholar
  92. Z. Lan, O. Sourina, L. Wang, and Y. Liu, “Real-time EEG-based emotion monitoring using stable features,” The Visual Computer, vol. 32, no. 3, pp. 347–358, 2016. View at: Publisher Site | Google Scholar
  93. D. S. Kumaran, S. Y. Ragavendar, A. Aung, and P. Wai, Using EEG-validated Music Emotion Recognition Techniques to Classify Multi-Genre Popular Music for Therapeutic Purposes, Nanyang Technological University, Nanyang Ave, Singapore, 2018.
  94. C. Lin, M. Liu, W. Hsiung, and J. Jhang, “Music emotion recognition based on two-level support vector classification,” Proceedings-International Conference on Machine Learning and Cybernetics, vol. 1, pp. 375–379, 2017. View at: Publisher Site | Google Scholar
  95. S. H. Chen, Y. S. Lee, W. C. Hsieh, and J. C. Wang, “Music emotion recognition using deep Gaussian process,” in Proceedings of the 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, vol. 2015, pp. 495–498, Hong Kong, China, December 2016. View at: Publisher Site | Google Scholar
  96. Y. An, S. Sun, and S. Wang, “Naive Bayes classifiers for music emotion classification based on lyrics,” in Proceedings-16th IEEE/ACIS International Conference on Computer and Information Science, ICIS 2017, no. 1, pp. 635–638, Wuhan, China, May 2017. View at: Publisher Site | Google Scholar
  97. J. Bai, K. Luo, J. Peng et al., “Music emotions recognition by cognitive classification methodologies,” in Proceedings of the 2017 IEEE 16th International Conference on Cognitive Informatics and Cognitive Computing, ICCI∗CC 2017, pp. 121–129, Oxford, UK, July 2017. View at: Publisher Site | Google Scholar
  98. R. Nawaz, H. Nisar, and V. V. Yap, “Recognition of useful music for emotion enhancement based on dimensional model,” in Proceedings of the 2nd International Conference on BioSignal Analysis, Processing and Systems (ICBAPS), Kuching, Malaysia, July 2018. View at: Google Scholar
  99. S. A. Y. Al-Galal, I. F. T. Alshaikhli, A. W. B. A. Rahman, and M. A. Dzulkifli, “EEG-based emotion recognition while listening to quran recitation compared with relaxing music using valence-arousal model,” in Proceedings-2015 4th International Conference on Advanced Computer Science Applications and Technologies, pp. 245–250, Kuala Lumpur, Malaysia, December 2015. View at: Publisher Site | Google Scholar
  100. C. Shahnaz, S. B. Masud, and S. M. S. Hasan, “Emotion recognition based on wavelet analysis of Empirical Mode Decomposed EEG signals responsive to music videos,” in Proceedings of the IEEE Region 10 Annual International Conference/TENCON, Singapore, November 2016. View at: Publisher Site | Google Scholar
  101. S. W. Byun, S. P. Lee, and H. S. Han, “Feature selection and comparison for the emotion recognition according to music listening,” in Proceedings of the International Conference on Robotics and Automation Sciences, pp. 172–176, Hong Kong, China, August 2017. View at: Publisher Site | Google Scholar
  102. J. Xu, F. Ren, and Y. Bao, “EEG emotion classification based on baseline strategy,” in Proceedings of 2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems, Nanjing, China, November 2018. View at: Publisher Site | Google Scholar
  103. S. Wu, X. Xu, L. Shu, and B. Hu, “Estimation of valence of emotion using two frontal EEG channels,” in Proceedings of the 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pp. 1127–1130, Kansas City, MO, USA, November 2017. View at: Publisher Site | Google Scholar
  104. H. Ullah, M. Uzair, A. Mahmood, M. Ullah, S. D. Khan, and F. A. Cheikh, “Internal emotion classification using EEG signal with sparse discriminative ensemble,” IEEE Access, vol. 7, pp. 40144–40153, 2019. View at: Publisher Site | Google Scholar
  105. H. Dabas, C. Sethi, C. Dua, M. Dalawat, and D. Sethia, “Emotion classification using EEG signals,” in ACM International Conference Proceeding Series, pp. 380–384, Las Vegas, NV, USA, June 2018. View at: Publisher Site | Google Scholar
  106. A. H. Krishna, A. B. Sri, K. Y. V. S. Priyanka, S. Taran, and V. Bajaj, “Emotion classification using EEG signals based on tunable-Q wavelet transform,” IET Science, Measurement & Technology, vol. 13, no. 3, pp. 375–380, 2019. View at: Publisher Site | Google Scholar
  107. R. Subramanian, J. Wache, M. K. Abadi, R. L. Vieriu, S. Winkler, and N. Sebe, “Ascertain: emotion and personality recognition using commercial sensors,” IEEE Transactions on Affective Computing, vol. 9, no. 2, pp. 147–160, 2018. View at: Publisher Site | Google Scholar
  108. M. K. Abadi, R. Subramanian, S. M. Kia, P. Avesani, I. Patras, and N. Sebe, “DECAF: MEG-based multimodal database for decoding affective physiological responses,” IEEE Transactions on Affective Computing, vol. 6, no. 3, pp. 209–222, 2015. View at: Publisher Site | Google Scholar
  109. T. H. Li, W. Liu, W. L. Zheng, and B. L. Lu, “Classification of five emotions from EEG and eye movement signals: discrimination ability and stability over time,” in Proceedings of the International IEEE/EMBS Conference on Neural Engineering, San Francisco, CA, USA, March 2019. View at: Publisher Site | Google Scholar
  110. T. Song, W. Zheng, P. Song, and Z. Cui, “EEG emotion recognition using dynamical graph convolutional neural networks,” IEEE Transactions on Affective Computing, vol. 3045, pp. 1–10, 2018. View at: Publisher Site | Google Scholar
  111. N. V. Kimmatkar and V. B. Babu, “Human emotion classification from brain EEG signal using multimodal approach of classifier,” in Proceedings of the ACM International Conference Proceeding Series, pp. 9–13, Galway, Ireland, April 2018. View at: Publisher Site | Google Scholar
  112. M. Zangeneh Soroush, K. Maghooli, S. Kamaledin Setarehdan, and A. Motie Nasrabadi, “Emotion classification through nonlinear EEG analysis using machine learning methods,” International Clinical Neuroscience Journal, vol. 5, no. 4, pp. 135–149, 2018. View at: Publisher Site | Google Scholar
  113. J. Marín-Morales, J. L. Higuera-Trujillo, A. Greco et al., “Affective computing in virtual reality: emotion recognition from brain and heartbeat dynamics using wearable sensors,” Scientific Reports, vol. 8, no. 1, pp. 1–15, 2018. View at: Publisher Site | Google Scholar
  114. W. Zhang, L. Shu, X. Xu, and D. Liao, “Affective virtual reality system (AVRS): design and ratings of affective VR scenes,” in Proceedings of the 2017 International Conference on Virtual Reality and Visualization, ICVRV 2017, pp. 311–314, Zhengzhou, China, October 2017. View at: Publisher Site | Google Scholar
  115. A. Kim, M. Chang, Y. Choi, S. Jeon, and K. Lee, “The effect of immersion on emotional responses to film viewing in a virtual environment,” in Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces, pp. 601-602, Reutlingen, Germany, March 2018. View at: Publisher Site | Google Scholar
  116. K. Hidaka, H. Qin, and J. Kobayashi, “Preliminary test of affective virtual reality scenes with head mount display for emotion elicitation experiment,” in Proceedings of the International Conference On Control, Automation And Systems, (Iccas), pp. 325–329, Ramada Plaza, Korea, October 2017. View at: Google Scholar
  117. J. Fan, J. W. Wade, A. P. Key, Z. E. Warren, and N. Sarkar, “EEG-based affect and workload recognition in a virtual driving environment for ASD intervention,” IEEE Transactions on Biomedical Engineering, vol. 65, no. 1, pp. 43–51, 2018. View at: Publisher Site | Google Scholar
  118. V. Lorenzetti, B. Melo, R. Basílio et al., “Emotion regulation using virtual environments and real-time fMRI neurofeedback,” Frontiers in Neurology, vol. 9, pp. 1–15, 2018. View at: Publisher Site | Google Scholar
  119. M. Horvat, M. Dobrinic, M. Novosel, and P. Jercic, “Assessing emotional responses induced in virtual reality using a consumer eeg headset: a preliminary report,” in Proceedings of the 2018 41st International Convention On Information And Communication Technology, Electronics And Microelectronics, Opatija, Croatia, May 2018. View at: Publisher Site | Google Scholar
  120. K. Guo, J. Huang, Y. Yang, and X. Xu, “Effect of virtual reality on fear emotion base on EEG signals analysis,” in Proceedings of the 2019 IEEE MTT-S International Microwave Biomedical Conference (IMBioC), Nanjing, China, May 2019. View at: Publisher Site | Google Scholar
  121. S. Koelstra, C. Muhl, M. Soleymani et al., “DEAP: a database for emotion analysis using physiological signals,” IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18–31, 2012. View at: Publisher Site | Google Scholar
  122. A. Patras, G. Valenza, L. Citi, and E. P. Scilingo, “Arousal and valence recognition of affective sounds based on electrodermal activity,” IEEE Sensors Journal, vol. 17, no. 3, pp. 716–725, 2017. View at: Publisher Site | Google Scholar
  123. M. Soleymani, M. N. Caro, E. M. Schmidt, C. Y. Sha, and Y. H. Yang, “1000 songs for emotional analysis of music.,” in CrowdMM 2013-Proceedings of the 2nd ACM International Workshop on Crowdsourcing for Multimedia, Barcelona, Spain, October 2013. View at: Publisher Site | Google Scholar
  124. X. Q. Huo, W. L. Zheng, and B. L. Lu, “Driving fatigue detection with fusion of EEG and forehead EOG,” in Proceedings of the International Joint Conference on Neural Networks, Vancouver, BC, Canada, July 2016. View at: Publisher Site | Google Scholar
  125. M. Soleymani, S. Asghari-Esfeden, M. Pantic, and Y. Fu, “Continuous emotion detection using EEG signals and facial expressions,” in Proceedings of the IEEE International Conference on Multimedia and Expo, Chengdu, China, July 2014. View at: Publisher Site | Google Scholar
  126. W. L. Zheng and B. L. Lu, “A multimodal approach to estimating vigilance using EEG and forehead EOG,” Journal of Neural Engineering, vol. 14, no. 2, 2017. View at: Publisher Site | Google Scholar
  127. S. Katsigiannis and N. Ramzan, “DREAMER: a database for emotion recognition through EEG and ecg signals from wireless low-cost off-the-shelf devices,” IEEE Journal of Biomedical and Health Informatics, vol. 22, no. 1, pp. 98–107, 2018. View at: Publisher Site | Google Scholar
  128. A. C. Constantinescu, M. Wolters, A. Moore, and S. E. MacPherson, “A cluster-based approach to selecting representative stimuli from the International Affective Picture System (IAPS) database,” Behavior Research Methods, vol. 49, no. 3, pp. 896–912, 2017. View at: Publisher Site | Google Scholar
  129. A. Marchewka, Ł. Żurawski, K. Jednoróg, and A. Grabowska, “The Nencki Affective Picture System (NAPS): introduction to a novel, standardized, wide-range, high-quality, realistic picture database,” Behavior Research Methods, vol. 46, no. 2, pp. 596–610, 2014. View at: Publisher Site | Google Scholar
  130. S. M. U. Saeed, S. M. Anwar, M. Majid, and A. M. Bhatti, “Psychological stress measurement using low cost single channel EEG headset,” in Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, Abu Dhabi, United Arab Emirates, December 2015. View at: Publisher Site | Google Scholar
  131. S. Jerritta, M. Murugappan, R. Nagarajan, and K. Wan, “Physiological signals based human emotion recognition: a review,” in Proceedings-2011 IEEE 7th International Colloquium on Signal Processing and its Applications, Penang, Malaysia, March 2011. View at: Publisher Site | Google Scholar
  132. C. Maaoul and A. Pruski, “Emotion recognition through physiological signals for human-machine communication,” Cutting Edge Robotics, vol. 13, 2010. View at: Publisher Site | Google Scholar
  133. C. Liu, P. Rani, and N. Sarkar, “An empirical study of machine learning techniques for affect recognition in human-robot interaction,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, Sendai, Japan, September 2005. View at: Publisher Site | Google Scholar
  134. G. Rigas, C. D. Katsis, G. Ganiatsas, and D. I. Fotiadis, A User Independent, Biosignal Based, Emotion Recognition Method, Springer, Berlin, Germany, 2007.
  135. C. Zong and M. Chetouani, “Hilbert-Huang transform based physiological signals analysis for emotion recognition,” in Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, ISSPIT, pp. 334–339, Ajman, United Arab Emirates, December 2009. View at: Publisher Site | Google Scholar
  136. L. Li and J. H. Chen, “Emotion recognition using physiological signals from multiple subjects,” in Proceedings of the International Conference on Intelligent Information Hiding and Multimedia, pp. 437–446, Pasadena, CA, USA, December 2006. View at: Publisher Site | Google Scholar
  137. A. Haag, S. Goronzy, P. Schaich, and J. Williams, “Emotion recognition using bio-sensors: first steps towards an automatic system,” Lecture Notes in Computer Science, Springer, Berlin, Germany, 2004. View at: Publisher Site | Google Scholar
  138. J. Kim and E. Andre, “Emotion recognition based on physiological changes in music listening,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 12, pp. 2067–2083, 2008. View at: Publisher Site | Google Scholar
  139. F. Nasoz, K. Alvarez, C. L. Lisetti, and N. Finkelstein, “Emotion recognition from physiological signals using wireless sensors for presence technologies,” Cognition, Technology & Work, vol. 6, no. 1, pp. 4–14, 2004. View at: Publisher Site | Google Scholar
  140. Y. Li, W. Zheng, Y. Zong, Z. Cui, and T. Zhang, “A Bi-hemisphere domain adversarial neural network model for EEG emotion recognition,” IEEE Transactions on Affective Computing, 2019. View at: Publisher Site | Google Scholar
  141. K. Zhou, H. Qin, and J. Kobayashi, “Preliminary test of affective virtual reality scenes with head mount display for emotion elicitation experiment,” in Proceedings of the 17th International Conference on Control, Automation and Systems (ICCAS), pp. 325–329, Jeju, South Korea, October 2017. View at: Publisher Site | Google Scholar
  142. M. Soleymani, J. Lichtenauer, T. Pun, and M. Pantic, “A multimodal database for affect recognition and implicit tagging,” IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 42–55, 2012. View at: Publisher Site | Google Scholar
  143. S. Gilda, H. Zafar, C. Soni, and K. Waghurdekar, “Smart music player integrating facial emotion recognition and music mood recommendation,” in Proceedings of the 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), pp. 154–158, IEEE, Chennai, India, March 2017. View at: Publisher Site | Google Scholar
  144. W. Shi and S. Feng, “Research on music emotion classification based on lyrics and audio,” in Proceedings of the 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), pp. 1154–1159, Chongqing, China, October 2018. View at: Publisher Site | Google Scholar
  145. A. V. Iyer, V. Pasad, S. R. Sankhe, and K. Prajapati, “Emotion based mood enhancing music recommendation,” in Proceedings of the 2017 2nd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), pp. 1573–1577, Bangalore, India, May 2017. View at: Publisher Site | Google Scholar
  146. Y. P. Lin and T. P. Jung, “Improving EEG-based emotion classification using conditional transfer learning,” Frontiers in Human Neuroscience, vol. 11, pp. 1–11, 2017. View at: Publisher Site | Google Scholar
  147. Y. P. Lin, C. H. Wang, T. P. Jung et al., “EEG-based emotion recognition in music listening,” IEEE Transactions on Bio-Medical Engineering, vol. 57, no. 7, pp. 1798–1806, 2010. View at: Publisher Site | Google Scholar
  148. M. D. Rinderknecht, O. Lambercy, and R. Gassert, “Enhancing simulations with intra-subject variability for improved psychophysical assessments,” PLoS One, vol. 13, no. 12, 2018. View at: Publisher Site | Google Scholar
  149. J. H. Yoon and J. H. Kim, “Wavelet-based statistical noise detection and emotion classification method for improving multimodal emotion recognition,” Journal of IKEEE, vol. 22, no. 4, pp. 1140–1146, 2018. View at: Google Scholar
  150. D. Liao, W. Zhang, G. Liang et al., “Arousal evaluation of VR affective scenes based on HR and SAM,” in 2019 IEEE MTT-S International Microwave Biomedical Conference (IMBioC), Nanjing, China, May 2019. View at: Publisher Site | Google Scholar
  151. T. Karydis, F. Aguiar, S. L. Foster, and A. Mershin, “Performance characterization of self-calibrating protocols for wearable EEG applications,” in Proceedings of the 8th ACM International Conference on PErvasive Technologies Related to Assistive Environments-PETRA ’15, pp. 1–7, Corfu, Greece, July 2015. View at: Publisher Site | Google Scholar



About the Online EEGer Introduction to Neurofeedback Course (EIN)

This is our complete introductory package and includes the three core components you need to get started in clinical neurofeedback. This section is a brief overview of what's included. For more detailed descriptions and frequently asked questions, see below.

  • EEGer Training Kit (ETK) -- You will be sent the EEG equipment you'll need. You'll need to provide a computer that meets minimum requirements. Installation & configuration help is available.
  • Introduction to Neurofeedback Didactic (IND) -- You'll get access to lectures with Angelika Sadar: BCIA-approved instructional lecture recordings to learn at your own pace.
  • EEGer Online Practicum (EOP) -- You'll attend 4 hands-on instructional webinars to learn electrode placements, sensor application, and the basics of using the software. These sessions will also introduce you to the mentoring process.

8. Conclusion

In this paper, we reviewed the clinical applications of neurofeedback, various treatment protocols, and some of the system designs based on BCI and VR technology.

In neurofeedback, EEG is usually recorded, various brain-activity components are extracted, and these are fed back to the subject. During this procedure, subjects become aware of the changes that occur during training and can assess their progress toward optimal performance. Electrode placement is chosen according to specific brain functions and specific symptoms; taking this regional information about the scalp into account simplifies the entire treatment process. There are several protocols in neurofeedback training, but the alpha, beta, theta, and alpha/theta protocols are the most commonly used.
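Viewed as a processing loop, a neurofeedback session amounts to repeatedly estimating the power of a chosen frequency band from a short EEG epoch and mapping that estimate to a feedback signal. The Python sketch below is only an illustration of that idea under stated assumptions, not a description of any of the reviewed systems: it assumes a 1-D NumPy array of EEG samples, estimates alpha-band (8-12 Hz) power with Welch's method from SciPy, and the function names (band_power, feedback_score) and the baseline-normalisation step are hypothetical choices.

import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def band_power(eeg, fs, band=(8.0, 12.0)):
    # Estimate the power spectral density with Welch's method and
    # integrate it over the requested band (alpha, 8-12 Hz, by default).
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * int(fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return trapezoid(psd[mask], freqs[mask])

def feedback_score(eeg, fs, baseline_alpha):
    # Relative change of alpha power versus a resting baseline;
    # a positive score means "more alpha than at rest".
    return (band_power(eeg, fs) - baseline_alpha) / baseline_alpha

# Toy usage: two seconds of noise with an added 10 Hz (alpha-range) component.
fs = 256
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
baseline = band_power(1e-6 * rng.standard_normal(t.size), fs)
epoch = 5e-6 * np.sin(2 * np.pi * 10 * t) + 1e-6 * rng.standard_normal(t.size)
print(feedback_score(epoch, fs, baseline))

In a real protocol, a score like this would drive visual or auditory feedback on every epoch, with the band and reward thresholds chosen according to the protocol in use (alpha, beta, theta, or alpha/theta).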

A BCI is an EEG-based communication device. A virtual environment (VE) is a human-computer interface in which users can move their viewpoint freely in real time. The purpose of using a VE is to construct an environment with natural interactivity and to create a realistic sensation through multimodal stimulation. Three-dimensional VR is much more engaging and interesting than most two-dimensional environments.

To date, many studies have been conducted on neurofeedback therapy and its effectiveness in treating many diseases. However, there are methodological limitations and clinical ambiguities. For the alpha protocols, for example, open questions include how many sessions participants need before they learn to exert control over their own alpha waves, how many sessions are needed before such training produces the expected effect on performance, and how long the desired effects last without feedback (long-term effects). Thus, it is necessary to establish standard protocols for performing neurofeedback.

Like other treatments, neurofeedback has its own pros and cons. Although it is a safe, non-invasive procedure that has shown improvement in the treatment of many problems and disorders, such as ADHD, anxiety, depression, epilepsy, ASD, insomnia, drug addiction, schizophrenia, learning disabilities, dyslexia, and dyscalculia, its validity has been questioned for lack of conclusive scientific evidence of effectiveness. Moreover, it is an expensive procedure that many insurance companies do not cover, it is time-consuming, its benefits are not long-lasting, and it might take several months before the desired improvements appear (Mauro & Cermak, 2006).
