Sunday, January 24, 2016

Team 4 Progress Report 1-24-16

This past week, Sarah and I compared one of last year's group's box scenarios with the BCI examples in the OpenViBE library to determine where and why that group removed certain boxes. In doing so, we learned about and researched the individual boxes. These are our notes as of last week:

Jan 20


  • The Identity box can be used to replicate the exact same signals and make a scenario neater. It's never strictly necessary, though.
  • Modify a box's inputs/outputs by right-clicking it.
  • I'm building this scenario without Identity boxes so I can see what each box does individually, and I'll add them at the end if the scenario gets messy.
  • I'm scattering Signal Display boxes throughout motor-imagery-bci-2-classifier-trainer to see what each algorithm individually accomplishes.
  • Last year's group apparently made their scenario by modifying motor-imagery-bci-2-classifier-trainer rather than creating a new one entirely. I think it's easier to create a new one, but I'll work with both approaches.
  • Putting a Signal Display box in motor-imagery-bci-2-classifier-trainer didn't work; I think it's because the data file in the Generic Stream Reader isn't meant for this. I'm trying different files right now to see what will work, so I can visually depict what the algorithms do.
  • The original motor-imagery-bci-2-classifier-trainer works with a file that has to be recorded first, so I'll be working with last year's one-minute raw data file.
  • Why did last year's group cut out the preprocessing and start right away with feature extraction?
  • The three preprocessing boxes that last year's group cut out are Reference Channel, Channel Selector, and Spatial Filter (Surface Laplacian).
  • All of those boxes deal with multiple channels, and the NeuroSky has only one channel, so they're useless for us.
  • Reference Channel takes a selected channel and subtracts it from all other channels. This is used to establish a 'normal' baseline and subtract it, so all that's left is the activity. For example: there's normal resting activity in my brain, plus the signals that we want to extract, so we subtract the normal activity to be left with only the desired signals. Say the normal resting level is 4 and the activity in my brain now is 10; then 10 - 4 = 6, and so on. To get a channel with a good resting level, choose one located somewhere with no brain activity, e.g., the nose. (See the first sketch after this list.)
  • Channel Selector simply selects which channels to view. Again, only useful if there are several channels and information about them.
  • The spatial filter used, the surface Laplacian, works by multiplying the signal from a channel by 4 and subtracting the signals of the 4 neighboring channels; imagine taking an average, but in reverse. This is used to sharpen the signal of the chosen channel, and it makes up for data blurred by the scalp being in the way, etc. It's best used with something like a 64-channel headset. C3 and C4 are usually used for right/left hand movements, and apparently there are specific ways to place the sensors and decode them, but that's not relevant to our project. (Googled around a lot, but it was too complicated to understand; the OpenViBE forum was the simplest and explained it well enough.) See the second sketch after this list.
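
To check our understanding of the Reference Channel box, here's a minimal numpy sketch of the subtraction we think it performs. The channel layout and all the numbers are made up for illustration; this is not the box's actual code.

```python
import numpy as np

# Hypothetical multi-channel recording: 3 channels x 6 samples.
# Row 0 is the reference (e.g., a nose electrode with no brain activity).
signals = np.array([
    [4.0, 4.0, 4.0, 4.0, 4.0, 4.0],      # reference channel (resting level)
    [10.0, 9.0, 11.0, 10.0, 8.0, 12.0],  # channel with activity
    [7.0, 6.0, 7.0, 5.0, 6.0, 8.0],
])

ref_index = 0
# Subtract the reference channel from every other channel, leaving
# only the activity above the resting baseline (10 - 4 = 6, etc.).
referenced = np.delete(signals, ref_index, axis=0) - signals[ref_index]
print(referenced[0])  # [6. 5. 7. 6. 4. 8.]
```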
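And a similar sketch for the surface Laplacian as described above (4 × the center channel minus its 4 neighbors). The helper function and the data are our own invention to illustrate the idea, not OpenViBE's implementation.

```python
import numpy as np

def surface_laplacian(center, neighbors):
    """Sharpen one channel: 4 * center minus the sum of its 4 neighbors.

    center:    1-D array of samples from the chosen channel (e.g., C3).
    neighbors: list of four 1-D arrays from the surrounding channels.
    """
    assert len(neighbors) == 4
    return 4 * np.asarray(center) - np.sum(neighbors, axis=0)

# Made-up numbers: blur that is common to the center and its neighbors
# largely cancels out, so the center's local activity stands out.
c3        = np.array([10.0, 12.0, 11.0])
neighbors = [np.array([9.0, 9.0, 9.0])] * 4
print(surface_laplacian(c3, neighbors))  # [ 4. 12.  8.]
```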


Jan 21
  • What is a temporal filter?
  • Chebyshev filter; low-pass, high-pass, band-pass, band-stop.
  • A temporal filter filters out frequencies above and below the chosen cutoffs.
  • Default settings for the Temporal Filter box are low cut = 29 Hz, high cut = 40 Hz. motor-imagery-bci-2-classifier-trainer has it set to low cut = 8 Hz, high cut = 24 Hz.
  • Are alpha and beta waves within this frequency range? Let's ask Ty.
  • Alpha = 9-13 Hz; beta = 13-30 Hz.
  • Now it makes sense: they're filtering out all brain signals except alpha and beta.
  • Last year's modification used two Temporal Filter boxes, one for alpha and one for beta; they thus forked the extraction and improved on the scenario. (See the band-pass sketch after this list.)
  • Last year's scenario is missing the Stimulation Based Epoching box.
  • Stimulation Based Epoching cuts out activity based on a stimulation marker and time. This one, for example, cuts 4 seconds starting half a second after the stimulation. Since each recording is 5 seconds, we cut the half second at the start and finish. So all we get is one signal, either left or right; we don't know which. (See the epoching sketch after this list.)
  • The Time Based Epoching box also cuts off certain fragments of data, but based on time, not stimulation. So technically it can be used for the same thing if the time and stimulation are known.
  • That's probably why last year's group cut out Stimulation Based Epoching: they just used the Time Based Epoching box to cover both functions.
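
OpenViBE does the filtering for us, but here's a scipy sketch of the band-pass idea with last year's alpha/beta split. The sampling rate, filter order, and ripple are assumptions we picked for the example, not settings from their scenario.

```python
import numpy as np
from scipy import signal

fs = 512  # assumed sampling rate in Hz; the real headset rate may differ

# One band-pass filter per band, mirroring the two Temporal Filter boxes:
# alpha = 9-13 Hz, beta = 13-30 Hz (Chebyshev type I, 1 dB passband ripple).
b_alpha, a_alpha = signal.cheby1(4, 1, [9, 13], btype="bandpass", fs=fs)
b_beta,  a_beta  = signal.cheby1(4, 1, [13, 30], btype="bandpass", fs=fs)

# Fake 5-second single-channel recording to run the filters over.
t = np.arange(0, 5, 1 / fs)
eeg = np.random.randn(t.size)

alpha = signal.lfilter(b_alpha, a_alpha, eeg)  # alpha-band activity
beta  = signal.lfilter(b_beta,  a_beta,  eeg)  # beta-band activity
```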
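And a sketch of the epoching described above: cut a 4-second chunk starting 0.5 s after each stimulation. The stimulation times, sampling rate, and data are invented; OpenViBE's epoching boxes do this internally.

```python
import numpy as np

fs = 512                        # assumed sampling rate in Hz
eeg = np.random.randn(30 * fs)  # fake 30-second single-channel recording

# Hypothetical stimulation times in seconds (start of each 5 s trial).
stim_times = [0.0, 5.0, 10.0]

offset, duration = 0.5, 4.0     # skip 0.5 s after the marker, keep 4 s

epochs = []
for stim in stim_times:
    start = int((stim + offset) * fs)
    epochs.append(eeg[start:start + int(duration * fs)])

# With fixed 5 s trials, time-based epoching reproduces the same cuts:
# one epoch every 5 s, shifted by the same 0.5 s offset.
print(len(epochs), epochs[0].shape)  # 3 (2048,)
```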
