Wednesday, June 1, 2016

Handball Log updated for the month of May

We have continued exploring how to write files, using the same method as in April. Below are the notes we took documenting the process.

May 5
  • We used last year's desktop scenario for this (motor-imagery-bci-2-classifier-trainer.xml)
  • The timeout box is useful for stopping the scenario
  • The console is the box that displays the info
  • The percentage in the console was 52.52%.
  • They recommended that if it was less than 65% we should record a new scenario
  • I’ll record a new data file using the 1-acquisition scenario
  • I’ll do a full timed one, not just one minute
  • Recorded a full file
  • Tried running it. The timeout stopped it after 60 secs, so I changed it to 500 secs (8 minutes and 20 seconds)
  • Even so, the one-minute cut-off data yielded 57.56%. Already better than the old one, even though this is just a cut-off piece!
  • Hmm. The finished result is 55.59%. That means the one-minute cut-off was better.
  • Perhaps it's because I recorded the scenario in a noisy room, and I was a bit restless throughout. Next time I'll try to record a new scenario with less noise.
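
Cutting a recording down to its first minute, as above, just means keeping the first 60 × fs samples. A minimal sketch with NumPy, where the array and the 512 Hz sampling rate are made-up stand-ins (OpenVibe itself handles this through its reader/timeout settings):

```python
import numpy as np

FS = 512  # assumed sampling rate in Hz; the headset's real rate may differ

def cut_to_seconds(signal, seconds, fs=FS):
    """Keep only the first `seconds` seconds of a recording."""
    return signal[:int(seconds * fs)]

full_recording = np.zeros(500 * FS)   # stand-in for the full 500 s file
one_minute = cut_to_seconds(full_recording, 60)
print(len(one_minute) / FS)           # 60.0
```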

May 10
  • 54.89%, with the new scenario that Brian recorded.
  • We can try cutting it to one minute and see if it's better.
  • We need at least 65%, ideally.

May 17
  • Cut it to 60 seconds. It worked better: 60.5%. Yay.
  • We're using that one.

May 19
  • Cut it to 200 seconds. Nope.
  • Cut it to 30 seconds. 64.4%. Doesn't prove anything, because the first 30 seconds of data contain almost nothing.
  • I'm trying the old scenario again, with 500 seconds. Too bad: 55.59% again.

May 24

  • Tried motor-imagery-bci-4. Shows everything. Going to use this at the presentation.
  • Let's try SVM once. I set it to Brian's recording for 200 seconds.
  • 56%.
  • Let's try the same thing with 120 seconds (two minutes).
  • Let's try PLDA. I set it to Brian's recording for 120 seconds.
  • 56% again.
  • Let's try the probabilistic shrinkage one. Same settings.
  • 55%.
  • Let's try shrinkage LDA. Same settings.
  • Also 55%. Back to regular LDA.
  • 56%.
  • Tried moving up partitions for k-fold test to 10.
  • Same 56%.
  • Tried moving it down to 5.
  • Also 56%.
  • Tried moving it up to 15.
  • Did not help. Moved it back to 8, let's see.
  • Moved it down to 6, let's see.
  • OK, this clearly doesn't influence the results. Leaving it at 8.
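
The classifier and k-fold sweep above can be reproduced outside OpenVibe with scikit-learn. This is only a sketch on synthetic two-class feature vectors (not our recordings), so the printed accuracies are placeholders, but it illustrates the same comparison and why the partition count barely moves the number:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for left/right band-power feature vectors.
X = np.vstack([rng.normal(0.0, 1.0, (100, 4)),
               rng.normal(0.7, 1.0, (100, 4))])
y = np.array([0] * 100 + [1] * 100)

classifiers = [
    ("LDA", LinearDiscriminantAnalysis()),
    ("shrinkage LDA", LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")),
    ("linear SVM", SVC(kernel="linear")),
]
for name, clf in classifiers:
    for k in (5, 8, 10, 15):
        cv = KFold(n_splits=k, shuffle=True, random_state=0)
        acc = cross_val_score(clf, X, y, cv=cv).mean()
        print(f"{name:14s} k={k:2d} accuracy={acc:.3f}")
```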

Tuesday, April 19, 2016

Handball Log up to April 14



  • Experiment: we tried to connect the headset, record, and write a scenario to a desktop file.
  • Result: failure. The headset wouldn’t connect to port 2. After multiple tries it finally connected, but it wouldn’t work with the acquisition server.
  • Change of plans: we built a new scenario using generic stream reader, generic stream writer, and signal display. In the generic stream reader we put the raw data file. We made sure it writes to a desktop file.
  • Then, we copied and pasted the desktop file into the generic stream reader, cut out the generic stream writer, and played it with signal display. It worked! The file played the same as the raw data.
  • We have figured out how to write a file!

Tuesday, April 5, 2016

Link to OpenVibe Steminar Presentation

https://docs.google.com/a/erhsnyc.net/presentation/d/1G9hXCRowvgIMHbEosMKluwAcj1BqS0DAO1oVyMZJrzo/edit?usp=sharing

Saturday, March 12, 2016

How to connect the neurosky headset to openvibe

This post shows how to connect the NeuroSky Mindwave headset to the OpenVibe acquisition server:

  1. Take the NeuroSky USB and insert it into the USB port of the computer
  2. Turn on the NeuroSky headset
  3. Go to the search bar and type "NeuroSky device manager"
  4. You should see "NeuroSky installation complete"
  5. If you want, you can press "forget this device" and allow the headset to be paired again. If you choose to do this, it will say "searching for new device," and the NeuroSky headset should re-pair within a minute
  6. Open the OpenVibe Acquisition Server
  7. Press Connect 

Tuesday, February 23, 2016

Progress Report 2/23/16

Before the mid-winter break, Sarah and I were finally able to connect the headset and pair it with OpenVibe. The next step is to record live data using this headset and feed it into our box scenarios. The steps for connecting the headset to the computer are below:


  1. Change the COM port, because the NeuroSky headset defaults to port 22 and the computer only scans ports 1-16. Go to "modify Bluetooth settings" and create more ports.
  2. Go to control panel and then to all control panel items
  3. Go to device manager and then ports (COM & LPT)
  4. Right-click the mouse and then select properties followed by port settings
  5. Go to advanced and switch the COM port to COM 2
  6. To check the device's connection, uninstall and reinstall the headset. Then run the acquisition server and see if data is being recorded. 

Tuesday, February 2, 2016

Team 4 Progress Report 1-31-16

During the Hackathon seminar on Friday, Sarah and I made a lot of progress. We completed the preprocessing and data extraction part of our project by constructing a box scenario, using references from last year's model. Now, we have to move onto the processing aspect of our project. In addition, we added onto our handball log.

Jan 29
  • Why didn't they use a stimulation-based epoching box? It's not efficient to use all the data; stimulation-based epoching cuts it off to a convenient one-signal block.
  • How would you know how to set the time intervals to minimize overlap in the time-based epoching?
  • I think we should use a stimulation-based epoching box; it increases efficiency.
  • x = how many Hz
  • Input signal is most likely amplitude.
  • x·x, average, then log(1 + x): computes amplitude and wave power.
  • Timeout is an unstable box and we will not use it.
  • You want to be in an alert state of mind when you think left or right, so set the threshold at beta.
  • We need to figure out how to duplicate/share scenarios across computers.
  • We used the same settings for our time-based epoching box.
  • The feature aggregator converts the matrices into feature vectors. This is important because all the algorithms handle extraction and training using feature vectors.
  • The Graz visualization box provides feedback for the experiment.
  • The online scenario doesn't divide it in two parts; doesn't seem to make much of a difference.
  • Why do they use identity to copy the original data back into the classifier trainer? We're not going to do that.
  • The difference between the online and offline versions is that the offline one can only be used with a prerecorded scenario, while the online one works with the acquisition client to receive raw original data and visualize the end result back to the user.
  • So technically, the Graz visualization box can be substituted with an actual app to feed the data into and make it work.
  • So motor-imagery-bci-4-replay is basically the same as the online one, but substitutes a pre-recorded file and replays it.
  • We're going to use that with last year's data and LDA specifications and see if it works.
  • OpenVibe Designer always opens with four scenarios.
  • We are going to paste the file from last year into the classifier processor box to see what it does. Nothing happened.
  • handball-replay.xml: JACKPOT: this allows us to replay the online recorded file and watch the corresponding feedback using openvibe-vr-demo-handball.
  • The classifier processor is classifying the mental activity into 2 classes: left and right movements.
  • The button VRPN server is used as multiple switches operating at once. Each button can be set to become active/inactive at a chosen time. It tells the handball application which step of the experiment we're at and also gives signals to the user.
  • The classifier processor box is a generic box for classifying data (feature vectors); it works in conjunction with the classifier trainer box. Its role is to expose a generic interface to the rest of the BCI pipeline.
  • So now that we understand the preprocessing, we go to the classifier trainer part, where we have to train the algorithm.
  • Adding to Gantt chart: LDA algorithm
  • Adding to Gantt chart: watch part of the video on training algorithms
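
The "x·x, average, then log(1 + x)" note above is the standard band-power feature: square each sample, average over the epoch, then compress with a log. A minimal sketch on a made-up one-channel epoch (the 512 Hz sampling rate is an assumption, not necessarily the NeuroSky's):

```python
import numpy as np

def band_power(epoch):
    """Square the samples, average over the epoch, then log(1 + x),
    as in the feature-extraction step of the scenario."""
    return np.log1p(np.mean(np.square(epoch)))

FS = 512                             # assumed sampling rate
t = np.arange(FS) / FS               # one second of samples
epoch = np.sin(2 * np.pi * 10 * t)   # a pure 10 Hz (alpha-range) tone

# sin^2 averages to 0.5 over whole cycles, so this prints log(1.5) ~ 0.405
print(band_power(epoch))
```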

Sunday, January 24, 2016

Team 4 Progress Report 1-24-16

This past week, Sarah and I compared one of last year's group's box scenario with the bci examples in the OpenVibe library to determine where and why that group specifically got rid of certain boxes. In doing so, we learned and researched the different individual boxes. These are our notes currently as of last week:

Jan 20


  • Identity can be used to replicate exact same things and make scenario neater. It's never necessary though.
  • Modify inputs/outputs by right clicking.
  • I'm doing this scenario without identity so I can see what everything individually does, and then I'll add it in the end if it becomes messy.
  • I'm scattering signal display boxes throughout the motor-imagery-bci-2-classifier-trainer to see what the algorithms individually accomplished.
  • Last year's group apparently made their scenario by modifying motor-imagery-bci-2-classifier-trainer, not creating a new one entirely. I think it's easier to create a new one, but I'll work with both examples.
  • Putting signal display in motor-imagery-bci-2-classifier-trainer didn't work; I think it's because the data file in the generic stream reader isn't for this. I'm trying different files right now to see what'll work, so I can visually depict what the algorithms do.
  • The original motor-imagery-bci-2-classifier-trainer works with a file that has to be recorded. So I'll be working with the raw data 1 min file from last year.
  • Why did last year's group cut out the preprocessing and start right away with feature extraction?
  • So the three preprocessing boxes that last year's group cut out are reference channel, channel selector, and spatial filter (surface Laplacian).
  • All of those boxes deal with different channels, and the neurosky only has one channel. So they're useless.
  • Reference channel takes a selected channel and subtracts it from all other channels. This is used to establish a 'normal' base and subtract it, so all that's left is the activity. For example: there's normal resting activity in my brain, plus the signals from the activity we want to extract. So we subtract the normal activity to be left with only the desired signals. Say the normal resting level is 4, and the activity in my brain now is 10: 10 - 4 = 6. To select a channel with a good resting level, choose one located somewhere with no activity, e.g., the nose.
  • Channel selector simply selects the channels to view. Again, useful if you have several channels and information about them.
  • The spatial filter used, surface Laplacian, works by multiplying the signal from a channel by 4 and subtracting the values of its 4 neighboring channels' signals. Think of it as taking an average in reverse. This is used to sharpen the signals of the chosen channel, and it makes up for data blurred by the scalp being in the way. It is best used with a 64-channel headset. C3 and C4 are usually used for right/left hand movements; apparently there are specific ways to place the sensors and decode them. Not relevant to our project. (Googled around a lot, but too complicated to understand. The OpenVibe forum was simplest and explained it well enough.)
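
The reference channel and surface Laplacian boxes described above are just channel arithmetic. We never used them (the NeuroSky has a single channel), but here's a sketch with made-up numbers, including the 10 - 4 = 6 example from the notes:

```python
import numpy as np

def reference_channel(signals, ref_idx):
    """Subtract a chosen reference channel (e.g. one near the nose,
    where there is little activity) from every channel."""
    return signals - signals[ref_idx]

def surface_laplacian(center, neighbors):
    """Sharpen a channel: 4 * its signal minus the sum of the signals
    of its 4 neighboring channels."""
    return 4 * center - sum(neighbors)

# Hypothetical instantaneous amplitudes; channel 1 is the resting reference.
signals = np.array([10.0, 4.0, 6.0, 5.0])
print(reference_channel(signals, 1))                   # channel 0 -> 10 - 4 = 6
print(surface_laplacian(10.0, [4.0, 6.0, 5.0, 3.0]))   # 40 - 18 = 22
```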


Jan 21
  • What is a temporal filter?
  • Chebyshev filter; low-pass, high-pass, band-pass, band-stop.
  • A temporal filter filters out high- and low-frequency outliers.
  • Default settings for the temporal filter are low cut = 29, high cut = 40. The motor-imagery-bci-2-classifier-trainer has it set to low cut = 8, high cut = 24.
  • Are alpha and beta waves within this frequency range? Let's ask Ty.
  • Alpha = 9-13 Hz, beta = 13-30 Hz.
  • Now it makes sense: they're filtering out all brain signals except alpha and beta.
  • Last year's modification made two temporal filter boxes, one for alpha and one for beta; they thus forked the extraction and improved on the scenario.
  • Last year's scenario is missing the stimulation-based epoching box.
  • Stimulation-based epoching cuts activity based on stimulation and time. This one, for example, cuts 4 seconds starting half a second after the stimulation begins. Since the recording is 5 seconds, we cut the half second at the start and finish, so all we get is one signal, either left or right; we don't know which.
  • The time-based epoching box also cuts off certain fragments of data, but based on time, not stimulation. So technically it can be used for the same thing if the time and stimulation are known.
  • That's probably why last year's group cut out stimulation-based epoching: they just used the time-based epoching box to cover both functions.
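
The temporal filter and time-based epoching above can be sketched with SciPy. This uses a Butterworth band-pass as a stand-in for the box's filter (the box also offers a Chebyshev variant), the 8-24 Hz cut-offs from motor-imagery-bci-2-classifier-trainer, and assumed values for the sampling rate and epoch/step lengths:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 512  # assumed sampling rate in Hz

def temporal_filter(signal, low_hz=8.0, high_hz=24.0, order=4):
    """Band-pass keeping roughly the alpha + beta range, discarding
    frequency content outside the cut-offs."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, signal)

def time_based_epoching(signal, epoch_s=1.0, step_s=0.5):
    """Cut the stream into fixed-length, possibly overlapping epochs."""
    epoch, step = int(epoch_s * FS), int(step_s * FS)
    return [signal[i:i + epoch] for i in range(0, len(signal) - epoch + 1, step)]

# 10 Hz (alpha) tone plus a 50 Hz component the filter should remove.
t = np.arange(4 * FS) / FS
raw = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)
filtered = temporal_filter(raw)
epochs = time_based_epoching(filtered)
print(len(epochs))   # 7 one-second epochs (0.5 s overlap) from 4 s of signal
```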