Tuesday, February 2, 2016

Team 4 Progress Report 1-31-16

During the Hackathon seminar on Friday, Sarah and I made a lot of progress. We completed the preprocessing and data-extraction part of our project by constructing a box scenario, using references from last year's model. Now we have to move on to the processing aspect of our project. In addition, we added to our handball log.

Jan 29
  • Why didn't they use a stimulation-based epoching box? It's not efficient to use all the data; stimulation-based epoching cuts it off into a convenient single signal block.
  • How would you know how to set the time intervals to minimize overlap in time-based epoching? (See the epoching sketch after this list.)
  • I think we should use a stimulation-based epoching box; it increases efficiency.
  • x = how many Hz?
  • The input signal is most likely amplitude.
  • x*x, then average, then log(x+1): this computes the wave power from the amplitude (see the band-power sketch after this list).
  • Timeout is an unstable box and we will not use it.
  • You want to be in an alert state of mind when you think left or right, so set the threshold in the beta band.
  • We need to figure out how to duplicate/share scenarios across computers.
  • We used the same settings for our time-based epoching box.
  • The feature aggregator converts the matrices into feature vectors. This is important because the extraction and training algorithms all operate on feature vectors (see the aggregation sketch after this list).
  • The Graz visualization box provides feedback during the experiment.
  • The online scenario doesn't divide it into two parts; that doesn't seem to make much of a difference.
  • Why do they use an Identity box to copy the original data back into the classifier trainer? We're not going to do that.
  • The difference between the online and offline versions is that the offline one can only be used with a prerecorded scenario, while the online one works with the Acquisition Client to receive raw data and visualize the end result back to the user.
  • So technically, the Graz visualization box can be substituted with an actual app that the data is fed into.
  • So motor-imagery-bci-4-replay is basically the same as the online one, but substitutes a pre-recorded file and replays it.
  • We're going to use that with last year's data and LDA specifications to see if it works.
  • OpenViBE Designer always opens with four scenarios.
  • We pasted the file from last year into the classifier processor box to see what it would do: nothing happened.
  • handball-replay.xml: JACKPOT: this allows us to replay the online recorded file and watch the corresponding feedback using openvibe-vr-demo-handball.
  • The classifier processor classifies the mental activity into two classes: left and right movements (a minimal LDA sketch follows this list).
  • The Button VRPN Server is used as multiple switches operating at once; each button can be set to become active or inactive at a given time. It tells the handball application which step of the experiment we are on and also gives signals to the user.
  • Classifier processor box: a generic box for classifying data (feature vectors) that works in conjunction with the classifier trainer box. Its role is to expose a generic interface to the rest of the BCI pipeline.
  • Now that we understand the preprocessing, we move on to the classifier trainer part, where we have to train the algorithm.
  • Adding to Gantt chart: LDA algorithm.
  • Adding to Gantt chart: watch the training-algorithm part of the video.
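
To make the epoching settings above concrete, here is a minimal sketch of what a time-based epoching box does, written in Python/NumPy as an illustration rather than OpenViBE's actual code; the function name, sampling rate, and shapes are all made up for the example. An interval shorter than the duration is exactly what produces overlapping epochs:

```python
import numpy as np

def time_based_epochs(signal, fs, duration, interval):
    """Slice a continuous (channels, samples) signal into epochs of
    `duration` seconds, starting a new epoch every `interval` seconds.
    interval < duration yields overlapping epochs."""
    length, step = int(duration * fs), int(interval * fs)
    starts = range(0, signal.shape[1] - length + 1, step)
    return [signal[:, s:s + length] for s in starts]

# Hypothetical 10 s, 2-channel recording at 128 Hz: 1 s epochs every 0.5 s
sig = np.random.randn(2, 10 * 128)
epochs = time_based_epochs(sig, fs=128, duration=1.0, interval=0.5)
print(len(epochs), epochs[0].shape)  # 19 epochs, each of shape (2, 128)
```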
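
Likewise, the x*x / average / log(x+1) chain from the notes is just per-channel band power. A NumPy sketch of the arithmetic (the epoch shape and sampling rate are made-up examples):

```python
import numpy as np

def band_power(epoch):
    """Square each sample (x*x), average over the epoch, then apply
    log(1 + x) to compress the dynamic range. `epoch` is a
    (channels, samples) array of band-pass-filtered signal."""
    return np.log1p(np.mean(epoch ** 2, axis=1))

epoch = np.random.randn(2, 128)   # 2 channels, 1 s at a hypothetical 128 Hz
print(band_power(epoch))          # one power value per channel
```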
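
The feature aggregation step can be pictured the same way: flatten whatever matrices come out of feature extraction and concatenate them into one vector per epoch, since the trainer consumes flat vectors. The band names and numbers below are invented placeholders:

```python
import numpy as np

# Hypothetical per-epoch features: band power per channel in two bands
mu_power   = np.array([0.42, 0.37])   # e.g. 8-12 Hz on channels C3/C4
beta_power = np.array([0.18, 0.25])   # e.g. 16-24 Hz on channels C3/C4

# Aggregation: flatten and concatenate into a single 1-D feature vector,
# which is what the classifier trainer expects as input
feature_vector = np.concatenate([mu_power.ravel(), beta_power.ravel()])
print(feature_vector)  # [0.42 0.37 0.18 0.25]
```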
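
Finally, a minimal two-class Fisher LDA on synthetic feature vectors, to show roughly what the classifier trainer learns (a projection and a threshold) and what the classifier processor then applies. This is the textbook formula on made-up data, not OpenViBE's implementation:

```python
import numpy as np

# Synthetic "left" and "right" feature vectors, 8 features each
rng = np.random.default_rng(0)
left  = rng.normal(0.0, 1.0, size=(50, 8))
right = rng.normal(1.0, 1.0, size=(50, 8))

# Fisher LDA for two classes: w = Sigma_w^-1 (mu_right - mu_left)
mu_l, mu_r = left.mean(axis=0), right.mean(axis=0)
cov = (np.cov(left, rowvar=False) + np.cov(right, rowvar=False)) / 2
w = np.linalg.solve(cov, mu_r - mu_l)
b = w @ (mu_l + mu_r) / 2   # threshold: projected midpoint of the means

def classify(x):
    """Label a feature vector by which side of the threshold it projects to."""
    return "right" if x @ w > b else "left"

print(classify(left[0]), classify(right[0]))
```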
