Demonstration of MEG data analysis using Brainstorm
Date Posted:
May 8, 2019
Date Recorded:
May 6, 2019
Speaker(s):
Dimitrios Pantazis
All Captioned Videos MEG Workshop
Description:
Dimitrios Pantazis, MIT - McGovern Institute for Brain Research
PRESENTER: So there are a lot of software packages for the analysis of MEG and EEG. I will discuss very quickly how we can use Brainstorm to analyze some data. Another very powerful package is MNE and MNE-Python, developed by [INAUDIBLE]. And two more, FieldTrip and SPM, are really popular tools as well. And I think, more or less, the majority of groups are using these four software packages to analyze their data.
And today I will discuss Brainstorm, as I mentioned. It is actually quite widely used. These are the numbers of user accounts registered on the website. And it doesn't mean that every single user is actually doing anything useful with the software. But some of them do, because they're actually publishing papers with the software. So that is really good.
Brainstorm is one of the popular tools. And I've been involved in the development since I was a PhD student in that group at USC with one of the principal investigators. And that's more or less why this is my software preference, because I'm most familiar with it.
Users worldwide are using this software, and it has very good support and a website, including tutorials. So what I will be discussing here is something very brief. But for those of you interested in learning more, there is a rich source of tutorials online for people to use.
The data that I will use are from a study that displayed these visual stimuli multiple times to participants. And today I will focus on these stimuli, so we can extract the contrast of faces versus objects, and hopefully see some fusiform face activity, as one would expect.
And let me switch to the software. To start, Brainstorm is basically a Matlab toolbox, a free open-source Matlab toolbox, and all you need to do is call it by typing brainstorm at the Matlab command line. I actually have a shortcut here to run it.
So I installed Brainstorm in my home, in my Documents directory. So if you go to that directory and type brainstorm, you can start Brainstorm. I did use a shortcut to do that. And OK. I have been discussing this and supposedly duplicating screens, but I don't see anything displayed over there. So we will try to resolve that.
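For reference, starting Brainstorm from the Matlab command line is just a matter of adding the installation folder to the path and calling the brainstorm function; the folder name below is only an example.

% Assuming Brainstorm was unzipped into ~/Documents/brainstorm3 (example path)
addpath('~/Documents/brainstorm3');   % make the toolbox visible to Matlab
brainstorm                            % start the full graphical interface
% For scripted analyses, Brainstorm can also be started without the GUI:
% brainstorm nogui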
I don't know why it didn't stay before, but we want to keep-- OK. So what I was mentioning before, without you being able to see it, and I didn't realize that, is that we are starting the interface now. You see an already completed data project here, but I will create a new one.
The interface consists of this left panel, showing a directory tree structure of the files in our project, some contextual panels here, and a process window down there. It's quite simple and optimized. And to start, we will make a new protocol. I will call it workshop demo. And yes, you can see everything.
We will use individual subject anatomy rather than a protocol default anatomy. And I will make this choice here because I want to have a single channel file per subject. The channel file is essentially the registration, because it contains the locations of the sensors with respect to the participant. And this was the same throughout our recording. So I'm making this selection here.
We will make a new protocol, which is empty right now. And what I will do first is select this button to display the anatomies in my project. Of course there are none. But I will start by making a new subject. And I'm going to call the new subject demo subject. It doesn't matter.
And now in the left panel you will see our first subject. We will begin by importing anatomy for this participant. It's very useful to have custom anatomy for your participants, because then you can localize on the cortex and have all of the localization power that MEG affords to display activity and properly localize it, rather than using a generic MRI.
So this is quite simple to do. By right clicking, you will see a lot of contextual menus. Let me note here that I am using the interface to present Brainstorm to you. But at the very end, I can show you how you can create pipelines which automate this procedure. So a lot of these things don't need to be done by clicking, but rather by running a script. And that's very useful when you have several participants.
So I will make the selection import anatomy folder. And I will point to this directory, which is actually a FreeSurfer data structure. So the MRI, the stack of MRI slices, was preprocessed with FreeSurfer, obtaining all the relevant information that FreeSurfer provides, including segmented surfaces, white matter and pial surfaces, labels for different brain areas, and so on.
So to do this, I select this volume and this folder. And the first question that I get is how many vertices I want to use for the cortical surface. The original FreeSurfer surfaces have hundreds of thousands of vertices. And since this is a Matlab interface, it would be very cumbersome and slow to include all of these vertices. Instead, we will downsample to the typical number of 15,000. And we will wait a little bit now, and soon afterwards the MRI will be presented in the MRI viewer.
This MRI viewer is incorporated in Brainstorm. And this interface allows us to select some fiducial points, which will be used to register the MEG data with the anatomy data. And we'll go back to the PowerPoint to display here what I mean. So this is the MRI viewer. And what we need to do is select these fiducials -- for those of you that have already collected MEG data, we used the 3D digitizer device to record the coordinates of these points, which were incorporated in the MEG data.
So now we need to do the same with the anatomy, to co-register the anatomy -- the MRI -- with the MEG data. And now I see what is happening whenever I do this. And I will not do that again. Full screen. I'm losing this duplicate option. So I will do that again and [INAUDIBLE] change this again.
Great. So what I will do now is use this interface to identify the nasion, which is about here. So I will set the coordinates of this point. And then the left pre-auricular point -- and again, as a reminder, I will be clicking on the top base of the tragus. This is the tragus structure. I'm going to identify this point in the MRI volume and click it.
So we are right about here. Set. And the same for the right one. This one takes a little bit of practice. But after you do it a few times, you get familiar with where to click. So this would be the tragus over here, and I want to click on the top base. I will also select three more points: the anterior commissure, the posterior commissure, and an inter-hemispheric point.
The purpose of these points is to align this subject with other subjects when you have a study with several subjects. I'm going the wrong way.
So this would be the anterior commissure. This is a white matter bundle that connects the two hemispheres, for those of you who are not familiar. And there is the posterior one, right here, again connecting the two hemispheres. And then any point between the two hemispheres -- it doesn't have to be an exact point.
And then after I have selected these six points, I can save them. And this will trigger the remaining importation of the anatomy into the Brainstorm software. And you will see progressively the messages importing the different surface files -- the pial surfaces, the white matter surfaces, left and right hemispheres, anatomy volumes, labels, and so on. You will see all of these things happening.
It would be faster on a desktop computer, but this is quite reasonable here. I think it will take a few more seconds to complete the importation. And this is quite automated, as we can see. All I had to do was select a single folder and these points, and then everything else is handled automatically by the software.
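As a rough sketch of how this same anatomy import looks when scripted -- the process name and option fields below follow the pattern of scripts produced by Brainstorm's script generator, and the exact field names and fiducial coordinates should be taken from such a generated script rather than from this example:

% Import a FreeSurfer anatomy folder for one subject (sketch; verify the options
% against a script generated with Brainstorm's "Generate .m script" option)
AnatDir = '/path/to/freesurfer/subject';        % example path
bst_process('CallProcess', 'process_import_anatomy', [], [], ...
    'subjectname', 'demo_subject', ...
    'mrifile',     {AnatDir, 'FreeSurfer'}, ...
    'nvertices',   15000);                       % downsampled cortex resolution
% The fiducials (NAS/LPA/RPA, AC/PC/IH) can also be passed as options, or set
% interactively in the MRI viewer as shown in the demo.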
So this is quite powerful. We will have the cortical surface, which we will use as the model of the source space. And we will also have atlas data imported, so we can identify different regions of interest in the cortex. All of the data segmented automatically by FreeSurfer will be imported here.
You can see how we also have the very large surfaces with 267,000 vertices, but also the downsampled surfaces here, which will be used for analysis. And again, this would be way faster on a desktop computer rather than this laptop here.
After the completion of this procedure, I will show you how to import the MEG data, which will be the files produced by our Triux MEG device, how to correct for eye blinks, how to import trials, and eventually how to construct contrasts of interest, which would be faces versus objects.
What you see here is a rendering of the surfaces, including the cortical surface that will eventually be used to map the MEG data onto the brain. So I'm going to close this. And I will move to this button here, which now allows me to work on functional data. The first button was for anatomical data.
So I will right click -- again, a lot of contextual menus -- and select review raw file, which is going to allow me to directly explore the file that was produced by the MEG device. And this would be it. Oh, let me point out that there are a lot of different data formats -- MEG, [INAUDIBLE], ECoG, and so on -- supported by this software.
So, making sure that we have the [INAUDIBLE], which is our device here, we select this file. And by opening this file, we will soon see-- it will ask me first how to read the event codes. Whenever we record data, we also send event codes whenever an interesting event happens in the timeline. Otherwise we would end up with a file of, let's say, one hour, and we would not know what is happening where.
So the event codes, indicating that an image was presented, are already included in the data in the event channel. So by selecting this one, the software reads the event codes -- or in other words, the timeline of events that were presented in this experiment. And soon after this, we will be able to see a rendering, again, of the scalp together with the helmet, to evaluate how reliable the registration between the two data sets is.
And you can see here the rendering of the helmet together with the scalp. Also, these points are points that we collected using the 3D digitizer during the data acquisition -- we collected not only the fiducial points, but also a cloud of points over the head surface. The reason for this, as you will see immediately here, is that we can now refine the registration, which originally relies only on these three points, which I may have selected correctly or incorrectly to some extent. We can refine it now by forcing all these points surrounding the head to actually lie exactly on top of the head surface.
To do this, I will accept the refine registration now option, and the software will move the head to align the scalp with these points as well as possible. And you can see now that these points are effectively touching the surface throughout the head, which is ideal. So this is really ideal registration. We also collected some points on the nose.
So this is really ideal. And we can see how close the head is to the helmet, which also indicates that we positioned the participant well, very close to the helmet. So we accept this. I'm closing this.
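As a side note, this whole review step -- pointing Brainstorm to the FIF file and accepting the registration refinement with the digitized head points -- has a scripted counterpart; a rough sketch, with option names that are approximate and best copied from a Brainstorm-generated script:

% Link the continuous FIF recording to the database (sketch)
RawFile = '/path/to/subject_run1.fif';      % example path
sRaw = bst_process('CallProcess', 'process_import_data_raw', [], [], ...
    'subjectname',    'demo_subject', ...
    'datafile',       {RawFile, 'FIF'}, ...
    'channelreplace', 1, ...   % replace the existing channel file
    'channelalign',   1);      % refine MEG/MRI registration with the head points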
And now we also have something called a link to raw file created here. If I double click on this file, Brainstorm will display the data we recorded, and the data we recorded are effectively 306 time series, which can be viewed in this way. Or they can be viewed-- and now we have these contextual panels here that I can use. We can also see the time series. And this is too much to display, so I can reduce the amount -- let's say displaying the left occipital sensors. So we can see several of the time series that we recorded, in different axes here.
I can also examine the timeline at different segments by clicking down here. You can see these dots indicating all the events that happened. And up here, you can see these event codes -- 52, 62, and so on -- indicating the different images that were displayed during the experiment.
The event codes are also displayed here. For example, the image coded with number one was first displayed at 127 milliseconds, at this time instance here. I will change the view back to my favorite, which is displaying all the time series overlaid in the same axes. And I will change the time window that we display here from 10 seconds, let's say, to 15 seconds, so we can display longer time periods.
And notice, you see some deflections here, and possibly at different time segments. So this is a characteristic deflection of an eye blink, and this is the most common artifact that we need to correct in MEG data. By right clicking anywhere here, I can view the topography -- also by pressing Control T, a very useful shortcut. And you will see what the blinks look like. In this case, this is a very typical eye blink topography, with red and blue separated in the frontal areas, and then some other patterns in the back not originating from eye blinks.
So this is the kind of activity that we want to remove. And I will show you quickly now how to remove it, by first identifying a sensor exposed to very strong eye blink artifacts that I can use as a reference. So this one here -- it displays the strongest signal.
And if I right click, I can actually see that this is the sensor MEG 1411. I can also go down here and, again using contextual menus, display the sensors or channels. And you will also see highlighted here the sensor that I selected above. So these panels communicate with one another, and with this figure, so I can display time series from different sensors, or I can select time series here and identify where the sensors are.
What is important here is that this sensor is very sensitive to eye blink artifacts, so I can actually use it to automatically detect these blink artifacts, even though I did not explicitly record an EOG channel, which would give me direct access to eye blinks.
So what I will do is select this option here, detect eye blinks, and then I will tell Brainstorm that I want to detect the eye blinks using this MEG 1411 sensor as a reference. So Brainstorm will look at the data, and whenever it finds large deflections in this sensor, it will mark them as blink events.
So I will run this. It will go through the entire file, identify the time instances with strong deflections, and mark them with an event called blink. And truthfully, this works really, really well, to the extent that we don't really need to record EOG data. So we can remove these artifacts in this way.
Interestingly, this is one of the very few cases I've ever seen where it marked the blink event as blink2 rather than blink. So it can happen. And of course -- you know, if it is going to happen, it will happen in demos. But in my experience, having seen so many participants, Brainstorm is nearly always successful in identifying the event as blink.
Notice here, I have blink, blink2, and 3 and 4 and so on. These are actually groupings of similar deflections. So it identifies deflections, groups them in some meaningful way, and almost always the first group is the reliable one, to the extent that I have automated that in my pipeline. In this case it's blink2, which is OK. So we'll keep that in mind.
And the next step, after having these events, is to instruct Brainstorm to perform a principal component analysis: select the data around these blink2 events, perform principal component analysis, find the principal components of the blinks, and then project them away from my data.
This is done by using signal-space projection (SSP) for eye blinks. It already has blink2 here, which will be my event of reference to project away. And if I run this, you will soon see how these blink artifacts disappear without really affecting the remaining time series.
So this is the interface with all the components from the principal component analysis. I select the first one, which explains 90% of the variance. And I can actually plot it using this option here. And you will soon see that this looks like a textbook eye blink topography. For the gradiometer sensors and the magnetometers, the topography is really more or less the same. And it's really exactly what I want to remove.
And you can already see that we don't have these artifacts anymore. By disabling this component, we can see how we have these strong artifacts. By enabling it, you can see how these components go away.
So I like what I see. I will save it, and this is how I remove eye blink artifacts. And again, this can be scripted, so we don't have to do this process again and again separately.
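As just mentioned, this blink cleaning can be scripted; a minimal sketch, assuming the reference sensor is MEG1411 and the detected events end up in a group called blink (as noted above, the group name can occasionally be blink2 instead), with option names that are approximate:

% Detect eye blinks from a frontal MEG sensor, then remove them with SSP (sketch)
sRaw = bst_process('CallProcess', 'process_evt_detect_eog', sRaw, [], ...
    'channelname', 'MEG1411', ...   % sensor used as the blink reference
    'timewindow',  [], ...          % process the whole file
    'eventname',   'blink');
sRaw = bst_process('CallProcess', 'process_ssp_eog', sRaw, [], ...
    'eventname',   'blink', ...     % use 'blink2' if that is the group created
    'sensortypes', 'MEG', ...
    'usessp',      1, ...
    'select',      1);              % keep the first (dominant) component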
Now, I have all these different codes, up to 92, because we used 92 stimuli. But I don't really want to use all of them -- that was the topic of another study. What I want to do is collect all the stimuli corresponding to faces, and some corresponding to objects, so I can construct a meaningful contrast. And I know that images of faces were coded from 13 up to 24.
So I will collectively select all of these codes. And what I want to do is give them a different name -- call them faces. I'll do that by going to this event option here and merging the groups of events. But before doing that, I want to keep my original events unaffected. So I will duplicate the existing ones and then merge them.
So I will duplicate these groups -- they are all here -- and then merge them under a different name called faces. So these are the responses to face stimuli. And I will do the same with some objects, coded from 49 up to 60. So I will select these ones, duplicate them, and then merge them into a new group called objects.
So now I have my objects and faces stimuli. What I want to do is import the data into Brainstorm. Until now, we were simply previewing the file in its native format, the FIF format. Brainstorm has very fast algorithms to read data on the fly from this custom format. But now we want to import them into Brainstorm as [INAUDIBLE] files, so we can analyze them eventually in our pipeline.
So from the file, I will select import in database. And I will select only these two events to import. The epochs that I will create span from 200 milliseconds before the presentation of the stimulus until 1,000 milliseconds after the presentation of the stimulus. And these would be my epochs. I will apply the SSP projectors that I designed before, which are the eye blink correction projectors. And I will also correct my data by removing the DC offset in the baseline period, from minus 200 until minus 1 millisecond. So I will force every sensor to be zero mean.
And import this data into Brainstorm. So now Brainstorm goes through the raw file, identifies the events of interest, objects and faces, cuts segments of the data to generate trials, and imports them into the database.
Unfortunately it doesn't write FIF files. No. But there may be other ways to do that by saving some structures and then importing the original data in some other structures. But no, it doesn't write FIF files.
AUDIENCE: [INAUDIBLE]
PRESENTER: Yes. You will often find that different software packages are good at different things, and eventually you may be combining things, sometimes in unconventional ways. It happens to all of us.
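Coming back to the import step: scripted, the epoching just described (faces and objects, minus 200 to 1,000 milliseconds, SSP applied, baseline DC removed) would look roughly like the following; the event names assume the merged groups created above, and the option fields are approximate:

% Import epochs around the 'faces' and 'objects' events (sketch)
sTrials = bst_process('CallProcess', 'process_import_data_event', sRaw, [], ...
    'subjectname', 'demo_subject', ...
    'eventname',   'faces, objects', ...
    'epochtime',   [-0.200, 1.000], ...   % seconds around each event
    'createcond',  1, ...                 % one folder per event type
    'usessp',      1, ...                 % apply the blink SSP projectors
    'baseline',    [-0.200, -0.001]);     % remove the DC offset over the baseline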
So I have this data. And if I expand, you will see now in this tree structure here the different trials. And it would not be very meaningful to plot individual trials, because you will see that they are very, very noisy. Instead, I want to average them first.
You will see that when I plot different windows, things rescale and get organized. To close all of the windows so I can proceed with my analysis, there is a shortcut here to close every single window over there. OK. So what I want to do is actually average the responses to faces and objects to create evoked responses, and I do that by dragging and dropping my data down into the process window, which will give me access to a menu with a lot of different process options.
So by selecting run, now, with this gear box, I will get access to so many different processing options -- artifacts and other things as well, preprocessing and so on. What I want to do is average files. And I want to average them by folder, because I have faces and objects in different folders and I want to average them separately. And that will be an arithmetic mean. That's it. I select run, and soon I will see everything averaged, and I will be able to display the averaged data.
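The scripted counterpart of this averaging step is a single process call; the numeric code for the "by folder" option below is an assumption and is best copied from a script generated by Brainstorm itself:

% Average the imported trials separately for each folder (faces, objects) -- sketch
sAvg = bst_process('CallProcess', 'process_average', sTrials, [], ...
    'avgtype',  3, ...   % average by folder (verify this code in a generated script)
    'avg_func', 1, ...   % arithmetic mean
    'weighted', 0);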
So by plotting the average trials, I can already see some very clear responses. And I can move in time by clicking anywhere in the white area. If you click on the time series, you actually select time series. And to display the topography, I can right click and say view topography, or Control T -- this is the shortcut that I always use. So by pressing Control T, I can display the topography down here.
And you will see that early on, the topographies are quite similar to one another. So faces and objects don't really generate very different early responses. But once I move forward in time, these responses eventually change patterns and become very, very distinct.
Around this time, 140 milliseconds, you see that they are very, very different, indicating some activity down here -- some possible dipolar activity down here, because you have outgoing magnetic field lines in red and incoming in blue, indicating the presence of some strong activity down here, the presence of a current dipole, as we call it, which is not observed in this other condition. Rather, we have a different activation profile down here.
So we do expect to see different activities. What I will do as well is select, in the process window, the Process2 tab, because I have two files to process now, the faces and the objects, and I want to construct the difference between them. So again, run. It runs, but it needs to close all the open windows to give me access to this. This is fine.
And then I will simply take the difference between faces and objects -- straightforward and fast. And I will now see the face responses, the object responses, and, as one may expect, also the difference, which indicates a very, very clear difference around this time. So the visual cortex was able to dissociate faces from the remaining objects at around this time, which agrees with the times reported in seminal articles and the seminal work from Nancy Kanwisher identifying face responses around this time, between 130 and 170 milliseconds.
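For reference, this difference is computed in the Process2 tab, with the faces average in list A and the objects average in list B; a scripted sketch, using hypothetical variable names for the two average files:

% Difference of averages: faces (A) minus objects (B) -- sketch
% sAvgFaces and sAvgObjects are the two average files from the previous step
sDiff = bst_process('CallProcess', 'process_diff_ab', sAvgFaces, sAvgObjects);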
And I also want to show you how to localize brain activity. And I will conclude there, because now we are running over time. To do that, I will select my face and object individual trials. And I select the individual trials because I want to use them to also construct the noise covariance matrix, which is important and necessary to construct an inverse matrix.
So, having all my data down there, I will again run a process, and this process will be from the sources menu: first, compute the head model. I will point you now to the excellent talk from [INAUDIBLE] about the estimation of forward and inverse models. And you may have thought that it's really complicated, with so many different methods and so on. It actually is. But the work has been done. So all you have to do here is click a button, and then you get all these things.
So we will compute the head model. It will be constructed with an overlapping spheres model, an analytical model, so the computations will be immediate. And the accuracy is really, really good. The next step will be to compute a noise covariance.
So the default settings are fine. We will compute the noise covariance block by block, to avoid any drift effects between trials. I'm not changing anything in the default parameters here. And the final step is to compute our inverse. And again, this is done by pressing a button.
And there is an option that I want to change: I want to use dynamic statistical parametric mapping (dSPM), which gives me a less biased solution, meaning that deeper sources will remain deep rather than become superficial. Minimum norm tends to shift all the activity close to the sensors, because those sources need the smallest power to have the biggest impact on the sensors.
So I don't want that. I want solutions that aren't biased, and I can achieve this with dSPM. So I accept this. And now my process pipeline will include three steps: compute the head model, compute the noise covariance, and then compute the sources.
I will click run, and this will initiate the processes. This part here is computing the forward matrix -- the mapping from sources to sensors. This is an analytical problem to solve, and it is practically done already. The head model is saved. Then we will see the estimation of the covariance matrix. And this is also done very quickly.
And finally, we will see the estimation of the inverse matrix, which is also done. So with that, I have solved the mapping from sensors to sources, and we are ready to see cortical maps. I will close all these individual trials, and I'll keep the averages.
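These three source-modeling steps each correspond to one process call when scripted. This is only a sketch; in particular, the full options structure of the inverse process has many fields (method, measure, source orientation) and is easiest to copy from a script generated by Brainstorm:

% 1) Head model: overlapping spheres on the cortex surface (sketch)
bst_process('CallProcess', 'process_headmodel', sTrials, [], ...
    'sourcespace', 1, ...   % cortex surface
    'meg',         3);      % overlapping spheres
% 2) Noise covariance from the pre-stimulus baseline, block by block (sketch)
bst_process('CallProcess', 'process_noisecov', sTrials, [], ...
    'baseline', [-0.200, -0.001], ...
    'dcoffset', 1);         % remove the DC offset block by block
% 3) Inverse solution; the detailed 'inverse' options (minimum norm, dSPM,
%    shared kernel) should be copied from a generated script.
bst_process('CallProcess', 'process_inverse_2018', sTrials, []);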
I will do one more step -- but first, note by the way that I have all these links here, which are the cortical sources. Each is a link, because the actual inverse matrix is stored here in a common area, but it is referenced here to be applied to every data trial to construct the source maps.
And this is data efficient, because there are 15,000 sources but only 306 sensors, so it is better to save sensor data rather than source data whenever possible. However, I will do one more step before displaying this, which is to take my sources, drag and drop them again into the process window, and run a spatial smoothing process, just to make things more visually appealing.
So this will also be done very fast. It will apply a small spatial smoothing on these sources, and then I will soon display the results, hopefully showing some fusiform face activity. We'll see that very soon.
So what I can do is actually double click on these maps to plot them on the right side. Or I have another option, which is to right click and select view sources from here, or press Control S. So I'm going to click here, and I'm going to press Control S here. They're all doing exactly the same thing. I like the Control S shortcut because it is very fast and convenient.
And I do expect some activity, first in early visual cortex. I can turn it this way, or press number 2, which brings me to a standard ventral view. If I press 1, 2, 3, you see the different visualization options; 2 is the one I want, to see ventral activity. And indeed, here, with the [INAUDIBLE] here, we see early visual cortex activity early on, which is shared between faces and objects.
But when I move to the time of the maximum difference, I actually see activity in the right hemisphere, which is shared between faces and the difference, but we don't have it in objects. So this is unique to faces.
And if I select this area here-- now I will go to this area in the interface to show you how I can select regions of interest. The panel is called scout, and I can define my own scouts, or I can call up some standard atlases from FreeSurfer. Remember, we imported the anatomy from FreeSurfer, so we have access to all of these things now for free.
So, selecting this atlas, it plots all the ROIs at the same time. I will disable this, and I will only request to plot the selected one. And which one is the selected one? I actually want the right fusiform in this case. We see the right fusiform, and I actually see that the activity exclusive to faces is located within the fusiform area.
This is a very strong activity, separating faces from objects. And beyond that, I can also plot the activity of this ROI. I will close this one now to provide a simpler view. And I will plot the time series of this ROI. I want the relative values -- I don't want the absolute value.
And I want to overlay the different files. By files, it means these different conditions. I could also overlay scouts if I were plotting multiple scouts -- plot them all in the same plot. But with this option here, I will plot the time series of the fusiform.
And you will see that there is a very, very strong deflection present only for the faces but not the objects. So in this particular participant, face responses were very obvious. They were right lateralized, as expected, because the face system produces its strongest responses in the right hemisphere.
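Extracting such a scout time series can also be scripted; a sketch assuming the FreeSurfer atlas imported with the anatomy, where the atlas and scout names ('Desikan-Killiany', 'fusiform R') and the variable sSources holding the source files are assumptions to check against the Scout panel:

% Extract the scout time series for the right fusiform region (sketch)
sScout = bst_process('CallProcess', 'process_extract_scout', sSources, [], ...
    'timewindow',  [], ...                                   % whole epoch
    'scouts',      {'Desikan-Killiany', {'fusiform R'}}, ... % atlas / ROI name
    'scoutfunc',   1, ...                                    % mean over vertices
    'isflip',      1, ...                                    % fix sign flips across vertices
    'concatenate', 0);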
And I will close this. And finally, I will just briefly show you how you can do a similar analysis with a pipeline. Let's say, for example, that we had this link to the raw file. What we can do is start adding the processing steps that we were doing separately before.
So let's say, for example, I want to import events into my database, and these events I have already actually typed here. These would be faces and objects, and the folder names will also be faces and objects. The epoch times would be these. I can say process the entire file. And after I import these events, which will create trials, I can, let's say, average my files, as we did before, by folder. So I can get my averages.
I can do other things as well -- for example, filter my data with a 40 Hertz low-pass filter. I can add any choices I want. I can remove artifacts and so on. I can basically keep adding processes here, and then eventually I can generate a script from that.
So by selecting this option, it asks me to save a Matlab file. I can keep the default name for now. And then, having a look at the file, you will see how the raw file is read, and then a process starts which is very well delineated in different steps, including first the importing steps. And you can see the epoch time here, from minus 0.2 to 1 second.
We can average the corresponding trials by folder, we can apply a low-pass filter, and so on and so forth. So we can construct scripts like this, which we can then include in for loops to go through all our subjects in the protocol and analyze the entire database of all our participants, without having to make all these interface choices multiple times. The interface is great for exploring and understanding your data in one, two, a few participants, but eventually you want to write scripts so you can avoid errors.
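Once such a script has been generated, wrapping it in a loop over subjects is plain Matlab; a minimal sketch, with hypothetical subject and file names:

% Run the same pipeline over all participants in the protocol (sketch)
SubjectNames = {'sub01', 'sub02', 'sub03'};                            % hypothetical names
RawFiles     = {'/data/sub01.fif', '/data/sub02.fif', '/data/sub03.fif'};
for iSubj = 1:numel(SubjectNames)
    sRaw = bst_process('CallProcess', 'process_import_data_raw', [], [], ...
        'subjectname',  SubjectNames{iSubj}, ...
        'datafile',     {RawFiles{iSubj}, 'FIF'}, ...
        'channelalign', 1);
    % ... followed by the same epoching, averaging, and source steps as above
end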
Clicking a lot of buttons is prone to errors. Instead, by scripting the analysis, you can eventually share your analysis pipelines with other groups, make sure you don't make mistakes, be able to re-run analyses as needed, and so on.
So that was a very quick view of Brainstorm. And for those of you interested in more complex analyses, I'm happy to discuss in the future and show you how to do different things. And of course, I also want to point you to the tutorials available online, because I only covered a very small aspect of what you can do.
And there are dedicated tutorials for very specific types of analysis that one may want to do. And of course I'm happy to help individuals as well. So thank you so much. We are only like 20 minutes late in the workshop. But I hope you enjoyed it as much as I did. I thank you for attending. And if you have any questions, I'm happy to answer.
[APPLAUSE]
[INAUDIBLE]
AUDIENCE: I was just wondering what would you say of the [INAUDIBLE].
PRESENTER: The advantages of using Brainstorm-- and I'm sorry, I couldn't hear you.
AUDIENCE: Brainstorm plus this SPM.
PRESENTER: Yes. So Brainstorm has an excellent interface. It's very, very easy to do all this. As I told you, it's very quick to learn and to do different things. SPM has a good connection with fMRI. Possibly, if you want to do common projects, it would be beneficial to combine both. They also have some ways of integrating fMRI solutions with MEG, biasing one solution toward the other.
FieldTrip, another major software package, has excellent time-frequency routines -- really, really excellent implementations of these. And MNE-Python is the only Python package. And because of Python, it also greatly benefits from all the decoding tools that are already written in Python, which we don't have as extensively in Matlab yet. So that's a very quick description of the benefits of the different software packages.
And while it may seem undesirable to have so many different options rather than one complete package, I still think it is beneficial, because we need different choices to avoid potential biases and flaws of a single implementation, and to correct each other as a community. It's good to have a few -- not chaos, but it's not good to have only one. So, with that said, I use some of them in different cases, but I mostly use Brainstorm, actually.
AUDIENCE: [INAUDIBLE]
PRESENTER: Yes, I mentioned that. So Brainstorm has a very unique interface; it's very easy to handle. Another thing that I really like in Brainstorm is that these inverse solutions are so easy -- you saw how fast it is. In FieldTrip, it's really complicated. They mostly base their analysis on beamformers.
And they're fine. It's just not my choice of analysis. And it's really hard to do other types of inverses, like this minimum norm inverse -- surprisingly hard. Maybe I'm speaking from my own experience, though. So while I like FieldTrip for time-frequency or spectral analysis, or even connectivity, I actually don't like it in other aspects, including the interface and the inverses.