27 - Surface Analysis: SUMA: Part 4 of 5
Date Posted:
February 12, 2019
Date Recorded:
May 31, 2018
Speaker(s):
Rick Reynolds, NIMH
All Captioned Videos AFNI Training Bootcamp
Description:
Rick Reynolds, NIMH
Related documents:
For more information and course materials, please visit the workshop website: http://cbmm.mit.edu/afni
We recommend viewing the videos at 1920 x 1080 (Full HD) resolution for the best experience. A lot of code and type is displayed that may be hard to read otherwise.
RICK REYNOLDS: For the most part, these things are benign. It's good to look at them and understand where they come from. Some warnings are OK, some are bad; that's why they're warnings. You can't necessarily say for sure whether they're OK. In this case, they are.
In this case, you can see, for example, we looked at that sum ideal plot. We have two ideal time series, one for visual, one for auditory. And we add them up, just to see the sum. Sometimes you find timing issues in that. And this is the command: the 3dTstat command used to add up those two columns from our no-censor X-matrix, columns 12 and 13, to make the sum. We only have two regressors of interest in this case.
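(For reference, a minimal sketch of that sum-ideal command; the file name and column indices are assumed from context:)

    # sum X-matrix columns 12 and 13 (the two regressors of interest)
    # at each time point, writing the result as a 1D time series
    3dTstat -sum -prefix sum_ideal.1D X.nocensor.xmat.1D'[12,13]'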
And this warning is from 3dTstat whining about that, because 3dTstat expects to see time series data that has a TR in it. And it's just saying, there's no TR; I don't know what the TR is. But that's OK. This is a 1D file that doesn't really have a TR associated with it. So that's OK.
These other things down here: no errts dataset. Shouldn't there be an errts dataset? I'll have to look at that. But this is gen_ss_review_scripts.py, the program that writes the @ss_review_driver and @ss_review_basic scripts.
So it looks at the datasets in your current directory, and it tries to create these driver scripts for you. So that's the program that's whining about no mask dataset. Yes, we don't have a mask dataset. We don't have a gcor dataset; we didn't create a volumetric one containing the average correlation of each voxel with the rest of its neighbors.
errts. It could-- perhaps it should be able to find one. So that's something I might have to fix. But not terribly important. It doesn't really use the errts. Oh, yes it does, for TSNR. What's that?
AUDIENCE: I'm wondering if that means that they [INAUDIBLE] for it but I don't understand the principle.
RICK REYNOLDS: errts is the error time series, the residuals from the linear regression. And if you were doing a resting state analysis-- you know, in resting state you often project the bad stuff out, so what you're left with is the good stuff. So the errts would actually be the good data output from a resting state pre-processing analysis.
So you notice, we're still getting a maximum F-stat, but some of those other pieces of information that we saw on Tuesday are not here. And that's related to the whining about the missing datasets. But otherwise, anyway, it's done.
So now I'm going to do the two terminal window thing, where I'll look at the script while I follow along with it. You can do the same thing if you so choose. Otherwise you can just look at the script up here and play with the data on your own laptop. cd into AFNI_data6/FT_analysis.
And when we ran that afni_proc command, it created proc.ft.surf, so that's what I'm going to look at now. And the analysis created the ft.surf.results directory; ft.surf is the subject ID.
So ft.surf.results is the results directory. And then we have the text output file; that's where all the stuff that was on the screen is stored. Most of you will not have this text output file, because you just ran tcsh proc.ft.surf.
So I'll just run less on proc.ft.surf and look at the script here while we goof around in the other terminal window. So you see the beginning of the script is all the same. And remember, it's going to do tshift, and then the align block, and then volreg. So aside from the tlrc block, this should start just like the other one.
So basically, all the beginning stuff-- nothing's special. It's just all the same commands as before. We have our 3dTcat, where we drop the first two time points from every run. Now we have 150 time points left. We go into the output directory, and now we count our outliers, looking for time points that have a lot of outliers across the brain, a high fraction.
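(As a reminder, those steps look something like the following sketch; the prefixes and input names are assumed:)

    # drop the first 2 TRs from each run
    3dTcat -prefix pb00.ft.surf.r01.tcat FT_epi_r1+orig'[2..$]'
    # compute the fraction of outlier voxels at each time point
    3dToutcount -automask -fraction -polort 3 -legendre \
                pb00.ft.surf.r01.tcat+orig > outcount.r01.1D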
We still create that $minoutrun and $minouttr variable pair, so we know which time point seems to have the least motion, according to the outlier count, or the outlier fraction. And so that's the time point we're going to use for registration later on. So those are just saved in a pair of variables. We do our 3dTshift stuff. And then we move on to some alignment blocks.
So before the alignment block, that's when we should extract our volume registration base, the EPI volume. So 3dbucket -prefix-- we'll call it vr_base_min_outlier. And again, we're using run $minoutrun and time point $minouttr.
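(A minimal sketch of that extraction; the tshift prefix is assumed:)

    # extract the min-outlier volume to use as the volreg base
    3dbucket -prefix vr_base_min_outlier \
             pb01.ft.surf.r$minoutrun.tshift+orig"[$minouttr]"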
These are going to be the exact same ones as before, because nothing in the script has changed so far. We're missing the tlrc block, but the tlrc block doesn't yet impact the EPI data. So at this point in time, the EPI data should be identical to before, unless there's some randomness involved, which there should not be for us. Some methods include randomness; these so far do not. I think the 3dQwarp operation includes some randomness, so if you run it twice, the results might not be exactly the same. FreeSurfer includes randomness: you run it twice, the results are not exactly the same.
So here's our volreg block. We do 3dvolreg. We do the cat_matvec [INAUDIBLE], except we're not combining a transformation to standard space. We're just staying in +orig space. We're aligning the EPI to the anatomical dataset, and the EPI to the EPI base. So the EPI data will be aligned together and to the T1, but not in standard space. And here's where we apply the transformation.
And that's great. So that's just the same as before, except for the standard space stuff. Any questions up until here? So is that clear? The EPI data now will be slightly different from before, because we haven't gone to standard space. But other than that transformation, it's identical.
What about the volume registration parameters? Same or different? These are the same. Yeah. These are the same because, again, the lack of the tlrc step has not yet impacted the EPI data. And we're just running 3dvolreg like we did on Tuesday to get the transformation parameters. So since there is no randomization involved in this, then unless there are some bugs in the code that produced inconsistencies, these will be identical to before.
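(A sketch of the per-run registration command; the option values are assumed from a typical afni_proc script:)

    # register each run to the min-outlier base, saving the
    # motion parameters to dfile.r01.1D
    3dvolreg -verbose -zpad 1 -cubic \
             -base vr_base_min_outlier+orig \
             -1Dfile dfile.r01.1D \
             -prefix pb02.ft.surf.r01.volreg \
             pb01.ft.surf.r01.tshift+orig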
After the volreg block, finally now we come down to the surf block. So in my other terminal, I'm just going to cd into the ft.surf.results directory. And let's just run ls there and see what we have.
So you notice, we have the exact same files as Tuesday: pb01 and so on, except they're named ft.surf instead of FT. But the tcat, tshift, and volreg+orig files should all be identical. After that, suddenly now we have something .surf.niml.dset. So now the files will change. And that's because in our script, we're finally coming to our surf processing block. And now we will map our volumetric data to surface space.
So how do we do that? Now we have a variable here that says our surface datasets are located under FT_analysis/FT/SUMA. That's where we looked earlier, right? That has all the GIFTI files. That has the FT_SurfVol.nii file. So those are all in a SUMA directory.
So now this is where we run @SUMA_AlignToExperiment. And this is where we align the surface T1 to our current experimental T1. And that affine transformation will apply to all the surfaces. That transformation will be stored in the resulting AFNI dataset, and it will be applied internally by the SUMA programs.
So this step right here is simply going to create a new anatomical dataset-- an anatomical surface volume that is in registration with the current T1, the experimental T1 volume. So when we run SUMA commands, now we'll use this as our surface volume. And if we look over here at the capital FT stuff, that's what's created: ft.surf_SurfVol_Alnd_Exp. That goofy syntax, if you just read it, says aligned to experiment: that's the capital A-L-N-D underscore E-X-P. So this is the surface volume aligned to the experimental data. And that's what we'll use in our SUMA-based commands.
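(A hedged sketch of that alignment step; the exact options in the proc script may differ:)

    # align the SUMA surface anatomy to the experimental anatomy,
    # storing the affine transform in the output's header
    @SUMA_AlignToExperiment -exp_anat FT_anat+orig \
                            -surf_anat FT/SUMA/FT_SurfVol.nii \
                            -prefix ft.surf_SurfVol_Alnd_Exp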
So now that we have a T1 for the surfaces, now let's actually map our volumetric EPI data to the surfaces. By doing that registration, now we can actually do the mapping. Before that, we can't. But now our surfaces are effectively on top of our current data. And our current data is all aligned together: the EPI to the anat, and the EPI together.
So here, for each hemisphere and for each run, we use 3dVol2Surf with a specification file. The spec file is that std.60.FT, per-hemisphere-- left hemisphere, right hemisphere-- spec file. So it sees all the surfaces, all those GIFTI surfaces that are mentioned in the spec file. The surface volume is now the one that we just created, this SurfVol aligned to experiment.
So again, that has that affine transformation in it. So that brings the surfaces to this space, this domain. And then we're going to map everything from the smooth white matter surface-- that'll be the first surface-- out to the pial. So for each node pair, from the smooth white matter out to the pial, we're going to see what voxels are intersected in the volume, and do a weighted-average mapping in this case.
That's done by saying we're going to map across nodes, and we're going to break each node segment into 10 points-- actually 9 segments defined by 10 points. And we're going to take the average across them, which is effectively a weighted average for how many of those 10 points are in each voxel. And we're going to get our data from the strangely named-- I'm sorry for that-- grid parent dataset: pb02, subject ID, for each run, the .volreg. So we're going to take our volume-registered EPI data and map that to the surface.
And the output is going to be in the AFNI-based NIML format-- we could be using [INAUDIBLE] here; blame us for laziness, if you will. But pb03, for each hemisphere, for each run, .surf.niml.dset. And so back when we typed ls there, you see the pb03s; pb03 is where the .surf.niml.dsets begin.
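(A sketch of that mapping; the file names follow the pattern just described and are assumed:)

    # map volume-registered EPI onto the standard-mesh surface,
    # averaging 10 points per node segment from smoothwm to pial
    3dVol2Surf -spec FT/SUMA/std.60.FT_lh.spec \
               -sv ft.surf_SurfVol_Alnd_Exp+orig \
               -surf_A smoothwm -surf_B pial \
               -f_steps 10 -f_index nodes -map_func ave \
               -grid_parent pb02.ft.surf.r01.volreg+orig \
               -out_niml pb03.ft.surf.lh.r01.surf.niml.dset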
So that's actually the surface-- now, data mapped onto the surface. Let's look. So let's run SUMA here. I'll type ls, just to show you there is a run_suma script here. So the proc script created a little run_suma script. And again, these are just little commands, nothing you couldn't possibly type on your own. But it's just easier if you put this in a little script.
So suma: we give it a spec file. It's up one directory, but this uses the full path, so you can really run it from anywhere. The spec file is under FT/SUMA: std.60.FT_lh.spec, right? So that's the same spec file we were talking about.
But it's still in the SUMA directory. It's not in this directory; it's back in the SUMA directory. But the SurfVol, the -sv dataset, is the one we just created above-- in the surf block, after volreg.
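(So the run script presumably contains something like the following; the relative path is assumed:)

    # view the standard-mesh surfaces on the aligned surface volume
    suma -spec ../FT/SUMA/std.60.FT_lh.spec \
         -sv ft.surf_SurfVol_Alnd_Exp+orig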
So let's just run that: tcsh run_suma, from the results directory, ft.surf.results. So what are we looking at? We're looking at one of the surfaces in the spec file. Is this any different from before? No, we're not looking at data mapped to the surface yet. This is just an actual surface, with the colorization based on the pattern. We can hit the period key to scroll through all the surfaces that are in the spec file, just like before. We're not looking at data yet.
So let's-- oops. So let's stop it. So let's look at data. Where shall we look? How about on the-- oh, whatever. I know, how about the slightly inflated surface. So I'll either hit View and click on Object Controller, or we can hit Ctrl+s. Note, Ctrl+s doesn't match Object Controller; this used to be the surface controller. So that's why it was Ctrl+s.
But then-- now we can control volumes, graphs, matrices. We can show many more objects than just surfaces. So it's no longer just a surface controller. So let's open that up, and let's load one of the surface data sets that we just created.
So does everyone have this open? Trouble? Just raise your hand if you have a question. Maybe Daniel or Gong can leap around. So let's hit Load Dset in the bottom left, the lower left corner here. And the dataset we can select is the first one, the first .surf.niml.dset in the list.

So I clicked on Load Dset in the lower left corner, and then we get this window. And we don't have to really modify anything right now. We can just select the first pb03 dataset and hit Open, or Enter.
Glorious. So what's the colorization here? What are we seeing? What is red versus yellow versus green here? These are just the MRI intensities in the EPI time series datasets, at volume index-- or now, surface index-- number zero, for the left hemisphere, for the volume-registered data.
So red is just probably-- I don't know why it's red. You know, it's just brighter in the EPI data. In some cases, the subject could be closer to the gradient, closer to the coil, and it'll get brighter because of that. Whatever. The green, why is it green? Green's a lower intensity. What's that?
AUDIENCE: This is probably one of them [INAUDIBLE].
RICK REYNOLDS: Yeah, this is actually signal drop out. So remember, in the EPI data, it's more clear in the volume, but less clear on the surface now. Remember, surfaces look fantastic. Whatever you dump on them, it looks great. But you have to keep in mind, this is coming from the volume, and then in the lower part of the volume, you have signal drop out in the temporal lobes, for example.
And what about-- what about this down here? It's like nothing there. Why is that? Yeah. Exactly. There is nothing there. We don't have data there. We don't have full coverage. The EPI volumes don't cover the entire surface. So the lower part of the brain, we're not capturing.
Or maybe it's even at an angle, such that we're missing a whole angular part of the brain. So we'll see all that here. But you want to understand where it's coming from, so it's not a strange artifact. So you want to look at these things and be sure you know why you're seeing what you're seeing.
AUDIENCE: I'm only getting a subset of possible surfaces. Is it-- is that determined by the script?
RICK REYNOLDS: You're getting fewer than I am? I have a white, smooth white, pial, inflated. Yeah.
AUDIENCE: But what if there's-- there were other [INAUDIBLE] or others [INAUDIBLE].
RICK REYNOLDS: We may be--
AUDIENCE: Mine did not have that.
RICK REYNOLDS: So if we look at that spec file, we have the white matter, smooth white matter, pial, inflated, and then another white and inflated. So that's the contents of this spec file. What if you wanted different surfaces in here? You can just put them in here. So you have a whole list of surfaces in the SUMA directory. You can choose what to look at, just by dumping extra sections in here.
AUDIENCE: [INAUDIBLE]
RICK REYNOLDS: Yeah. Remember, this SUMA directory itself is substantially reduced from the original SUMA directory. So we wanted to keep it a little lighter for the class.
So what are we supposed to be-- what are we supposed to be looking at now really? This is time series data, right? I want to see a time series. So let's click some location in the brain. If I right click-- so again, this is where the three blind mice come in. I guess you've been suffering through this with Daniel, so hopefully you know how to right click now.
So let's just right click somewhere. And how do you see a graph window? Since you've memorized that-- what do we call it? --SUMA keystrokes text file, you know you can hit a 'g' now. So let's hit 'g'. And there we go, there's a time series.
Now let's be wild. I'm going to-- I'm going to drag my-- drag the mouse around while I right click. That's actually kind of quite cool. So you can go surfing around and look for your happy data locations. That looks beautiful, right? That's clearly something that's happening in response to our stimuli.
How do we all go look at the same location? Well, let me find a happier location again. Let's leave it. I don't want a huge spike. We want a medium. OK, this is perfect. Good enough. I am on node 29937.
So it's OK for you to select-- like this node box here, if you can even read that: highlight the node box and type in 29937. If you do that, you're going to go to the same location on the surface. If you want to.
So you can still keep in correspondence. If you want to, you can even hit freeze on this window here. I like this node for some reason. I hit freeze there. It opens a new window. Now if I move around with right clicking again, I'm all over the place. But my frozen window is still holding the old time series. Now I have a new window to look at the new time series.
And you notice, the frozen window-- well, each of these windows, they have the node index on there: 29937 up here. So you know where it came from. There's still a spike in here. So it didn't go [INAUDIBLE] surface. Why would it? That spike was in the EPI data; it was in the gray matter EPI data. And so it's here too. We would still like to censor that out.
The next step was to blur. We blur on the surface. So let's just briefly look at the blurred results. I'm going to jump back to that node, just so we stay there: 29937. And I'll go back and hit Load Dset, but I'm going to load that first run of the blurred results. So right there: pb04.ft.surf, left hemisphere, run 1, .blur.
And if I open that, doesn't really look any different. This is a slightly confusing aspect. This graph window here isn't showing the blurred result. It's showing the previous result. I think we have to close the graph window and open it with-- now that we're looking at-- and I'll go back here and hit G.
Now if you notice, these two windows are both still on node 29937, but on the top I have this .surf.niml.dset; on the bottom, it's the .blur.niml.dset. So this is from the blur processing block. So that's a little tidbit you have to keep in mind when you're looking at a graph window: if you open a new dataset, you need to hit the 'g' again. Actually, I don't know if I have to close it. I should try that. I might not have to close it, maybe just hit the 'g'. We'll see. Maybe I'll test that next time.
So now it's blurred. And you notice, to some degree, if you look at these first couple of humps here, they look better. They look smoother. So the blurring operation in this case, it looks like it kept the signal more or less as it is, but reduced the error. The error blurs toward 0, as we might hope.
So we would hope that the blur operation does a better cleaning job on the signal in the surface domain, as opposed to the volume domain. Hopefully.
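(The blur on the surface is done with SurfSmooth; a hedged sketch, with the option values assumed from a typical afni_proc surface script:)

    # blur each run's surface data to a target smoothness of 6 mm FWHM
    SurfSmooth -spec FT/SUMA/std.60.FT_lh.spec \
               -surf_A smoothwm -met HEAT_07 -target_fwhm 6.0 \
               -input  pb03.ft.surf.lh.r01.surf.niml.dset \
               -output pb04.ft.surf.lh.r01.blur.niml.dset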
AUDIENCE: If you-- can you type in a node to do that?
RICK REYNOLDS: Yep. Yeah. Whatever node you want, you can just type it there. So you can type in the 29937 there and jump to the same one. So going back to the script: we've blurred our surfaces. Now we scale the data, just like before. This is exactly the same operation. Compute the mean, and then we take our unscaled data, divide it by the mean, and multiply by 100. And then the new mean is 100.
We also do these little niceties: we prevent the values from exceeding 200, and we prevent any negatives from creeping in on us. But otherwise, it's just that same scaling. If the data values are fluctuating around 100, a brain location should never hit 200. You don't get 100% signal change from a BOLD effect in the brain, right?
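(A sketch of that scale step on the surface data; the dataset names are assumed:)

    # node-wise mean of the run (3dTstat's default statistic)
    3dTstat -prefix rm.mean.lh.r01.niml.dset \
            pb04.ft.surf.lh.r01.blur.niml.dset
    # scale to mean 100, capped at 200, zeroed where data or mean <= 0
    3dcalc -a pb04.ft.surf.lh.r01.blur.niml.dset \
           -b rm.mean.lh.r01.niml.dset \
           -expr 'min(200, a/b*100)*step(a)*step(b)' \
           -prefix pb05.ft.surf.lh.r01.scale.niml.dset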
So we can load one of those datasets. I'll load the first scale dataset: pb05.ft.surf, left hemisphere, run 1, .scale this time, .niml.dset. And I'll open that. And it's green. Fantastic. So even worse than in the volume, there's basically no contrast to look at here.
But all right, let's see what happens: if I hit 'g' here, do I have to actually close it or not? If I hit 'g'-- oh, I got a new window. Woo-hoo. So you don't have to close the old one. So now we can compare: on the left we have the scaled time series; on the right we have just the blurred dataset. How do they look?
They look the same, right? It's just a scaling of them. And in the NIML dset format, these are no longer scaled short integers, or just short integers. So I think these are actually stored as floats, though the binary floats may be converted to a text format in the NIML file. Actually, that depends. But anyway, they're floats. So in this case you don't even see tiny truncation artifacts; they're just identical.
Except on the right, I see the values are around 1,200. On the left, they're around 100. So essentially you're dividing every value by 12 or so. So now we get to the big finish, the linear regression. So now how does this compare between surface and volume?
First of all, we take our dfile for all runs, demean it, take the derivative, and create a censor file with a censor limit of 0.3 millimeters-ish. How does this compare to volume space? Same or different? What are we actually playing with here? We're playing with this. Same. This dfile comes from the volreg block, which happened-- that was the last step before going to surface space. These are the motion parameters from running 3dvolreg.
These should be identical to before. If we plot them on the side-- 1dplot, with -sepscl if you like (you don't have to use -sepscl), on dfile_rall.1D-- there you go. These are identical to the ones we saw on Tuesday. And therefore, what about the censoring? Identical.
Well, I won't show the censor file. Let's show the enorm time series. This is the exact same enorm time series, and the censoring is based on where this crosses 0.3, right? So this is the same enorm time series, so the censoring will be identical. Given that, how does our 3dDeconvolve model compare to Tuesday's?
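(The censoring itself is generated from the motion parameters with 1d_tool.py; a sketch, with the run count and output prefix assumed:)

    # censor time points where the motion enorm exceeds 0.3 mm
    1d_tool.py -infile dfile_rall.1D -set_nruns 3 \
               -censor_motion 0.3 motion_ft.surf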
What's in the model? [INAUDIBLE] cubic polynomials to model each run, and eight time series: we have our two stimuli of interest, and we have our six motion parameters. So how does this compare to Tuesday? Same. That's right. This is the exact same regression as Tuesday.
Except now the data is sitting in the surface domain. So the whole model is identical. We should see all the same numbers, in the same ranges, give or take, but the results may be similar or different-- hopefully a little better in some way. That's why we would do something on the surface.
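(For orientation, a hedged sketch of what that regression looks like on the surface data; the file names, timing files, and polynomial order are assumed:)

    # same model as the volume analysis, run on .niml.dset inputs
    3dDeconvolve \
        -input pb05.ft.surf.lh.r01.scale.niml.dset \
               pb05.ft.surf.lh.r02.scale.niml.dset \
               pb05.ft.surf.lh.r03.scale.niml.dset \
        -censor censor_ft.surf_combined_2.1D \
        -polort 3 -num_stimts 8 \
        -stim_times 1 stimuli/AV1_vis.txt 'BLOCK(20,1)' -stim_label 1 vis \
        -stim_times 2 stimuli/AV2_aud.txt 'BLOCK(20,1)' -stim_label 2 aud \
        -stim_file 3 motion_demean.1D'[0]' -stim_base 3 -stim_label 3 roll \
        -stim_file 4 motion_demean.1D'[1]' -stim_base 4 -stim_label 4 pitch \
        -stim_file 5 motion_demean.1D'[2]' -stim_base 5 -stim_label 5 yaw \
        -stim_file 6 motion_demean.1D'[3]' -stim_base 6 -stim_label 6 dS \
        -stim_file 7 motion_demean.1D'[4]' -stim_base 7 -stim_label 7 dL \
        -stim_file 8 motion_demean.1D'[5]' -stim_base 8 -stim_label 8 dP \
        -fout -tout \
        -bucket stats.ft.surf.lh.niml.dset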
So let's just briefly look at this. So now I'm going to overlay a dset. I will go to the bottom and get that stats dataset. So Load Dset, and at the very bottom we have stats.ft.surf.lh. And note the filter on the top here, where we're choosing what datasets to even consider: it's showing left hemisphere files that have lh and end in dset.
So if you wanted to look at other things for some reason, you may have to manipulate this wildcard pattern; that wildcard pattern defines what we see in the box. So I'll open this. And now, let's just look at something. I'm going to set the intensity-- so for the intensity, this is similar to the volume, but there's a lot more control in SUMA.
The intensity is the color, which corresponds to the [INAUDIBLE] volume. So here, let's look at the auditory stuff. So let's set the overlay, the colorization, to be 4, index 4. And note, just like in the volume, just like in AFNI-- see, this is a selector box, a dropdown with one of the little rectangles on the right. That's one of the cases where you can right click on the text to the left of the box and get a different type of menu, if that's better for some reason, if you have a lot of indices to work through.
So now we can choose the audio beta weight here. And then I could, if I wanted, right click in this case and get the t-stat. So I and T-- that's intensity and threshold-- are indices 4 and 5. The B there is brightness; that actually controls how bright the colorization is. I'm not going to touch it; I'm just going to leave that.
If you're going to use SUMA much, go through the whole SUMA PDF with your data, and get used to what you can do in this GUI. This GUI does a lot of stuff. We're not covering it here, and you're not going to remember it much anyway. But you can overlay all sorts of surface datasets on top of each other with different opacities-- the ordering matters then. You can say, I don't want to look at any one but this one. You can do all sorts of stuff. It can be very confusing if you want it to be.
So that only really matters if you're going to play around with this. So let's not-- yes?
AUDIENCE: [INAUDIBLE], right?
RICK REYNOLDS: Right, AFNI will not mix the colorization for multiple overlays. But in SUMA you can do a whole stack of them.
AUDIENCE: But it can happen [INAUDIBLE], right, at Switch.
RICK REYNOLDS: Yeah. Just like here, if we hit Switch Dset in the lower left corner, these are all the data sets that we have in SUMA. So we can choose any of them to be the one we're driving in this data set mapping display over here.
Yeah, and if you want to imagine getting complicated, hopefully Daniel will dazzle you tomorrow. You can do a lot with this stuff. So anyway, we will be kind of gentle today. We will be.
OK, so let's set the scale to something more useful. So the beta weights-- it's still holding the scale from the-- probably from the full F-stat, so it's coloring from negative 778 to positive 778. So let's change this to 1.5. So I'm going to type in a 1.5 here. And then it handles the min and max from negative to positive; it just automatically makes it symmetric. And that's actually because this Sym I down here, symmetric intensity, is toggled on right now.
And so, lo and behold, there's the data on the surface. Let's take a break and come back, and I'll show you this data in MNI space. We're in single-subject space, but remember, this is also sort of a group space; we can use MNI coordinates. So I will try to make those things a little more clear when we come back.