35 - Advanced Visualization with AFNI & SUMA - Part 2 of 2
Date Posted:
February 15, 2019
Date Recorded:
June 1, 2018
Speaker(s):
Daniel Glen, NIMH
Related documents:
For more information and course materials, please visit the workshop website: http://cbmm.mit.edu/afni
We recommend viewing the videos at 1920 x 1080 (Full HD) resolution for the best experience. A lot of code and text is displayed that may be hard to read otherwise.
DANIEL GLEN: Let's start and look at the demo for the DTI. Go back to your FATCAT demo directory and run tcsh Do_09_VISdti_SUMA_visual_ex3. Use tab completion as you go to make it easier on you.
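For reference, the command being typed is along these lines (the demo directory name and the script's .tcsh suffix are assumptions; tab completion will fill in whatever your copy uses):

  cd FATCAT_DEMO
  tcsh Do_09_VISdti_SUMA_visual_ex3.tcsh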
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: All right.
OK. So it tells us to position the windows on the screen. And let's move those around as we want. And we can tell there's something going on, because we can see color coming out through the-- behind the brain there. So there's something interesting happening. So let's-- once you've put everything in the places you like, hit enter-- I hit OK on the dialog pop up there.
So we're in the FATCAT demo. And we're doing Do_09. We're doing the last one, exercise 3 from Do_09.
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: Yes. Yes. It's subject native space. And we have the white matter surfaces that have been moved into that native space. This will give us a tutorial as we go. It will give us instructions-- say, do this and then do that. So we'll follow its instructions.
So here, it says to-- where is it-- this will be about InstaTract and InstaCorr in a single subject. It's going to create a masking sphere, and only tracts going through it are displayed. It says to right-click a point on the tracts and open the controller, and then click on Masks twice.
So I clicked on the tract, and now I'm going to double-click on Masks. So I have the mask controller set up here. So just go through and follow the instructions. It creates a masking sphere, and only tracts going through it are displayed. To move the sphere, right-double-click on it, then select any location on the tracts. So let's do that. So right-double-click.
When you see the mesh on it, that's when it's activated. It's in editing mode. And then we can have it talk to AFNI by hitting the t key.
OK. So we can see that AFNI has the surfaces. And it also has this circle on the slices. So that's the same as our sphere for the masks. Let's continue on with this. And you can bring the surfaces back in SUMA by using the left and right bracket keys.
OK. So this has already got InstaCorr set up. So we should be able to run InstaCorr as it is.
We probably have to reduce the threshold.
Oh. It hasn't been set up. So let's set it up here. For the first one, just use the error time series, a blur of 4 millimeters, and set the seed radius to 6 millimeters. So this is what the instructions say to put into the InstaCorr setup. So we'll do that, and do Setup+Keep.
And now, as we use InstaCorr in AFNI, it will send that to SUMA. So we're able to see the correlation-- both in the volume and on the surface at the same time, moving at the same time. We're also seeing the tracts that go through that. So let's give a little extra room for this.
So here, we can change our seed points with Control-Shift-drag. And we're moving the correlation and the mask at the same time. So there's a lot going on at once-- a lot of communication back and forth; the coloring in SUMA is coming from AFNI for that correlation. So we're seeing correlation.
We're seeing the fibers that connect through that point, too. So it's a pretty amazing interface for all this-- a pretty interesting part of AFNI and SUMA's interaction. And you can also select somewhere on the surface here. You can click over on the surface, and you'll see you get this dotted shape. I don't know if you can make that out here.
The dotted shape is the doppelganger reflection of the mask. So we're seeing the mask, but because the hemispheres have been pried apart, we can move along the surface, too.
So here, we're seeing instant correlation and tracking at the same time. We have AFNI open; it's doing InstaCorr there-- sending the correlation and the tract mask seed location. And we can do that either way-- controlling the location in AFNI or in SUMA.
Any questions about that? Let me continue on in that. Not sure if there's anything that-- I'm going to close that and continue on to some other demos.
OK. So here's something else we can do. If you've saved clusters with the Clusterize plug-in or the 3dClusterize command, or 3dclust-- something like that-- you can make surfaces out of them and see them in SUMA. And so I have done that. I think you have this in your handouts, right? The advanced vis notes?
I think that's there. I think I may have done this already, so close that. And let me see if I have some of those GIfTI files. I do. So IsoSurface will produce a set of GIfTI files, all starting with clust_test here, based on the clusters that I've saved out of my Clusterize plug-in.
And then I'm going to call SUMA to look at these. And the format of this is-- we're calling SUMA with -onestate. We're not using a spec file here; we're using just the names of the data sets-- a series of GIfTI data sets, one GIfTI data set for every cluster.
And we're going to show it together with some volume. I'll use the skull-stripped volume here, an anatomical reference volume, so that if I do want to talk to AFNI, it will know what coordinates to use. So let's just do that quickly. I'm going to copy and paste that command into my terminal.
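A sketch of those two steps (the dataset names here are hypothetical, and IsoSurface's ROI options are as I recall them-- check IsoSurface -help):

  # one GIfTI surface per cluster value in the saved cluster mask
  IsoSurface -input Clust_mask+orig. -isorois+dsets -o_gii clust_test
  # view all the cluster surfaces in one state, over an anatomical volume
  suma -onestate -i clust_test*.gii -vol anat_strip+orig.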
So here are the same kinds of clusters we've been looking at during the week for a visual stimulus and for that experiment we've been seeing. And this is what it looks like in three dimensions. And it's another way to visualize your data. So here-- I find it really useful to be able to see the whole extent. When you see it in a 2D slice, you don't really get a great feel for how far it goes and the shape of it.
So this is-- I find this very useful. We can click on the volume and control that. So if I want to see this within the context of the whole head, I can change the-- put the rendered volume on. I double clicked that by accident. So now it's poking through. I can change the transparency.
AUDIENCE: [INAUDIBLE] in SUMA, right?
DANIEL GLEN: This rend-- I created the surfaces with IsoSurface-- that one line there. So all I did was call IsoSurface and tell it to give each one a different color and make a different surface for each one of those. We've done the same kind of thing for ECoG data, where each CT cluster is a separate electrode.
And there we put it all into one data set, with slightly different options to merge the ROIs-- to make one data set of a lot of surfaces together. And so that's another handy way to look at data. So if you have it in the volume, you can look at it in SUMA in some way, either as a new surface or as a volume-- or both at the same time.
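For that merged variant, the option is, as I recall, -mergerois (the file names here are hypothetical):

  # combine all ROI surfaces into a single output data set
  IsoSurface -input electrode_clusters+orig. -isorois -mergerois -o_gii all_electrodes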
AUDIENCE: So you [INAUDIBLE]?
DANIEL GLEN: That's right. So I'm going to do that-- we can do that together for an atlas in a second.
AUDIENCE: And the [INAUDIBLE]?
DANIEL GLEN: I'm sorry. I couldn't hear you.
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: It can be any parcellation, any segmentation. This is a segmentation itself-- really, clusters with a different number for each region. So IsoSurface will look through, see what has the same number, and make a surface out of that.
AUDIENCE: [INAUDIBLE] we can just apply IsoSurface and [INAUDIBLE].
DANIEL GLEN: It will, yes.
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: It's not at the level of FreeSurfer at doing this. It does a rendering of it. It's maybe a bit better than our render plug-in, but it's not smoothing out the data. There is a smooth option, but it's not doing this in a very rigorous way. That said, if your data is very well segmented, it does a pretty good job.
Each of these-- well, maybe I'll show this on the atlas a little better. Yeah. All of these things are individually controllable. When I say that, I mean that I can click on any region, and the object controller opens up. And then I can change the parameters. So if I do Control-P-- Control-P, not just p-- I can change that.
And if you look closely, this peach-colored region becomes-- well, there are actually two different things going on in here. There are two regions, actually, that are similar colors. So sometimes the colors come out a little close to each other. So I can look through them, change to the other one. So we have two kind of overlapping clusters-- I hadn't noticed it before, but there are two of them.
And we can change the opacity of one with Control-O. So Control-O and Control-P control the opacity and points mode of each of the surfaces. I find this a very useful tool for looking at activation: if you've got activation, what's the shape of it throughout the brain? Does it make some kind of sense?
And I think it's also good for demonstrating what you're looking at. If you've found something interesting, show it within the context of the brain. It's a reasonable way to look at your data. I'm going to close that and continue on. In this example, you can follow the same steps.
You can make a new directory of atlas surfaces, then make an atlas like that. So let's see where I've got mine-- TT_desai in, I think, an atlas surface directory.
AUDIENCE: Sir?
DANIEL GLEN: What's that?
AUDIENCE: Should we follow along?
DANIEL GLEN: You can do it. It doesn't-- it takes a little bit of time and it's-- but it's not too outrageous.
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: No. You're going to do this from an atlas that's in your abin directory.
AUDIENCE: So you could be [INAUDIBLE].
DANIEL GLEN: Yeah. So I've already done that, but I'll show you what that looks like. So here I have a directory. It's TT_desai. It's got the left and right regions. These came out of FreeSurfer, so these are FreeSurfer's segmentations over the 75 subjects. And you can run it in the same way that we looked at the clusters. So I'm going to do that here.
I'll copy it here: suma -onestate -i *.gii for all the GIfTI files in that directory, and -vol to load the volume of the TT_N27 data set. I've done a re-meshing, which has a kind of funny effect-- we have a unicorn horn on it. But other than that, it's pretty decent. The re-meshing has a mistake in it. So here, we can click on a surface.
And we have the label of the surface up here. And as before, you can change the opacity of that particular region, and change the points mode, so you can see it as a mesh or as points. If you look closely, you see it's just a speckle of points in there-- or hide it completely. And once it's hidden completely, you can click down to the next level below, and repeat the process with that region below it.
And so I find it's useful for figuring out what regions are next to each other, and what's under what. And so that's another way to look at an atlas-- look at your data. And you also have things like the TT_N27 volume in there, too. And that can be rendered, too. So that just ends up peeking through like that.
So with all that in there, the N27 doesn't make too much sense to show. But if you have just one or two regions, then showing it in context is really useful.
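A sketch of this atlas-surface recipe, assuming the Desai maximum-probability-map atlas name in abin (check whereami or your abin listing for the exact dataset name):

  # one GIfTI surface per atlas region
  IsoSurface -input TT_desai_dd_mpm+tlrc. -isorois+dsets -o_gii TT_desai
  # view them all in one state, with the N27 anatomical volume
  suma -onestate -i TT_desai*.gii -vol TT_N27+tlrc.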
AUDIENCE: And where can we find the notes for [INAUDIBLE]?
DANIEL GLEN: That's in the handouts-- the advanced viz notes-- should be there.
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: You give it the parcellated data set, which is typically an atlas in AFNI, and it will read the header of that and make separate surfaces, or one giant surface that is composed of the separate regions.
AUDIENCE: So if you have your inflated ROIs, then you can [INAUDIBLE]?
DANIEL GLEN: The inflated ROIs?
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: Yeah. I think that will work. I don't see why it wouldn't, actually. So it should work. If it has a label, it will use it. If it doesn't have a label, it'll just assign a number. In either case, you get the set of structures out of it. So we're seeing the TT_N27 data set with it, but you don't actually have to include a volume. I just did that to show it could be done together.
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: That's right. Right. I can also do this-- the structures that I've selected show up separately. That's what was showing up here; we have the white matter region in there. So you can mix [INAUDIBLE] whether you're customizing one region or all the regions.
And if you want to change everything back to points mode, you can do Shift-P, and everything is back. And Shift-O changes to the default opacity for the whole surface. You can turn off the convexity that gives it these dark spots with the letter b. Control-P? Yes, that's what that's for. But Shift-P turns everything on and off; it reverts back to all of it together.
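To recap the keys used in this segment: t makes SUMA talk to AFNI; the left and right bracket keys toggle the hemisphere surfaces; Control-P and Control-O set points mode and opacity for the selected surface; Shift-P and Shift-O apply those to all surfaces at once; and b toggles the convexity shading.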
These, of course, can continue to talk to AFNI. You could have AFNI open at the same time with this. Now I wanted to show a new part of AFNI: we have new ways to measure thickness. FreeSurfer provides a very commonly used way to measure thickness, and that is available for everyone.
But sometimes it's not useful for people doing studies that aren't run of the mill. So for things like macaques or marmosets or children-- infants, toddlers-- or studies that have lesions in them-- FreeSurfer doesn't do a terribly good job of tracking the thickness. So I was interested in making a new set of thickness tools.
And talking with Rick and Paul-- this is the kind of topic that gets us really excited, because we've been doing image processing for years. Each of us came up with different ways to do it, and they all work. So I'll go through a couple of these ways. These are some examples.
We wanted to make sure we were right. So we tried it on some models of objects-- some volumes that we created-- either these kinds of wafers or cylinders or spheres or things like that, and then applied it to other data sets. So here, these are macaque images. And this is a view in SUMA.
SUMA has the ability to clip through surfaces and show you the interior of this. This is a map. The thickness has been mapped from the volume. All the methods are basically volumetric methods to calculate thickness. So we're calculating thickness in the volume and then projecting it out to the surface. And then we can bring it back into the volume afterwards, too.
Because there are some advantages to doing that. But let's start with this. So you'll see that we measure things-- like [INAUDIBLE] here comes out very thick. We can even measure ventricles-- it doesn't really matter. It's just a mask for the data. The basic input is a mask data set. So that requires that the segmentation of that mask be good.
Because that's all we're going on. We don't really do the segmentation. There are some tools, but mostly we're going to get the segmentation from something else. And we did want to compare it-- to make sure that we're somewhere in the ballpark-- with FreeSurfer, using FreeSurfer's morning and evening session data.
And so we compared it. This is for a particular subject in the top row. And these are our three methods: one is called in-out, another is erosion, and then we have the ball and box method. And this is FreeSurfer's version of it. You can see that they're all very similar. The in-out method-- this one does require three input masks, basically.
You need an inside and an outside, and it measures how far every voxel is from the inside and outside masks. So for each voxel inside the mask, it will show you the distance to the inside and the distance to the outside. If you add the two together, you have something that's like thickness.
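In other words-- and this notation is mine, not AFNI's-- for a voxel v in the mask, with d_in(v) the distance to the inner boundary and d_out(v) the distance to the outer boundary, the estimate is thickness(v) = d_in(v) + d_out(v).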
That's the in-out measure. Another way to measure thickness is to look at the erosion of a data set: how many times you remove voxels from the outside of a cluster of voxels before it disappears. You can count how many erosion steps that takes, so you get a kind of depth measurement, and then project the maximum depth out onto a surface, and onto the volume, too.
So that's what the erosion method is. We also have the ball and box method. So the ball and box method-- you take a sphere and you put it down in your volume and see what's the largest sphere that you can fit at any place. And then you'll also do that with a box, just to make sure you get the corners-- kind of like you're playing Tetris, and you want to get the corners of the box.
So we'll put cubes down. So I call that the ball and box method. And all the methods seem to work out OK. They're very similar to FreeSurfer. And this is a [INAUDIBLE] across-- I forget how many subjects. I think there are 35 subjects or so, versus FreeSurfer's data. Actually, I think this is maybe just one subject. But we did this repeatedly for multiple subjects.
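For reference, these methods correspond to AFNI scripts along these lines (names as I recall them from the distribution; the flags and dataset names here are a sketch, and the mask conventions differ per script-- check each script's -help):

  # in-out: distances to inner and outer boundary masks
  @measure_in2out -maskset seg_mask+orig. -surfset smoothwm.gii -outdir in2out_thick
  # erosion: count erosion steps until the structure disappears
  @measure_erosion_thick -maskset gm_mask+orig. -surfset smoothwm.gii -outdir erosion_thick
  # ball and box: largest sphere/cube that fits at each location
  @measure_bb_thick -maskset gm_mask+orig. -surfset smoothwm.gii -outdir bb_thick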
And they were very similar. So this is in-out versus FreeSurfer. FreeSurfer stops-- their peak, their maximum thickness, is 5 millimeters. That's hard-coded, so they won't let anything go above that. But mostly, it's very similar. This is FreeSurfer morning versus evening.
FreeSurfer will give you different numbers on a repeated test. And it will even give you different numbers on a repeated test if you've zero-padded your volume. So you give it the same volume plus a slice of zeros on the end, and you will get a different set of thicknesses. And that will vary with roughly the same kind of differences as here.
So that's another thing to think about: FreeSurfer is going to give you [INAUDIBLE]. And each of these gives you a slightly different interpretation of what thickness means. Like I said, this has been useful for macaques and marmosets, and we're applying it now to toddler studies, too.
AUDIENCE: So the red means it's very thick?
DANIEL GLEN: Red is thick, yes. I think the images are scaled from one to five, or zero to five. So they're very similar kinds of numbers. The FreeSurfer and in-out methods are probably the most similar-- although it looks here like the erosion method is most similar to FreeSurfer.
But there's a difference in that if you're looking at [INAUDIBLE] of some sort, it could be either a very small thing or a very big thing, depending on how it ends up on the surface. So do you want a protuberance off a surface to be a small thing or a big thing? Are you measuring that as part of your thickness?
So there's a slightly different interpretation, but overall they all give kind of similar measurements. And these tools are very fast. They're doing things in the volume, and they work mostly in just seconds [INAUDIBLE]. Next is a macaque atlas with connections-- many of you are interested in macaques.
Anyway, I have a terminal open for it. So if anyone is interested, I can provide that to you. So, macaque connections. Let me close SUMA.
AUDIENCE: We don't have it?
DANIEL GLEN: I don't think that you do. But if you send me a note-- it is available on our web server, the macaque connections. I can send a link out for that. So I'll just start SUMA here. We're showing the D99 atlas, and the connections that Saleem has given us here. So he's given us a set of connections.
We can look at these connections in lots of different ways. They're all labeled with the regions. These are from different tracer studies, many of which he did, so he knows something about them; and some of them are from other people. So we can click on any particular region, and it shows what regions are connected to it. And click on another region to see what's connected to that from the tracer studies.
AUDIENCE: Red and yellow mean what?
DANIEL GLEN: Red and yellow mean--
AUDIENCE: [INAUDIBLE].
AUDIENCE: The edge coloration.
DANIEL GLEN: The edge coloration. So the edge coloration is the intensity of the connection. So, was there a strong connection, a moderate connection, a weak connection? How many cells showed up in the tracer study?
AUDIENCE: Red--
DANIEL GLEN: And red is the most. We can bring up the color scale-- the SUMA controller for that. Generally, I like to use a striped color scale-- one stripe for each intensity. But we can also threshold on [INAUDIBLE]. So when I pass one, the weak connections are gone.
And if I pass two, the moderate connections are gone and I see only the strong connections. So only the strong survive here.
AUDIENCE: [INAUDIBLE]
DANIEL GLEN: This doesn't have directionality in it. We can also see this another way-- if I open up another controller with Control-N, I can take this. So I've got two copies now. But if I change the view of it, I can see it as a matrix mode instead. And so, rather than traverse this geometrically, I can do this through the graph if I just click [INAUDIBLE]. See, I can even turn it.
AUDIENCE: [INAUDIBLE] and I want to [INAUDIBLE].
DANIEL GLEN: It's relatively simple. I'm sure you can do it. I mean, it's a list of edges-- a list of labels and edge strengths. So you say: this index is connected to this index, with this strength. And each index can even have a group number.
So we could color all the regions in one part of the brain one color, and all the parts in another part another color. And if I want to, I can show all of them again with the double right-click there. So you can see that AFNI and SUMA can take in a lot of different kinds of data. SUMA can now show multiple kinds: if you have connections in different forms-- tractography or connection graphs like this-- you can bring those in and show them anatomically or as a graph. I have not used a spec file for these.
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: These are basically text files of numbers and labels.
AUDIENCE: [INAUDIBLE]?
DANIEL GLEN: Yeah. Yeah, actually-- all right. So here's one that I made with collaborations here. So this is another way of showing a graph. I've got my data here. So here is a list of connections: node zero is connected to node one-- so these regions are connected.
This one doesn't have a strength in it; there could be a third column with a strength. So this is the list of edges. You can also have-- I think this is it-- so here are the nodes in my connection table. And this [INAUDIBLE] core, one [INAUDIBLE] and so on. And then there's the connection group at the end.
So I should have something. Let's look at the collaborations here. Given that collaboration data, we can look at the same data in a different way, after I've taken those data sets as input and converted them to connections.niml.dset-- contributor.connections.niml.dset. So let's do that.
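A sketch of that conversion and viewing step (ConvertDset's graph options and the input file names here are assumptions from memory-- check ConvertDset -help):

  # edges.1D: two columns of node indices; edge_values.1D: one value per edge
  # node_labels.txt / node_xyz.1D: region names and their coordinates
  ConvertDset -o_niml_asc -input edge_values.1D \
              -graph_edgelist_1D edges.1D \
              -graph_named_nodelist_txt node_labels.txt node_xyz.1D \
              -prefix contributor.connections
  # view it in SUMA as a graph data set
  suma -gdset contributor.connections.niml.dset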
So let me set [INAUDIBLE] data [INAUDIBLE]. You know what? I can just show this with the data set itself. And so SUMA-- [INAUDIBLE] just that. [INAUDIBLE] is a way to show your collaborations. It's just another kind of data. It's got labels. It's got nodes. And here I can manipulate this in the same way I did the others.
So let's do that. I'm going to change the connection here. I can change the size of the spheres. I can change the size of the lines. I didn't assign strengths to the connections-- how much each contributed-- because nobody wants to offend anybody. It's already hard enough.
AUDIENCE: [INAUDIBLE]?
DANIEL GLEN: Yeah. You could do it-- you can rate something like that if you want to be in trouble with everyone. But yes, you can do that. So this works just as it does here. You can set fonts and things like that. If I turn on momentum, I can do that-- and that's a fine close for a talk, right? Are there any questions about how to visualize stuff inside AFNI?
AUDIENCE: [INAUDIBLE]?
DANIEL GLEN: So that's just the third column in the edge list. So you show that this is connected to this, and then how strong is that edge.
AUDIENCE: [INAUDIBLE]?
DANIEL GLEN: Yeah. So it will take that-- the edge value is mapped to the color bar. So here, it's in red, because they're all equal. But on the--
AUDIENCE: The edge [INAUDIBLE].
DANIEL GLEN: That's part of the graph. Right. So when it converts those, it says this is connected to this, and this is connected to this. It puts in a default edge strength of one.
AUDIENCE: Is it a grid file [INAUDIBLE]?
DANIEL GLEN: It's-- yes. It's basically a grid file. Yes.
AUDIENCE: [INAUDIBLE]?
DANIEL GLEN: It's not the grid file. It's a gdset file. But bringing it into SUMA, it's pretty much the same thing-- with -gdset.
AUDIENCE: Do we have functions [INAUDIBLE] strength [INAUDIBLE]?
DANIEL GLEN: I can--
AUDIENCE: [INAUDIBLE]. So that, you'll have your [INAUDIBLE].
AUDIENCE: [INAUDIBLE].
AUDIENCE: [INAUDIBLE].
AUDIENCE: [INAUDIBLE]. It's a [INAUDIBLE].
AUDIENCE: [INAUDIBLE].
AUDIENCE: Yeah. Yes. [INAUDIBLE] sort of a matrix.
DANIEL GLEN: Let's see, I wanted to show one or two other little things. So over here, I made a movie of some of the EPI data, as we were looking at it before, but now mapped onto the surface. So here's the same data that we've been seeing throughout the week, but now, over time, it's mapping this to the surface.
I used the [INAUDIBLE] plug-in to actually do this. And you can see some things that are very unusual. You see the visual cortex is all kind of together, on and off. And then, every once in a while, you see it all turn red. When it turns red, that's either the pre-steady state or the motion.
And it all goes out of scale. So this is all mapping into [INAUDIBLE]-- well, I can't turn it around, because it's just a movie; it's not SUMA. So it's mapping into the volume by a few millimeters. So that was another interesting way to look at this data. [INAUDIBLE] with your-- these are clusters. So you can do that with the clusters over time, to see how activation happens-- to see it back in the original time series. Now for this one, I did a little trick: I demeaned it to be able to bring up the color scales in the proper way.
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: Yeah. So there's another thing that I wanted to show. David Jangraw was working on a project here, and he contributed this script-- I'll give you this. I think I may already have it open, so I'll just put it here. David Jangraw is in Peter Bandettini's group. He made us a script that makes montages of SUMA views.
So you can get different views-- left, right, up, down-- and it will save those for you. Now, he was working on this as part of another project, which was pretty interesting to me-- and this one made it to the news. So we were pretty happy about that. Paul Taylor and I worked on this.
Let's see if I have one of these. Here it is. I should probably turn the volume on. So you see what's [INAUDIBLE] here?
[SINGING]
[VIDEO NARRATION]
So this was Music and the Brain at the Kennedy Center. And we got to see SUMA with our rendering there, hanging over the orchestra and over Renee Fleming as she was singing the song, because she was scanned in the scanner at NIH. And that's David Jangraw explaining to her what they're doing there. So, what's that?
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: I'm sorry. It's hard to hear you.
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: Yes. There are a lot of issues-- respiration, motion. They had her do different kinds of things in the scanner. Yeah. So there, we used the same kinds of things that we've been talking about today, on how [INAUDIBLE] activation as you go. So again, remove this.
In this case, David just removed everything that was outside some standard deviation over a baseline. So [INAUDIBLE] close with that. If you have any more questions, feel free to ask. We'll stay around for a while, until we've finished answering your questions. All right. Thank you for all your attention. Thank you, Frederico, for your help. Thanks, Chris, too. Thanks.
[APPLAUSE]