34 - Advanced Visualization with AFNI & SUMA - Part 1 of 2
Date Posted:
February 15, 2019
Date Recorded:
June 1, 2018
Speaker(s):
Daniel Glen, NIMH
All Captioned Videos AFNI Training Bootcamp
Description:
Daniel Glen, NIMH
Related documents:
For more information and course materials, please visit the workshop website: http://cbmm.mit.edu/afni
We recommend viewing the videos at 1920 x 1080 (Full HD) resolution for the best experience. A lot of code and text is displayed that may be hard to read otherwise.
DANIEL GLEN: Before we start, I want you to-- if you have the FATCAT demo, you can get that started, so you can see what that demo is. We can look at the visualization for that. So just run tcsh Do_00_PRESTO_ALL_RUNS.tcsh. Yes?
AUDIENCE: So we cd to AFNI_demos/FATCAT_DEMO?
DANIEL GLEN: Yeah. AFNI_demos/FATCAT_DEMO. That's the directory. And then do this: type Do_00_PRESTO and hit Tab to complete the script name. It goes through a lot of different steps. Not sure if mine's finished yet. Nope. It's not finished. OK. All right.
So while that's running, we can talk about some other things. I want to describe some of the ways that you can drive AFNI and SUMA. This is a really powerful feature of AFNI: you can script a lot of things, even the GUI. So we know that we can script the regular command-line programs, but we can even call the GUI and have it do things for us.
Let's go through some examples. Everything about driving AFNI is described in more detail in the README.driver document that should be in your abin directory. I think I may have a tab open for it already. So you can drive AFNI with a command and give it arguments. Or you can call afni with -com, and then you give it a command on the command line.
So when you start AFNI, you can give it commands and it will do that series of commands. So here we start AFNI, tell it to open an axial image, then switch the underlay to the anatomical data set, then save that axial image into a JPEG file called ss.jpg, and then quit out of AFNI. And it ends with the directory to look at, just like a regular afni command.
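As a sketch, that opening example might look something like this on the command line (assuming the AFNI binaries are installed and a session directory with an anat dataset; the dataset and directory names follow the class data and are illustrative):

```shell
# Start AFNI, run a few driver commands, save a JPEG, and quit.
afni -com 'OPEN_WINDOW A.axialimage'      \
     -com 'SWITCH_UNDERLAY anat'          \
     -com 'SAVE_JPEG A.axialimage ss.jpg' \
     -com 'QUIT'                          \
     AFNI_data6/afni
```

Each -com argument is one driver command from README.driver, executed in order after the GUI starts.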
That's about the simplest example. But all of the possible functions are described in this document: how to set colors, set thresholds, set how many panes are in your color bar, whether you're looking at positive or negative values. So there's a long list of different commands that you can tell AFNI to do from outside AFNI, or on the command line when you start AFNI.
So this is useful because you can create lots of scripts to make AFNI do things for you. And you can take advantage of the fact that AFNI is still interactive while you're doing this. So if you want to look at some other aspect, you can do that. But if you write a script for somebody else, you can have your users always look at the same thing. Or you can have it for yourself, so that you always look at the same slice and save that slice as a quality control image. So these kinds of things are very useful for reviewing data and for quality control tools.
OK. So let's go through that. And we have another document, README.environment. I showed you that in the first session; these are the environment variables that control other aspects of AFNI. And here's another long list of things that control what AFNI uses: things like AFNI_ORIENT, what coordinate order AFNI should use by default and what gets shown there; where you keep your plug-ins, what directory those are in; whether hints are turned on as you hover over a button. So lots of different choices there too.
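For instance, a few of those variables might be set in your ~/.afnirc file like this (the values shown are just illustrative choices, not recommendations):

```
***ENVIRONMENT
   AFNI_ORIENT     = RAI              // coordinate order used for display
   AFNI_HINTS      = YES              // pop-up hints when hovering over buttons
   AFNI_PLUGINPATH = /home/you/abin   // hypothetical plug-in directory
```

README.environment lists each variable with its allowed values.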
Let's look at an example of a way to call AFNI. One of the scripts that comes with the AFNI binaries is called @DriveAfni. So in a terminal window you can go ahead and do that by switching to your home directory. We'll look to see what it's actually doing. But just so you can use it, let me open up a new terminal window.
And in your home directory, you can call @DriveAfni somewhere just above your AFNI_data6 directory, because it's going to look at the AFNI_data6/afni directory that we've been looking at all week, and it will run this series of commands. So before I start it, let's take a look and see what it's doing.
So the script starts out with some help. And then it has a tag, so that it can talk to AFNI over a specific instance of communication; it knows that this is the @DriveAfni demo. It's not necessary, but if you have multiple AFNI and plugout_drive instances talking to each other, you'll know which one is which.
OK. So some details here. This starts AFNI with these important options: -niml and -yesplugouts. The -niml tells it that it can communicate with something like SUMA, or some other program that can talk in the NIML markup language over TCP/IP with AFNI. And the -yesplugouts says that we can use plugout_drive, and this is what we're going to use here. And then we give it what directory to look at. And we're going to redirect all the output to /dev/null, so we don't even see anything that AFNI reports here.
So then we call this program plugout_drive. This is a convenient way to send things to AFNI. With plugout_drive, you give it the commands just as you would on the command line of AFNI. So you can put afni -com and all these different things, or you can do plugout_drive -com and put the commands there.
So here, we tell it to switch to the A session of AFNI and open an axial image; that window should be this large and have these properties. And there are other properties, of course. Open an axial image, open a sagittal image. Switch the underlay to the anatomical data set. And switch the overlay to the skull-stripped data set.
And then turn the overlay on. And then simulate pressing the V key. So here we can simulate key presses by scripting. So this is really handy. So this will turn on that video mode. So we're going to go through all the slices with that V key.
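Pieced together, that part of the script looks roughly like this (window geometry and dataset names follow the class demo and are illustrative; keypress= is how README.driver spells simulated keystrokes):

```shell
# Start AFNI listening for plugouts, hide its terminal output,
# then send it a series of commands with plugout_drive.
afni -niml -yesplugouts AFNI_data6/afni > /dev/null 2>&1 &

plugout_drive                                            \
    -com 'OPEN_WINDOW A.axialimage geom=600x600+416+44'  \
    -com 'OPEN_WINDOW A.sagittalimage'                   \
    -com 'SWITCH_UNDERLAY anat'                          \
    -com 'SWITCH_OVERLAY strip'                          \
    -com 'SEE_OVERLAY +'                                 \
    -com 'OPEN_WINDOW A.axialimage keypress=v'           \
    -quit
```

The final keypress=v line is what starts the video mode, just as if you had pressed V over the image window.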
All right. And then we're going to prompt the user whether we should go on or not. So we have a command, prompt user, and that will stop it for a moment. And it goes through. It says: open another window, press the space bar to stop the video. Then set up a montage for me at 6 by 3, 8 slices apart, and then go to a specific coordinate in DICOM RAI order.
And then save a picture of that montage, those axial slices. It goes on. We can go through a graph window and press the V key. We can turn on the overlay here, look at the func_slim data set that we've been looking at, set the threshold to a p-value of 0.001, and set the range of the overlay. And go to another coordinate. And then at the end, we can tell AFNI to quit.
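The rest of the sequence might be sketched like this (coordinates, dataset names, and the range value are illustrative; SET_THRESHNEW with *p sets the threshold as a p-value):

```shell
# Montage, jump to a coordinate, save a picture, then threshold an overlay.
plugout_drive                                        \
    -com 'OPEN_WINDOW A.axialimage mont=6x3:8'       \
    -com 'SET_DICOM_XYZ A 20 30 40'                  \
    -com 'SAVE_JPEG A.axialimage montage.jpg'        \
    -com 'SWITCH_OVERLAY func_slim'                  \
    -com 'SET_THRESHNEW A 0.001 *p'                  \
    -com 'SET_FUNC_RANGE A.3'                        \
    -com 'QUIT'                                      \
    -quit
```

mont=6x3:8 is the 6-by-3 montage with slices 8 apart, and SET_DICOM_XYZ jumps to a coordinate in DICOM RAI order.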
So it's a simple program. Use this as a kind of template for your own plugout_drive scripts. And I think you'll find it very handy. So let's just try that. From the home directory, or somewhere above your AFNI_data6 directory, let's just run @DriveAfni. It's echoing out our plugout_drive command.
So we've got the overlay on. It's pressed the video key for us, and it's prompted us to go on to the next step. So AFNI is live while we're doing this. We can still do other things in AFNI. Even though it's being scripted, we can do other things. It's still available to us.
Now it's set the montage for us. And the montage is still available; we can move through the image. We can select OK. And it's now saved the axial image for us, and it's opened it up for us to see with aiv, the AFNI image viewer program, so we can see our montage as a separate image.
So these kinds of things are really useful for quality control. You want to see all your slices, you want to see overlays, underlays. It's an automated way to do all this. Now it's going to go through and it's put the EPI data set here as our underlay. And it's simulated pressing the V key over the graph window. So we can use that to look for motion.
So if you have a new student, postdoc, somebody who isn't familiar with fMRI and you want them to see how to look at things, you can make a script for them to do this. So this can be a kind of teaching tool as well. So hit OK to pause the movie.
And now it's changed the overlay to the contrast between the reliable visual and auditory stimuli, with the coefficient as the overlay and the t-stat as the threshold. So it's thresholded this for us, and it's jumped to a particular coordinate. And that's it for the @DriveAfni script. Were there any questions about that? About any of its capabilities?
AUDIENCE: Yeah, I tried doing that on my own data and I tried doing it [INAUDIBLE] and I tried by changing just the names on my [INAUDIBLE]
DANIEL GLEN: OK. Yeah. It's usually not too difficult. If you have multiple copies of AFNI running at the same time, it could be a little bit confusing because then you need to set up separate port blocks for each one. But there's a kind of automated way to do that. But it could just be some small syntax issue.
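The automated way being referred to is the port-block option; a sketch (the -npb numbers are arbitrary labels, and the directory names are hypothetical):

```shell
# Give each AFNI/driver pair its own port block so they don't cross-talk.
afni -npb 1 -niml -yesplugouts subj1_dir > /dev/null 2>&1 &
afni -npb 2 -niml -yesplugouts subj2_dir > /dev/null 2>&1 &

# This drives only the first AFNI instance:
plugout_drive -npb 1 -com 'SWITCH_UNDERLAY anat' -quit
```

As long as a driver and its AFNI share the same -npb number, the commands go to the right copy.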
The one thing that you should know about plugout_drive is you can open plugout_drive all by itself, and then it becomes a kind of interactive prompt. You can just try different commands, and AFNI will tell you whether they're OK or not. You should turn off that /dev/null redirection business so you can actually see what comes out of AFNI, because if there's an error, it won't show it otherwise.
AUDIENCE: [INAUDIBLE] 45 subjects. Can you do this for every subject or we can go ahead and then have it non-definitely [INAUDIBLE]?
DANIEL GLEN: So you can do both, right? I mean, it depends how much time you want to dedicate to it. We have procedures for doing both these kinds of things. I'll show you another thing that we use for kind of looking at lots of different subjects, and maybe a little bit faster way. So it depends how much data you've got to look through, whether you've got 10 subjects, 20, 100, 1,000, 10,000, it's more or less detail. So we've got different tools for different jobs.
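As a sketch of the batch route, a script can simply loop over subjects and run the same driver commands for each; here the commands are only echoed so you can see what would run (the subject IDs and dataset names are hypothetical):

```shell
# Print one plugout_drive command per subject rather than executing it;
# drop the "echo" to actually drive AFNI for each subject in turn.
for subj in sub-01 sub-02 sub-03; do
  echo plugout_drive -com "SWITCH_UNDERLAY ${subj}_anat" \
       -com "SAVE_JPEG A.axialimage QC_${subj}.jpg" -quit
done
```

Wrapping the driving in a loop like this is how the same QC snapshot gets made for 10 or 100 subjects.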
AUDIENCE: What you were editing for [INAUDIBLE], we ran it, we ran it above our [INAUDIBLE] data. Can you specify [INAUDIBLE] drive AFNIs for different, doing different things with different data sets that go out on [INAUDIBLE] and drive AFNI.
DANIEL GLEN: Yeah. So this is just an example script. @DriveAfni is just an example; plugout_drive is going to be the tool that you're going to use inside your own script. So this is just made for the class data as an example. So if you have marmosets or macaques or humans, different studies, you're going to have different kinds of scripts, most likely, for each of those data sets.
AUDIENCE: Sorry, so then that means we add-- I we add drive [INAUDIBLE]
DANIEL GLEN: No, no. That's--
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: We provide them as an example, but they're really made to go with our class data. The story of the at sign is an old, old story. The at symbol comes from an old way to run a command language script on old VAX systems. VAX/VMS, I believe, had the DCL language, which you would run with an at symbol. You would say: at this script.
And somehow it got transitioned into our script names. So it's a kind of code for us to say that these are script names; this is a t-shell script, usually. It's not Python, and it's not compiled C code. But it is just an example. It is one of the more unusual things in our distribution, in that it's really made for specific data.
Probably shouldn't be there. It should probably go with the data I think. But it's such a good example that every-- most people will need to see it. So good question, thank you.
So let's go on. So this is quitting out of AFNI. It doesn't quit out of aiv, so that's still up there; we'll just close that manually. OK, so we can do afni -com, we've got plugout_drive, and we have the AFNI environment variables. We also have a similar script called @DriveSuma, which calls a program called DriveSuma. DriveSuma is very similar to plugout_drive, in that it tells SUMA what to do.
So we have a script here. We can try it. I found out it was a little bit buggy yesterday, and so I fixed my version, I think. But you don't have the version that I fixed; that's on my system. I found I had to put in extra time delays. Not sure why in this case. But it seems to work with those time delays.
So I'm going to do another script called @DriveSuma. We may want to look at that first. So @DriveSuma, also in our distribution. Let's just scroll down to the more interesting aspects of this. It says, make sure that you don't have any other SUMAs running. I don't think I do. And it's going to create an icosahedron, and then do some stuff with it in SUMA.
So DriveSuma: show a surface, and input it here, CreateIco.asc. And if you look at DriveSuma -help, it gives you some help. But mostly, I would say, use these kinds of things as examples of the scripts that we give you, because it's a little tricky to figure out all the contexts of these things. All right, so DriveSuma has a fairly long help. So there's that there.
So let's try @DriveSuma. I'm not sure if it will work on your system, but let's see. It prompts us to make sure no current SUMA is running, and press OK. It shows us some shape. And it tells us it's going to be doing some other stuff and that we should look at this prompt. And that SUMA is still running, so we can do stuff just as we could in AFNI while it's doing this driving business.
So I just clicked select OK to continue. OK, this is loaded, our icosahedron, our simple one. It's a little difficult to see because it's zoomed in so much. But here's a simple icosahedron. It's going to be reprojecting all the nodes onto a sphere. So here our nodes are moved out. And then another coordinate assignment for those nodes. So it looks very different. Still the same topology.
And now it's going to control the viewer. It's going to record a movie for us by pressing the ctrl+right key. And we'll jump to a location, as if we pressed ctrl+j. So here you can simulate pressing keystrokes. That's in this @DriveSuma script.
And then it will press the up key three times, sleep for 0.3 seconds between presses, and then press the left key twice, and so on. And this is going to be kept in our movie. And it pauses it there. So let's pause.
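That keystroke-driving piece would look something like this with DriveSuma (assuming SUMA is already up and listening; the :rN and :sN modifiers repeat a key and sleep between presses, following DriveSuma's -key syntax):

```shell
# Simulate keystrokes in the SUMA viewer from a script.
DriveSuma -com viewer_cont -key ctrl+right     # as if ctrl+right were pressed
DriveSuma -com viewer_cont -key ctrl+j         # jump to a node, like ctrl+j
DriveSuma -com viewer_cont -key:r3:s0.3 up     # "up" 3 times, 0.3 s apart
DriveSuma -com viewer_cont -key:r2 left        # "left" twice
```

Any key you could press in the viewer can be scripted this way.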
And now it's turned on momentum. This is just a remapped icosahedron. It's just a toy example. It has nothing to do with brains, really. It's just for fun, and mostly just a lesson in how to drive SUMA.
AUDIENCE: So Daniel, all of these drives [INAUDIBLE].
DANIEL GLEN: Yes. So it actually-- this is the bug I found out last night about this, is it's not just sitting there. It looks like it's just sitting there. The prompt has happened. But for some reason, the window is a single line.
Yes, this took me a long time to figure out because this had been working several months ago. And I just assumed that it would work. But something has changed on the X system. So the solution was to put in a sleep of 3 seconds, and I think-- let's see if I can show you that. Yeah. I wasn't sure if it was true for everyone.
I did update the Git repository, because I figured an extra 3-second sleep for this demo can't really hurt. So if you edit your file as I did, you can replace all your set l lines with sleep 3 ; set l. And that would start a 3-second sleep before each prompt. I don't know if you want to go through that. But that was my solution.
You replace all the lines that say set l equals-- there are many of them-- so all the prompts had to be prefixed with a sleep 3.
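That replace-all can also be done with sed; here is a sketch of the idea on a scratch file, in case your editor isn't handy at it (the exact spelling of the prompt lines in your copy of the script may differ):

```shell
# Make a scratch file standing in for the script's prompt lines,
# then prefix each "set l =" line with a 3-second sleep.
printf 'set l = ( "press enter to continue" )\n' > /tmp/demo_prompt.tcsh
sed -i.bak 's/^set l =/sleep 3 ; set l =/' /tmp/demo_prompt.tcsh
cat /tmp/demo_prompt.tcsh
```

The -i.bak form edits in place while keeping a backup, and works with both GNU and BSD sed.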
AUDIENCE: [INAUDIBLE] you are running @DriveSuma.
DANIEL GLEN: I'm running @DriveSuma
AUDIENCE: It seemed like it started [INAUDIBLE] for quite a while already.
DANIEL GLEN: It's just that the window has been reduced to a pretty much invisible size. Sorry about that, but with a little bit of extra sleep, @DriveSuma will work, just like all of us.
Anyway, so that was my solution late last night, trying to figure out why this wasn't working. I was surprised to see that didn't stop.
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: Yeah, it depends if your editor is handy at that-- it'll do a replace-all, that could work. OK, I'm going to continue through the demo. It's changed from fill mode to points mode. And then you can press [INAUDIBLE] back to its filled mesh mode.
So here this recorder can be manipulated like an AFNI viewer. So here, I don't know if we went through this, but in the recorder, there's a recording. And so you can go through the video there. V works in that too. All right.
The @DriveSuma script will save your images into a GIF file and a JPEG file, and so on. So it's saving different things; you can tell from the text going through in the terminal back here. And now it's going to load some data.
And it's going to load a new surface called [INAUDIBLE]. That's here. And open the surface controller for it. And it will select which sub-brick-- which column of its NIML data set, its 1D data set-- to show as intensity. And change the threshold too. These are all just toy examples, but for your own data, you'll want to do something like this too.
And setting the intensity range to 0.05 for this data. It's all green there. So here it looks a little bit more interesting. And I'll switch to another intensity sub-brick and change the range there and threshold level. Now it's changing the brightness.
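The surface-controller driving just described might be sketched as follows (the dataset name, column indices, and values here are illustrative):

```shell
# Load a dataset onto the surface and set intensity/threshold from a script.
DriveSuma -com surf_cont -load_dset demo_data.1D.dset   # open the controller
DriveSuma -com surf_cont -I_sb 0 -T_sb 0 -T_val 0.05    # pick columns, threshold
DriveSuma -com surf_cont -I_range 0.05                  # set the intensity range
DriveSuma -com surf_cont -I_sb 1 -I_range 2             # switch sub-brick, new range
```

-I_sb and -T_sb choose which columns feed the intensity and threshold, mirroring what you'd click in the controller.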
And now it's going to show us convexity-- that's kind of a default-- and change the color. There are a lot of examples here. These are all examples of how to change color scales. It's mostly here as an example; all the lines there are something that you might want to do. So it goes on through a lot of different variations, and finally changes the color map.
And that's it for the DriveSuma demo. And then that closes that. Any questions about driving SUMA?
AUDIENCE: Where do we find that script you're editing?
DANIEL GLEN: I have updated it in the source code, so it will be in the distribution soon. The next version of AFNI will have it in there. Yeah.
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: Yeah. I remember there's one other sleep that has to happen. There's a prompt within-- oh, when it prompts SUMA to prompt. Rather than a prompt outside, there's a prompt inside SUMA. So that prompt has to have a sleep too, because that opens up a window in a similar way. And there's something that X11 doesn't like anymore. I replaced that too. This is tricky at 2:00 in the morning.
Let's see. DriveSuma, @DriveAfni, @DriveSuma. We have @DO.examples. This is a kind of a fun one here. @DO.examples shows you how to use displayable objects in SUMA. It's another DriveSuma thing. You can edit it as can-- [INAUDIBLE] I don't know. Actually, I didn't try it last night. I was going to try it, and I got distracted by this thing. The other one.
Well, we'll see. I may have to fix this one too. Well, SUMA is-- oh, OK. So--
AUDIENCE: In mine now, I ran a script. It wouldn't run. And I closed the script. And I opened SUMA. And I got that [INAUDIBLE].
DANIEL GLEN: Yeah. OK. So when SUMA starts without any input, you can hit the comma and period keys and cycle among the different example surfaces. So a lot of examples are in there. Your SUMA's not starting with this? Because I didn't edit @DO.examples last night, so it should--
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: You do comma and period. So you can start SUMA any place. And it's got these kind of built-in surfaces. And in the terminal that it opens up, you can-- I think you can just-- no, it's got a problem. No, OK. So here-- I think it-- oh, it's got the same issue. So well, that's annoying. So we'll skip that. Yeah, there's a-- if you look, there's a hidden prompt in there.
AUDIENCE: Yeah. I get that [INAUDIBLE].
DANIEL GLEN: You get that prompt?
AUDIENCE: Yeah. Then it's [INAUDIBLE].
DANIEL GLEN: I don't know. It's-- just to show you what it could potentially do, you can just watch the screen. I can go through this. So it can show you these kinds of objects. And I've got--
AUDIENCE: [INAUDIBLE]
DANIEL GLEN: So, yeah I can go-- I have some other examples.
AUDIENCE: [INAUDIBLE]. So I tried opened SUMA first, and [INAUDIBLE] I ran the script and I'm getting [INAUDIBLE]. I opened SUMA first. Then [INAUDIBLE] a different terminal window started the scripts, and so then SUMA [INAUDIBLE].
DANIEL GLEN: OK. So it's adding vectors and spheres. So these kinds of things. You can add different shapes or images. You can add planes and different surfaces, filling modes.
And now we're going to show some NIML displayable objects. So texture images can be added in. So we can add in a picture into the scene. What is the story of that AFNI man, I don't really know.
AUDIENCE: [INAUDIBLE].
DANIEL GLEN: Oh, yeah.
AUDIENCE: And so [INAUDIBLE].
DANIEL GLEN: That's right, yeah. So here we have-- you can color your images with a-- color your surface, your displayable objects with an image too. Some text. You can have it do the keystrokes and zoom in. And I could see I have a window there. OK. And that's it for the displayable objects. So there's some extra kinds of things that you can put into a scene with surfaces.
Another thing we have is called @chauffeur_afni. This has been a fairly popular program. You can make montages automatically with it. And it's a short script itself. So @chauffeur_afni is a general utility; mostly it brings in a lot of different options, and most of its work is done right near the bottom of the script. It's pretty far down, I think. Oh, missed it. OK.
The actual driven command is afni -noplugins. So this doesn't use plugout_drive; it actually calls AFNI with the -com option. And beforehand-- well, it's running-- I think I'm in the wrong place here. So let's see. Let's find the Xvfb part.
So it's going to use Xvfb. The Xvfb part is so that you don't actually see AFNI running; it's just setting it up with a virtual X11 frame buffer. So AFNI starts, it does its business, it saves the images you asked it for, but you'll never see it. This is handy if you're running AFNI on a server remotely and you don't actually need to see the process of saving the JPEG images, the montages. You're creating a quality control report, and you're going to look at all these montages later.
So let's run this program. And I have an example here. You can just watch this. So you can give it lots of different options: what's the underlay, what's the overlay, what's the color bar, and so on, and the size of the montage you want to look at. You could do this all yourself with plugout_drive; this is just a little bit easier. And it does the Xvfb business for you, so you don't have to see it.
And so this has been a pretty popular tool. So let's-- I'm going to just copy this, and I'm going to run it in my AFNI_data6/afni directory.
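The pasted command was along these lines (the dataset names follow the class data; the montage size, range, and threshold values are illustrative):

```shell
# Render overlay-on-underlay montages to image files without showing the GUI.
@chauffeur_afni                   \
    -ulay  anat+orig              \
    -olay  func_slim+orig         \
    -func_range 3                 \
    -thr_olay 3.0                 \
    -prefix QC_images             \
    -montx 3 -monty 3
```

Because the rendering happens in the virtual frame buffer, this runs fine on a remote server with no display.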
So is my name there? OK. So it created montages and wrote them out. And we didn't actually see AFNI pop up. So now we can look at the output with some sort of graphics viewer, like aiv, our AFNI image viewer. You could use [INAUDIBLE] or something else too.
And this is what @chauffeur_afni has done for us. It's created these montages with the overlay and underlay selected and thresholded for us. And it does it for-- let's go through the slider at the bottom. We have the axial and the coronal and the sagittal.
So it's a handy little tool. It's yet another way to just drive AFNI to do stuff for you. I just-- you can make it-- they're fairly short commands, but I used it in the-- I have just a text file that is not in distribution for the class. It's just a kind of last minute addition here.
So I can make this available too. It's a very simple program. It's setting our color scale to that particular color scale too. There's a similar program, @djunct_4d_imager. That one shows us similar kinds of things as @chauffeur_afni, but it's showing us montages across time and across space.
So if you want to see particular slices at different time points, it will give us a montage of those things, which is a little bit tricky to do otherwise. So this will be a convenient tool for that. Rick has already shown you the @ss_review_basic and the @ss_review_driver. These are two other kinds of scripts to drive AFNI and give you quality control information.
We also have @snapshot_volreg. So if I do that in this directory, and I give it an EPI data set and an anatomical data set-- two data sets that I want to see if they're registered with each other, whether they're aligned-- I can do that here. I'll paste that @snapshot_volreg command. It calls Xvfb, the virtual frame buffer. It's going to put some edges on our images, and then open up AFNI and save it for us as a picture.
AUDIENCE: [INAUDIBLE]
DANIEL GLEN: Yeah. Yeah.
AUDIENCE: This for meeting the [INAUDIBLE] makes sense.
DANIEL GLEN: Yeah, so I mean, you can call it on the command line with the data sets wherever their paths are. It will save it locally, though. So it saves our output as test_snapshot. So let's just look at that again with aiv. And that's what it's done for us. It's shown us the edges of the EPI with the anatomical data set.
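The call itself is short; roughly like this (the EPI dataset name is illustrative, and the last argument names the output image):

```shell
# Edge-overlay snapshot of EPI-to-anatomical alignment, rendered via Xvfb.
@snapshot_volreg anat+orig epi_r1+orig test_snapshot

# View the saved montage:
aiv test_snapshot.jpg
```

One line per subject like this makes a quick alignment check across a whole study.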
So it's just another way that you can check to see if your data is aligned, quickly, just at a glance. All right. We have the @AddEdge script. This one does something similar. It's called by align_epi_anat.py if you want. But it can also be called on the command line by itself, and you can do something similar to @snapshot_volreg.
In that second usage, it will drive AFNI to show either a single edge or a double edge, and show you the two data sets over each other. Or you can have five different data sets-- five different ways to see the same data set. We use the @AddEdge script for evaluating our LPC alignment technique and comparing it with other methods. And so we could do that here.