07 - Regression: Part 1 of 2
Date Posted:
August 3, 2018
Date Recorded:
May 28, 2018
Speaker(s):
Rick Reynolds, NIMH
All Captioned Videos AFNI Training Bootcamp
Description:
Rick Reynolds, NIMH
Related documents:
For more information and course materials, please visit the workshop website: http://cbmm.mit.edu/afni
We recommend viewing the videos at 1920 x 1080 (Full HD) resolution for the best experience. A lot of code and text is displayed that may be hard to read otherwise.
RICK REYNOLDS: In this session, we'll actually do a hands-on linear regression. Basically, this is going to be, hopefully, focused mostly on model fitting, and I won't get distracted and babble too much about the pre-processing. I will a bit.
But the model fitting-- so basically, when you do an fMRI experiment, assuming it's a task-based one, you will design timing, and stuff like that, stimulus durations. And then you'll have a corresponding model for what, say, is an optimal response or an expected response to all your stimulus classes. And then you'll fit this model to the data. That model can include the drifts, or sinusoidal terms for drifts, or motion regressors, stuff like that. So you'll fit all these regressors in your model to the data, and then you'll get beta weights for the magnitude of responses to each one of these regressors.
Understanding the timing, and the creation of the model, and the subsequent beta weights is perhaps the most fundamental thing to fMRI. The pre-processing steps are just a sequence of steps to get prepared for this main step. And then for each subject, you create your beta weight maps of interest, and then you take those to a group analysis. And maybe you'll just do a t-test or something more complicated, but getting these beta maps at the single-subject level is something you'd like to understand pretty well. And that specifically means the modeling that [? Gang ?] was just covering. But we'll do more of a hands-on approach in this session.
So [? Gang ?] just talked about-- well, Daniel talked about using the AFNI viewer interactively, and [? Gang ?] talked about modeling the HRF with both fixed shape and variable shape. In this session, we'll babble a bit about pre-processing, and we'll do that more tomorrow as well-- more detail tomorrow than today. And then we'll just briefly remind you of regressors, and the design matrix, and stuff like that. But [? Gang ?] just talked about it, so we don't have to say much.
And then we'll actually look at the data, and look at the model fit to the data, and things like that. So we'll spot-check the original data just to see if it looks like it ought to, if there aren't any glaring artifacts in the data that we should be concerned about, at least that we notice. And then we'll play around with the statistical thresholding, and maybe do some clustering, and stuff like that.
So we're talking about this y equals x beta plus epsilon equation, at least in the simple form for ordinary least squares. So this is solved one voxel at a time. You may have done pre-processing to get here. You may have scaled your data, registered it, and things like that, or not.
But at the regression point, at each voxel, we're solving this equation. And right now, we don't care about our neighbors. Each voxel is independent.
We have the-- the x matrix is our regression matrix. Basically, those are all time series that we think we might see in the data. They're good things, or bad things, or we want to account for them, whatever, but they're things that we want to model in the data-- baseline drift, motion, regressors of interest, what have you.
And the solving-this-equation part comes in figuring out that beta vector. So that beta vector is just a scale factor for each time series. How much of this, how much of this, how much of that-- that's the beta vector, the magnitudes. How much of each time series was in this current voxel's time series?
And then epsilon is just what you failed to model. How much does your-- so you fit your model as close to the data as possible. And what do you miss? That's the epsilon time series.
And this solution in an ordinary least squares sense, again, is where the sum of the squares of the epsilon terms is minimal. For this course, we'll mostly focus on ordinary least squares, but you can do the same analysis with the ARMA(1,1) model and 3dREMLfit if you want to account for the temporal autocorrelation.
Again, as [? Gang ?] said, the beta weights are not biased with ordinary least squares. So you're OK just taking the beta weights to group analysis. But if you want to show statistical significance at the single-subject level, like in some of the animal studies, or if you have patient populations that are rare and you can't do a group analysis of these patients, then for the t-stats at the single-subject level to be accurate, you'd rather deal with one of the REML models. And if you want to do 3dMEMA in the group analysis, which is like the t-test but where you weight each beta weight by its reliability, then you want the reliability measure (which is to say the standard deviation of the value) to be more accurate. But we'll keep it simple and focus on 3dDeconvolve, which uses ordinary least squares, here.
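To make that concrete, here is a minimal, hypothetical sketch of what such a regression might look like on the command line. The file names, stimulus labels, and the 20-second BLOCK duration are assumptions for illustration, not the exact class script; the point is that 3dDeconvolve builds the X matrix from the stimulus timing plus drift terms and solves for the betas with ordinary least squares at every voxel.

    # hypothetical sketch: fit y = X*beta + epsilon with ordinary least squares
    3dDeconvolve -input epi_r1+orig epi_r2+orig epi_r3+orig              \
                 -polort 2                                               \
                 -num_stimts 2                                           \
                 -stim_times 1 stim_Arel.txt 'BLOCK(20,1)' -stim_label 1 Arel \
                 -stim_times 2 stim_Vrel.txt 'BLOCK(20,1)' -stim_label 2 Vrel \
                 -tout -x1D X.xmat.1D                                    \
                 -fitts fitts -errts errts -bucket stats

The beta weights and t-statistics land in the stats bucket, the full model fit goes to fitts, and the residuals (the epsilon term) go to errts.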
So the data that we're looking at is just a very, very simple model. It's actually a localizer task from Mike Beauchamp when he was-- I guess when he was-- he's still down in Texas. But it's a speech perception task. Basically, you have these two types of conditions.
Events for these basically last one second, and you may have repeated events in a block design. So here's Audrey speaking some simple words, like cat, or whatever.
And in one case, the visual aspect of the stimulus is clear, but the auditory aspect is degraded in some way. And in the other case, the visual is degraded and the audio is good. So they're basically the same: they're both audio, they're both visual. But in one case, one aspect is degraded-- not softer or dimmer, say, but just not clear. So definitely, we should see strong visual response to both of these as well as auditory response.
For the experiment design, there were three runs. Each run consisted of 10 randomized blocks. And within one activation block, there are basically 10 trials lasting 20 seconds-- really, 19 seconds.
It's a one-second event, one second off, one-second event, one second off. So really, we're looking at it as if it's 20 seconds-- better to do 19 seconds, but we're not being that picky. But it's really more of an on/off.
But you could model the one-second things repeatedly. You're not going to get anything different. That's too subtle for the data. Plus, these are very strong responses, so you don't have to work too hard.
So we've got these 20-second events, and then 10 seconds off. 20 seconds on, 10 off, and this is a fixation for the off period here. And during a single run, there are five blocks of each condition, pseudo-randomized across the run.
So two anatomical data sets were collected for each subject. And the reason two anatomies were collected is because they also were going to do a surface analysis. So they took two anatomies, registered them, averaged them, and that reduces the noise a little bit. And so maybe the anatomical segmentation and surface creation was perhaps a little cleaner than just with the single one.
Three runs of EPI data were collected with 33 slices and 152 time points per run. So basically, this is the same data we started looking at in AFNI_data6/afni. The TR is 2 seconds, and the voxel dimensions are 2.75 x 2.75 x 3.
Sample size, 10 subjects. You'll never get a study published with 10 normal subjects. But here, this is just a demo, and it's good enough. Because this data is really robust, we get a strong group result with only 10 subjects.
So data quality check-- so at this point, we'll all go to the AFNI directory in that same location and basically do this slide. So you can follow this slide on your own, but I will also do it up here. We'll do the same stuff, so follow along however you prefer to work.
So we'll cd into AFNI_data6/afni, the same directory we were in for the earlier classes. And then we'll just run AFNI there. So first cd to that directory. Then just run afni, and then we'll follow the rest of the slide.
So cd AFNI_data6/afni. And of course, after typing cd, I immediately type ls. And as a reminder, these are the exact same data sets we saw earlier here.
And then we can just run AFNI. So following along with the slide, it says to switch the underlay to EPI run 1. So we'll just look at the first run of EPI data.
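In the terminal, that amounts to something like the following, assuming the class data is unpacked under your current directory:

    cd AFNI_data6/afni   # the class data directory
    ls                   # quick look at the data sets
    afni &               # start the AFNI GUI from here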
Open an axial image window and a graph window, and then pick the ideal. So let's do that. So in AFNI-- so note that I've now hidden a couple of the AFNI image windows. If you just click on the image buttons again, it will raise up those windows. If you lose a window and can't find it, you can actually right-click on this, and it will bring that window to this location.
Not too important here, but once you get a lot of stuff open, it's easy to actually lose a window. In fact, it's easy to lose the main controller. So I'm looking at something here. Which AFNI controller goes with this?
I could have four AFNI controllers open. Hopefully, that will happen to you, because you're looking at a lot of data at once. For example, you can run AFNI and look at 10 subjects at once, and open all of them, if you feel like it.
Anyway, how do you know which controller this goes with? You can right-click on this Disp button, and that will raise up, if I can get on it. Well, let me hide it first. And that just raises it to the top.
So anyway, so switch underlay to EPI run one now. So click on the Underlay button over here on the right side of the controller. And then switch the underlay to EPI run one. We see it has 152 time points.
And open an axial image and graph. I guess I'll leave my sagittal image open, but I'll close the coronal one. So we have an axial image window and a graph. And then let's all jump to the same location.
It says, jump to IJK 26, 72, 4. So earlier with Daniel, you jumped to an XYZ coordinate, if you were all following fast enough with that to get to the same location in the brain. This is similar, but it's jumping to IJK indices.
So you can do either. They'll have the same effect. But remember, IJK, actually, those are the indices into the matrix.
So you've got 0 through-- well, is this 80 by 80? I think 80 by 80 by 33. So 0 through 79, 0 through 79, 0 through 32 are the indices, the index ranges, in the three dimensions. And so we're going to jump to 26, 72, or actually, 26, 72, 4.
So if I right-- so on the image window, we can right-click and say jump to, but we're going to jump to IJK. And actually, there's an IJK underlay and an IJK overlay. They might be different.
The overlay data set is generally re-sampled to match the underlay, but we're not running into that here. But still, we'll just say underlay. So jump to IJK underlay, and now we can type in the indices, space separated: 26 space 72 space 4.
So just a reminder, in the image window there, you may need to click on it to select the window, depending on-- so on my laptop, I just move over a window and it gets selected automatically. So you may have to actually click on the window to choose it, and then right-click to do the jump. And then we can hit Apply or Set.
A little comment about that, you may actually want a mouse for this class. Because we use all three buttons, especially getting later in the class, all three. Well, all three-- I shouldn't say all three. My mouse at work has about 10 buttons on it, which drives me nuts too, because I'm always clicking something that I don't want to click.
But anyway, at least a standard three-button mouse would be helpful, if you have one. But if you know how to deal with it on your laptop, then you're good. My laptop actually has three buttons, so that's nice.
We're all looking at the same location. So one thing that could vary between my display and some of yours is which side we're looking at. If you notice right here, we're on the right side of the brain, but you might be on the left side.
Again, as Daniel mentioned, this is showing left equals left here, but that's not actually the default in AFNI. The default is left equals right, because Bob succumbed to the radiologists, and that's how he made the default, for the radiologists. So usually, the left side of the image is the right side of the brain. But in our .afnirc files, we specified to have left on the left side.
So you may have a mirrored image, depending on that. But hopefully, everyone has the same graph time series up here. Now, getting over to the quality check, there are a couple of aspects to note. I'll just start with the most visually clear ones in the data.
If you notice, again, at the beginning of the time series, the red dot is way up here. But for most of the data, it hovers down here. That's that pre-steady state data issue. At the very beginning of the run before the magnetization reaches a steady state, the signal starts off high, and then it stabilizes.
We actually don't have the pre-steady state data. That's why we only have two TRs that we end up deleting. So we had to fake this. But anyway, it looks like a spike at the beginning, so we added the spike at the beginning of the data.
But then what's with this spike?
AUDIENCE: Motion.
RICK REYNOLDS: Yeah. That's a motion spike. How would you evaluate that? Probably by looking at the sagittal image.
But so if we click on that to put our red dot around there in time, we can use the left and right arrow to go back and forward in time and see if the sagittal image shows a head motion. So click around that. I can actually-- I actually happened to just hit the spike. But let's look at a sagittal image and bounce back and forth in time.
And, yeah, so we see the head rotating. And basically, that's the only spike in our-- the apparent motion spike in the data. This subject is annoyingly good for class demonstration. They didn't move enough. We had to fake this too; we actually inserted a 2-degree rotation into the data.
So it's curses. It's hard to get good data. Subjects don't move enough. So anyway, that explains a couple of the spikes in the data.
Clearly-- oh, we didn't do the ideal. So let's pick the ideal time series. And to do that, there's this FIM menu in the lower right corner, FIM meaning functional imaging. FIM was used a lot in the early days, so this is a remnant of that.
So if we click on FIM, we can pick our ideal. So left-click on the FIM button, and then click the top button that says Pick Ideal. And then we can use the EPI run one ideal. And then hit a Set or Apply.
Remember, we had two conditions, audio reliable and visual reliable, and they alternated. Here, we just see one result. We put them together in one regressor just for this very first part of looking at the data. But when we do the real analysis, we'll have one regressor per condition-- each one wouldn't have all the bumps in there. It would have half of them.
So certainly, this voxel looks like it's responding to our visual stimuli. So you've seen all this before. Let me make a comment.
So there are these graph keystroke shortcuts: automatic scaling, video mode, and m/M for the matrix size. Let me just make a comment: you don't have to remember all of these little graph shortcuts.
For the matrix size, we can click on this Opt button in the lower right corner for options. And you get a whole submenu here. And the first one is scaling. That's what we'll end up doing too.
In the brackets there are the shortcut keystrokes for these various operations. So typically, we'd like to use the capital A for autoscaling. So as we jump around amongst the voxels, the graph is rescaled every time.
By default-- actually, I don't remember what the default is right now. Maybe I'll bounce around a little and check. Yeah, so it's getting small. So it's not rescaled-- well, it's hard to say, because there are spikes in the data. Spikes affect the scaling. So it doesn't look good.
But if you set this to use the capital A-- and I'll just show you. If I type a capital A and look at my terminal window, now autoscale is forced off. So clearly, it's on to begin with.
I don't know when we change that. And if I type capital A again, it's a toggle now. It's on. Anyway, that's a convenient thing.
And I'll go back to an image window, and right click, and jump to IJK. And it still knows the-- it still remembers the numbers. So I can easily jump back to where I'm supposed to be.
And the other one, that matrix thing, if you don't remember typing upper or lower m, you can find it in the menu here. It shows the lowercase and capital M there to go up and down in the matrix size-- so you see more voxels or fewer. It sometimes is nice to show many voxels in the time series window at once. You actually will sometimes see patterns in that, which suggest either motion or scanner artifacts, or you can even see the brain contours. Though, that's a little less interesting, given that we can see brain contours down here better.
So preparing for data analysis, here you'll have to stop me if I babble too long. I can always quit at the break. That'll force me. Excellent.
So what pre-processing do you do before-- does one run before the linear regression? So a long time ago, almost nothing was done, including-- registration was perhaps the first thing. And registration was initially done just slice by slice.
So in my-- so back in those golden, or bronze, or tin days-- I don't know what we call them-- you usually did not acquire axial slices. Because if you want to register slices together in the subject, this is the most natural subject motion, at least for humans in the scanner. So when you do this, your axial slices are all over, and registering this slice to this slice makes no sense whatsoever.
So back then, you were more commonly acquiring sagittal slices. So now, if the head rotates, at least they're staying in the same slice, for the most part. They can still move a little bit, but they do that less. And now, at least slice-based registration does something useful.
Nowadays, we don't usually bother with the individual slices. Though, you could. But we generally just assume a rigid body. And if the subject does this during a slice, they're screwing up the data. We're not going to recover from that anyway. We'll probably censor it if it's a big motion as well.
Anyway, so over time, more and more pre-processing steps were added. We should deal with motion. Maybe we can deal with outliers, temporal outliers, in the data through despiking, or time shifting, volume registration, masking, blurring. Blurring was just chosen to be done at some point in time. Scaling the data was just chosen to be done.
These are all steps that are incrementally added on to processing. But let's just go through them to some degree. And note, that when we talk about basically any steps we perform in the analysis, a question people ask a lot or expect an answer to a lot is, so what's the best way to do this? We don't know. We don't know what the best way is.
We don't have a measure to tell you what the best way is. So all you can do is understand what you're doing to a reasonable degree and decide if you feel this is good, or if you feel something else should be done, and you can justify it. So every step in here, people will-- you can bicker with people for hours or weeks about, but you have options.
So first of all, you might look for outliers. Let's define an outlier. For a single voxel time series, you've got some trend, some drift going on, and maybe you've got some spikes in the data. The spikes are going to be outliers.
How do we define an outlier? Well, you take the trend, and then you measure the absolute deviation from the trend at each time point. And then you compute the median-- 3dToutcount or 3dDespike will compute the median absolute deviation from the trend.
And then based on that, go back through the data. And then at each time point, how many median absolute deviations am I from the trend? If you're many of them away, you'll count it as a spike. If you're not, you won't.
So a person who moves more, their median absolute deviation will be higher, right? So what counts as a spike is relative to the actual data. So 3dToutcount will tell you which time points seem to be outliers for each voxel.
And if you look at that in the other direction, at each time point you can determine how many voxels within the brain are outliers, which is to say, what fraction of the brain do I have as outliers at each time point? So in afni_proc.py, for example, you can use that measure for censoring. That's actually a very good metric for determining when the subject moves.
If they have a lot of outliers, they probably moved. And this is actually a much cleaner measure of motion than the volume registration parameters give us. Some people use it, some people don't, but it's an option.
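A minimal sketch of that outlier count, with hypothetical file names; -fraction reports the fraction of (automasked) voxels that are outliers at each time point, which is the per-TR measure that can drive censoring:

    # fraction of brain voxels flagged as outliers at each TR
    3dToutcount -automask -fraction -polort 3 -legendre \
                epi_r1+orig > outcount_r1.1D

    # plot it; TRs with a large fraction are candidates for censoring
    1dplot outcount_r1.1D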
3dDespike will hammer those spikes down for you. It will hammer them down. Anything between 2 median absolute deviations and infinity is hammered down to between 2 and 4. So spikes are still spikes, but they're just small ones.
Why do we care so much about spikes? Well, a lot of the-- even the pre-processing steps, and certainly, the linear regression, you do things with minimizing the sums of squares of differences. So a spike is a killer. Spikes are very detrimental.
Because if you're dealing with sums of squares, a spike gets squared before its effect is applied to a time series, or a volume, or something. So spikes really affect all the analysis steps. So sometimes reducing them ahead of time makes things a little more robust, say.
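If you do choose to despike, a minimal sketch (hypothetical names) looks like this; the program attenuates the spikes rather than removing time points:

    # squash large deviations from the fitted trend; spikes remain, but small
    3dDespike -NEW -nomask -prefix epi_r1_despike epi_r1+orig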
Then you might do time shifting with 3dTshift. Oh, I can select this, huh? With 3dTshift, remember, you're acquiring perhaps slices 0, 2, 4, 6, 8, and then 1, 3, 5, 7, 9 in an interleaved manner, but that means you're going to create regressors based on having a stimulus at time 17.3 and another stimulus at time 41.6.
Within one volume, this voxel versus this voxel are basically a whole TR apart. This voxel versus a neighboring slice voxel is half a TR apart. So maybe you want to adjust the timing so that it's as if the whole volume were collected at the beginning of the TR. And then the timing for the volumes would match your regressors more precisely. Or you could shift to the middle of the TR-- whatever suits you.
But the way we typically do things with afni_proc.py is to use 3dTshift to temporally interpolate the data to the beginning of the TR. So you've got your data going on. You're going to do a temporal interpolation to shift it back by 0.7 seconds so that time points are at the beginnings of the TRs. Some people do that, some people don't.
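A minimal sketch of that, with hypothetical data set names; -tzero 0 interpolates every slice's time series as if the whole volume had been acquired at the start of the TR:

    # temporally interpolate each slice to the beginning of the TR
    3dTshift -tzero 0 -quintic -prefix epi_r1_tshift epi_r1_despike+orig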
AUDIENCE: Do we have to do something special for 3dTshift to handle multiband?
RICK REYNOLDS: Not really. Multiband acquisitions, I haven't noticed anything special about multiband acquisitions in terms of the analysis, except for the fact that it has a faster or shorter TR. So you're acquiring data in a shorter period of time, possibly multiple slices at once here.
As long as you know when you acquired each slice, you can still use 3dTshift to shift them appropriately to the beginning of the TR. The faster acquisitions might have a bigger impact on whether you do band passing or something, which I think is a bad idea in any case. But if your TR is shorter, then band passing both makes more sense and much, much less sense, at the same time. So I'll whine about that later.
So after those steps, you might go into a registration phase here. 3dvolreg is what we use to register EPI volumes to one base EPI image. We might actually use the outliers from the first step to determine a time point that clearly doesn't seem to have any motion in it, because the outlier fractions are very small there. So if we use that as a volume registration base, since the subject did not move, then it should be a fairly robust one.
If you happen to choose some random time point, like even the very first one or what have you, then the subject could have moved during that volume acquisition. And now, you're aligning everything to a bad volume, and that can't be good. You align all the EPI data together, and then you might run the suspiciously named align_epi_anat.py to align the EPI and anatomical data together.
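A hedged sketch of those two steps, with hypothetical file names and a hand-picked base sub-brick (in practice you might pick the low-outlier volume found earlier):

    # rigid-body registration of all EPI volumes to one base volume
    3dvolreg -base epi_r1_tshift+orig'[2]' -1Dfile dfile_r1.1D \
             -prefix epi_r1_volreg epi_r1_tshift+orig

    # align the anatomical volume to the registered EPI data
    align_epi_anat.py -anat2epi -anat anat+orig \
                      -epi epi_r1_volreg+orig -epi_base 0

The -1Dfile output holds the six rigid-body motion parameters per TR, which later become the motion regressors and can also drive censoring.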
In the tin days, we weren't good at this. We've only gotten good at this, what was it, five years ago perhaps, six years ago. My sense of time is not--
AUDIENCE: Six, seven.
RICK REYNOLDS: Seven? I was within the decade, so that's not bad. We didn't used to be good at that. But now, we're pretty good at it.
So in the earlier days, how did we align the EPI and the anat? Well, you acquire the anatomical volume, and then you acquire your EPI data. And you align all the EPI data to the first EPI time point, maybe after steady state or not. Either way possibly, but probably afterwards.
And how did you align the EPI data to the anatomy? Please don't move, please don't move, please don't move. And then you're done.
So what happens if the subject moves in there? Tough luck. You might try to manually align the anatomy to the EPI. We have a plugin for doing that.
How good are we, as humans, at doing this three-dimensional registration and finding a good one? We're not very good. We can come-- we won't fail miserably. We won't have a gross failure, but we're not going to do a good job.
But anyway, so now we're pretty good at that. So we don't have to choose an EPI volume that's temporally close to the anatomical one. So even in some cases, you won't bother to acquire an anatomical volume for a scan. Like if the subject was scanned in the morning and we got an anatomy, and now they're scanned again in the afternoon, we might not bother with the time it takes to acquire a new T1, because we can align them. Then you might register your anatomical data set to your template, whatever template you're using.
[INAUDIBLE] is the program we currently use for affine registration, affine being the shifts and rotations plus scaling plus shearing. That's a shear. So these affine registrations will account for those types of transformations.
Or there's auto_warp.py, which will do a nonlinear registration. Our version takes the data set and breaks it up into smaller and smaller boxes. It registers all these boxes, and then it goes finer and registers all those boxes. They overlap, and then the boxes get smaller and smaller.
So Daniel will chat about that. That's 3dQwarp. But that's the nonlinear registration. So you can choose either of these.
Nonlinear registration, if it works well, it's harder to do well. Well, I should say it's easier to have subtle mishaps in nonlinear registration, because every part of the brain is basically registered somewhat independently. So you can have weird things happening here when it's nice over here, and you have to look harder for issues. But typically, you get much better group results with nonlinear registration, because you have better anatomical correspondence across subjects.
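A hedged sketch of the nonlinear step, assuming the anatomy has already been affinely aligned to the template (file names hypothetical); auto_warp.py wraps this kind of 3dQwarp call:

    # refine an affine-aligned anatomy with a nonlinear warp to the template
    3dQwarp -base template+tlrc -source anat_affine+tlrc \
            -prefix anat_warped -blur 0 3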
Now we've done these multiple registration steps, and there are other registration steps we might do too-- distortion correction maybe with multiple phase encoding. There are a lot of registration steps you could really apply. But what we typically do in afni_proc.py is concatenate our registration transformations.
So we don't do this one, then this one, then this one, then this one. And now we've resampled our EPI data four times. And it gets blurrier and blurrier and blurrier with every resample. So what do we do?
We concatenate these transformations together, and then it's only applied one time to the EPI data. And you just get the one resampling, and your data is more likely closer to the original resolution, less blurry than it would otherwise be. Then you blur the data. I'll just mention that briefly too.
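Before getting to blurring, here is a hedged sketch of that concatenation idea for the affine pieces, with hypothetical matrix file names (check cat_matvec -help for the composition order). The point is that the EPI data are interpolated only once:

    # combine the per-TR volreg, EPI-to-anat, and anat-to-template matrices
    cat_matvec -ONELINE anat2template.aff12.1D epi2anat.aff12.1D \
               volreg_r1.aff12.1D > epi2template_r1.aff12.1D

    # apply the single combined transformation to the (tshifted) EPI data
    3dAllineate -1Dmatrix_apply epi2template_r1.aff12.1D \
                -master template+tlrc -input epi_r1_tshift+orig \
                -prefix epi_r1_volreg_tlrc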
Why would you intentionally blur the data? Two reasons. One is basically that one simple aspect is noise cancellation. So you've got voxels that are near their neighbors in space.
For the most part, we hope that they have similar BOLD signals. At least in areas that are responsive to our tasks of interest, you expect similar BOLD signals. So what happens when you blur them a little bit together? Well, you hope that the signals reinforce each other and that the noise cancels.
If you average white noise, pretending and hoping this is white noise that we have here, that should go towards 0 as you average it. So that's one reason to blur. The other reason to blur is the spatial variance in anatomical structure across subjects.
So say in two subjects, or in many subjects, you've got responses in this exact same lobe. And so this subject has a response here, this subject has a response here, but the lobes aren't perfectly overlapping. So blob, blob, blob-- what happens when you blur? You make this blob bigger, this blob bigger, this blob bigger, and now you get better overlap across your subjects and a better group result.
That was more important when we did affine registration to the template. Now that we're getting better registration through nonlinear methods, you get better anatomical correspondence across your subjects, and that reduces the need to blur for that overlap reason. So that's beneficial.
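A minimal sketch, with a hypothetical 4 mm kernel and data set names; the blur is applied after the registration steps:

    # spatially smooth each volume with a 4 mm FWHM Gaussian
    3dmerge -1blur_fwhm 4 -doall -prefix epi_r1_blur epi_r1_volreg_tlrc+tlrc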
For masking, you can create masks with 3dAutomask. We don't actually generally recommend applying any mask to the EPI data at the first-level analysis time. We'd rather see the whole result, including if you have blobs of so-called activation out in the middle of nowhere.
You want to understand this. If you see something, you want to know, is it just a ghosting artifact? Or is it something I should be more concerned about? In what case might you be concerned?
For example, some people did a full analysis. They had their pretty pictures. They were trying to write up their result and submit it as a paper, getting close to that point, but they just weren't quite sure about why the activation, say, was in the areas that it was in. So in this case, they sent their data to Bob. I don't remember how many of us looked at it at the time.
But they sent their data to Bob and asked him to look at it, and he just ran the analysis in AFNI. And lo and behold, it was quite clear they had activation all over, all outside the brain, like following the contours of the brain. But they couldn't tell that, because they were doing a masking step.
So in what case would you see activation along the contours of the brain? That's just motion. Almost always, it's just motion that's correlated to your stimuli or to your ideal curve, say.
So then you get what looks like activation all over, but that has nothing to do with blood flow or anything like that. And you're about to publish a paper on this. So instead of having to retract anything, you'd rather look at the full result and see, oh, this is a motion issue.
What can I do about that? We don't really mean to be all doom talking here, but it's good to understand what types of problems you'll run into rather than just run a simple analysis and look at the pretty results. So the goal is to understand things well enough to detect what seems to be an anomaly that you might have to deal with.
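So the usual advice is to compute a brain mask but hold off on applying it until the group level. A minimal sketch with hypothetical names:

    # build a brain mask from the EPI data, but do not apply it yet;
    # keep it around for masking at group-analysis time
    3dAutomask -dilate 1 -prefix full_mask epi_r1_blur+tlrc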
And then the last pre-processing step is temporal mean scaling. At least that's what we do. We scale every voxel to a mean of 100.
And again, we would perhaps rather scale it so the baseline is 100, which isn't very hard to do. We get baseline terms out of the linear regression. But it's just a little bit harder and, basically, unnecessary. Because even in a voxel that is very responsive to your stimulus conditions, a big percent change is like 3%. That's a big signal change due to the BOLD response.
And so the mean of that voxel could be, say, 1.5% more than the baseline. So the pure mean is 1.5% above the baseline. Is that concerning? No.
That would change a beta weight of 1 to a beta weight of 1.015. So it's a 1.5% change in the beta weight-- not additive, but of the magnitude itself. So basically, you'd never notice this. So it's not worth the little effort.
But you could. We demean our motion parameters for that reason. And so I think basically all of our-- I think most of our regressors of no interest would be demeaned, in which case the baseline is reasonably estimated. But again, if you make a mistake with that, it's not worth it, not worth the concern.
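A minimal sketch of that mean scaling, done per run with hypothetical names; after this, a beta weight of 1 corresponds to roughly a 1% signal change:

    # voxel-wise mean of the run
    3dTstat -prefix mean_r1 epi_r1_blur+tlrc

    # scale each voxel's time series to a mean of 100
    3dcalc -a epi_r1_blur+tlrc -b mean_r1+tlrc \
           -expr '100 * a/b' -prefix epi_r1_scale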
AUDIENCE: [INAUDIBLE] the spatial normalization, is it doing each slice? Or it's 3D?
RICK REYNOLDS: All of the normalization steps that we do are basically 3D transformations.
AUDIENCE: [INAUDIBLE]?
RICK REYNOLDS: Yeah.
AUDIENCE: So the old one-- the old-- I mean, there is old ones--
[INTERPOSING VOICES]
RICK REYNOLDS: 2dImReg?
AUDIENCE: Yeah. Those ones, you can choose somewhere on each slice.
RICK REYNOLDS: Yeah. 2dImReg is a slice-wise registration.
AUDIENCE: So if you use the other one, you can only do-- can we choose to do each slice like with 2D or just 3D?
RICK REYNOLDS: It depends. So if you're collecting data where you expect it to be fairly rigid and not to move out of plane, it would not be unreasonable to run the 2dImReg on that. But it's a little dangerous.
Because I don't know which slice direction you would go after to expect whatever subjects, or whomever you're scanning, not to move out of plane. Because moving out of plane is a problem for them. But even if they don't move out of plane, some of the slices-- like suppose one slice is more or less circular. How do you align two slices?
Well, in the same place. But you could rotate this one 117 degrees and it might still align. And so you have to worry about things like that when you're doing slice-wise registration, even if they don't move out of plane. So there are things to worry about.
But you can do 2D registration, and you can even do 3D registration after that. But it's easy to have little problems with the 2D.
AUDIENCE: Yeah. But just sometimes with the [INAUDIBLE]. And you know they are not moving. But animals vary in their size, so the brains vary. So some are more-- actually, they look small. But actually, they didn't move the other-- the [INAUDIBLE] wise. So if I do 3D, it just shifted to another slice. So it's moving on the [INAUDIBLE]. It's sort of [INAUDIBLE].
RICK REYNOLDS: Yeah.
AUDIENCE: So that's why I think it would be good to have a choice of either 3D or 2D.
RICK REYNOLDS: And you can do both. Some people have done that. But you just have to be careful not to have something go wrong, because it's easy. Other questions about this?
AUDIENCE: If I'm studying a stroke patient, do I have an extra step to implement lesion [INAUDIBLE]?
RICK REYNOLDS: That's certainly harder if you've got-- if you have missing pieces of a brain or something like that. The way we'd expect to work best, though we haven't spent that much time evaluating these things, is that you can actually apply a mask to tell some of the programs to ignore this area of the brain, then perform the registration and just pull this area along with whatever else is done. So sometimes that is helpful if this one area is going to distort the results in some way. That might apply to a pre- and post-registration, if you have such data. I don't know if you do.
But even to a template, it's a similar thing. You may want to mask out any part of the brain that you think would distort the result in a bad way.
AUDIENCE: Are you aware of any code doing that in AFNI?
RICK REYNOLDS: People have done that. I don't know how successful it has been. Daniel probably knows more.
AUDIENCE: We have some scripts that we wrote for stroke alignment. And one of the problems with the stroke lesions is that they're so large that virtually anything that works with nonlinear alignment just won't work. It won't work on the stroke lesion side. On the non-lesion side, everything works.
So one solution we had for it was [INAUDIBLE] alignment. It just [INAUDIBLE]. Because we made a perfect brain by mirroring the non-lesion side to the lesion side. And then [INAUDIBLE] registration [INAUDIBLE].
RICK REYNOLDS: So let's take a 10-minute break and then finish up afterwards.