05 - Individual Subject Analysis: Part 1 of 2
Date Posted:
August 3, 2018
Date Recorded:
May 28, 2018
Speaker(s):
Gang Chen
All Captioned Videos AFNI Training Bootcamp
Description:
Gang Chen, NIMH
Related documents:
For more information and course materials, please visit the workshop website: http://cbmm.mit.edu/afni
We recommend viewing the videos at 1920 x 1080 (Full HD) resolution for the best experience. A lot of code and text is displayed that may be hard to read otherwise.
PRESENTER: So for this talk, the slides-- the file you can find in the AFNI handouts directory. It's called AFNI 22; that's the PDF file. So this talk is pretty much focused on individual subject analysis. This is the layout of the talk. First, for probably 20 minutes or so, we'll talk about some basics of regression analysis. Then we'll switch to the basic model for the individual subject, which is a pretty typical time series regression.
So for that model, we usually write it in this matrix-vector format. On the left-hand side, we have the data from the scanner after some preprocessing steps. On the right-hand side, we have the design matrix multiplied by the effect estimates, plus the residuals. We're going to focus a lot, spend a lot of time, on the design matrix. We call it a design matrix because usually this is not observational data: you design the experiment. You want to get something out of the data, so you design an experiment. That's why the term design matrix.
Within that design matrix, you have multiple columns. Some of the columns are specifically about your experiment, so those are your tasks or conditions. Some of the columns are of no interest. You don't care about them, but you do want to put them in the model to explain some of the effects from those regressors. So that's the design matrix.
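As a rough illustration of that layout (a toy sketch in Python/NumPy, not AFNI's actual implementation; all sizes and column choices here are made up), such a design matrix could be assembled column by column like this:

    import numpy as np

    rng = np.random.default_rng(0)
    T = 200                                   # number of time points (assumed for illustration)
    t = np.arange(T)

    baseline = np.ones(T)                     # intercept column (no interest)
    drift = t / (T - 1)                       # slow linear drift (no interest)
    motion = 0.1 * rng.standard_normal((T, 6))  # stand-in for six head-motion parameters (no interest)
    task = np.zeros(T)
    task[::20] = 1.0                          # stand-in for one task regressor (of interest);
                                              # in practice this would be convolved with a response curve

    # Regressors of no interest first, then the column(s) of interest
    X = np.column_stack([baseline, drift, motion, task])
    print(X.shape)                            # (200, 9)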
Then the betas are pretty straightforward. Those are what we care about; they show how much effect you expect to see in the brain. Lastly, there is the residual part. The residual part we don't care about, but we do need to be a little bit careful about it, so we'll also spend a little time talking about the residuals. In the conventional regression model, we assume the residuals are white noise; that means there is no serial structure, no temporal correlation, for example. But for FMRI that is a questionable assumption. Usually it does not hold well, so we'll say something special about that part.
So the next thing is the part of the design we don't care about. We still break it into three separate components. One is the baseline. One is the slow drift, or, as some people call it, the low frequency components. Then there is other stuff, usually things like head motion parameters. We try as much as possible to correct for head motion, but usually it's not perfect, so we still need to consider incorporating some of those head motion parameters into the model.
In addition to that, there is something a bit AFNI-specific: you may want to censor some time points if you consider those time points affected a little too much, for example by head motion or scanner irregularities. So that's called the censoring part. So those are the different types of regressors of no interest.
When we talk about the regressors we care about, even for those there are different approaches to how you model the tasks or conditions. That's a little bit tricky. So for the effect estimates there are three categories. One, the big one, the popular one, probably 99% of the time: people just make an assumption about the shape of the response. You use whatever is available in the software and assume that response curve. It's the same everywhere in the brain, it's the same for everybody, for all the subjects, regardless of the tasks, regardless of how many trials you have. You always assume it's that one curve. So that's a big assumption; it's called a fixed-shape approach.
Then there is an approach that is a bit more flexible; SPM largely adopts this approach. You have one curve, which they call the canonical curve, plus up to two additional ones. Sometimes people use one extra, which is the temporal derivative. Sometimes you use a third one, which is the dispersion curve. So we're going to talk about that middle approach. Lastly, we'll talk about the most flexible approach, in which you don't make any assumption about the response curve, don't make any assumption about the shape. You use the data to estimate the shape. That is the most flexible, but it's also the most challenging one. People don't use it much because, in the end, people don't know how to do group analysis with it. So that's the tricky part. We'll also talk about that.
Then, if we still have some extra time, we'll talk about some miscellaneous aspects, practical issues like: you have multiple runs, how do you handle that? That's something AFNI handles slightly differently from the other software packages. Then lastly, there is the concept of percent signal change. That's how you normalize your data, or scale your data. That's also a little bit unique to AFNI, because in the end it matters how you interpret the beta values. It's not just the interpretation; from a conceptual perspective and also from a modeling perspective, you do want your betas comparable across different subjects and across brain regions.
Otherwise, if the betas are not really interpretable, why would you take them to group analysis? So that's something we'll talk about. That's basically the big picture for this talk. So let's start with some basics. Pretty much everybody here has analyzed FMRI data and has already learned some statistics. The basics: the regression model goes by different terms. Some people call it a linear model, some people call it a general linear model.
There are some subtle differences across those terms, but let's just assume everybody is talking about the same thing. Traditionally, the regression model is as simple as this: you have y, which is the data we record; traditionally, people call it the dependent variable. And x is called the independent variable. I think nowadays those terms are fading away, because those terms imply some causal relationship, and that's not necessarily the case.
So these days we tend to call the y an outcome variable, or a response variable, and the x we usually call an explanatory variable. The simplest regression model, or linear model, is just this: we have one explanatory variable, and we look at how much x can explain the variability in y. So that's the simplest model. On the right-hand side it shows such a simple case: we have some pairs of x and y, and we try to fit them with a straight line. That's why it's called a linear model.
That's pretty straightforward. But let's just repeat the idea: each straight line needs two parameters. One is the intercept, the other one is the slope. The intercept we're going to come back to, especially at the group level. At the individual subject level, that intercept is usually the baseline. We don't care about it, but we do need to put it in the model. The slope, in a general linear model, shows the marginal effect: when x increases by one unit, what's the amount of change in y?
But for FMRI, the model we deal with is a different framework, in the sense that the x and y are not just simple pairs. The x especially has some structure: the x's are our regressors. They are something we put in based on your experimental design, at which moment, for example every 10 seconds, you ask the subject to do something. So there is some pattern there; usually we call them regressors. That's why we write it in this vector and matrix format; that's probably the better way to describe the FMRI framework. The simplest case, of course, is just the intercept plus one experimental regressor. For FMRI, our model is way more complicated than that. So we'll come back to that later, how to formulate the design matrix.
But once we have that model, we solve it. The typical solution is called ordinary least squares. Basically, you search over the possible betas; you find the betas that make the residuals as small as possible. That's why it's called ordinary least squares.
Squares: that means we don't just add up the residuals, we take their squares. Why do we take the squares? Because some of them are positive and some of them are negative. So that's why we take the squares; that's why it's called ordinary least squares. And least means there is an optimization, which is to minimize the sum of the squares.
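As a small sketch of what that means in practice (a toy example with made-up data, not the solver AFNI uses internally):

    import numpy as np

    rng = np.random.default_rng(0)
    T = 100
    X = np.column_stack([np.ones(T), rng.standard_normal(T)])   # intercept + one regressor
    beta_true = np.array([2.0, 0.5])
    y = X @ beta_true + 0.3 * rng.standard_normal(T)

    # Ordinary least squares: find the betas minimizing sum((y - X @ beta)**2)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

    # Same thing in closed form: (X'X)^{-1} X'y
    beta_closed = np.linalg.solve(X.T @ X, X.T @ y)

    residuals = y - X @ beta_hat
    print(beta_hat, residuals @ residuals)    # estimates and the minimized sum of squares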
Once we have solved it, we have estimates of the beta values. Geometrically speaking, it's pretty much what people call a projection. We have the data; the data is this arrow, so that's your y. What we can explain lies in the span of the design matrix. The data sits in a higher dimension; in this illustration, it's three dimensions.
What we can explain, this blue plane, is two-dimensional, so we project a three-dimensional object onto this two-dimensional plane. That projection is the fitted part, and the difference between the two is the residual part. And what is this other part? This part is the mean of y times the vector of ones; that's basically the baseline, I mean the intercept part. So that's just a different perspective.
Once we solve it, for FMRI we don't just estimate the betas; we also want to see how confident we are, how much evidence we can see in a particular brain region, so that we can say with some confidence whether this region is activated or not. That's one possibility. Or we can compare two conditions: say you have a house image versus a face image, and in one particular region, maybe a house-dominant region, we want to check the difference between the two. So usually we form a contrast.
Occasionally you may have a more sophisticated case, where you make a linear combination of multiple beta values. Those are typically tested with t-tests. Occasionally we may do something even more complicated, like asking whether any of them is nonzero, that is, testing whether they are all equal to zero. Those are what we usually call composite null hypotheses. We have to resort to an F-test, Fisher's F-test, to test their significance.
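To make the contrast and t-test idea concrete, here is a minimal sketch using the standard textbook formulas (my own illustrative function, not AFNI's code; the column layout in the example comment is assumed):

    import numpy as np
    from scipy import stats

    def contrast_t(X, y, c):
        """t-statistic and two-sided p-value for the null hypothesis c @ beta == 0."""
        beta = np.linalg.solve(X.T @ X, X.T @ y)
        resid = y - X @ beta
        df = X.shape[0] - np.linalg.matrix_rank(X)
        sigma2 = resid @ resid / df                          # residual variance estimate
        var_c = sigma2 * (c @ np.linalg.solve(X.T @ X, c))   # variance of c @ beta_hat
        t = (c @ beta) / np.sqrt(var_c)
        return t, 2 * stats.t.sf(abs(t), df)

    # Example: if the columns of X are [baseline, house, face], then
    # c = np.array([0.0, 1.0, -1.0]) tests the house-minus-face contrast.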
In the regression model there is also the concept of an omnibus test, which tests a bundle of regressors or all the regressors; it depends on the context. So that's basically the scenario you may encounter when you perform statistical testing. This is the traditional approach people call null hypothesis significance testing, NHST.
So you make a strawman in order to attack that strawman. Nobody cares about the null hypothesis; if your effect is zero, nobody cares about it. But in traditional statistics we put up that strawman to attack it: pretend that nothing is going on, then see whether, based on the data, we can say that the null is not really reasonable. That evidence is usually a statistic like a t or an F.
If the t is high, then we say, oh, the null is not reasonable, so we reject that null hypothesis, and we take the alternative, which is called the alternative hypothesis. That's where the concept of a p-value comes in. The p is associated with the so-called significance level. If we say the significance level is 0.05: why that magical number, 0.05? Just because, probably in the 1920s or '30s, Fisher basically decided at that point to say 1 out of 20. So that's why that magic number. Nowadays we're stuck with that; everybody has to pass that threshold. We'll come back to that later on, during the group analysis.
So that's the big picture: you have the model, you solve it, and you perform some statistical testing. For FMRI, the y is a little bit different. It's not just some random collection of data points; it's called a time series. That means there is a serial order, they're sequential. So that's a time series.
That's why we sometimes call it time series regression, to emphasize that there is a sequential or serial order among those data points. It's not just about the data we collect from the scanner; this also has an impact on the residuals, because the residuals may also have some temporal structure. That's why the traditional approach of ordinary least squares is not good enough. So that's basically the issue; we need to be a little bit careful.
So we solve it. But remember, we will not solve just one model; we'll have many models, as many as the number of voxels in the brain. So suppose we have 200,000 voxels in the brain: we'll have to solve the same model 200,000 times. By the same model I mean that, on the right-hand side, the design matrix is the same. But on the left-hand side, the y is different across voxels, so of course, in the end, the betas are different.
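Because the design matrix is shared across voxels, all of those fits can be done in one call; a sketch with toy dimensions (nothing here is AFNI code):

    import numpy as np

    rng = np.random.default_rng(1)
    T, V = 200, 10_000                        # time points, voxels (a real brain has ~200,000)
    X = np.column_stack([np.ones(T), np.linspace(-1, 1, T)])   # the same X for every voxel
    Y = rng.standard_normal((T, V))           # each column is one voxel's time series

    # One least-squares call fits every voxel at once
    betas = np.linalg.lstsq(X, Y, rcond=None)[0]
    print(betas.shape)                        # (2, 10000): one set of betas per voxel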
So that's some background. Everybody is familiar with FMRI experimental design. When you design an experiment, you recruit some number of subjects; that's the sample size at the group level. At the individual level, you design the experiment, and you know how many tasks you put in the model.
So with house versus face, you have two conditions or two tasks. Each task you're going to repeat some number of times; that's called the number of trials. So that's the sample size per condition or per task, at the individual level. Then for each task and each trial, there is the question of whether it's a block design or an event-related design.
The differentiation between those is just arbitrary. People say event-related design simply because the duration of each trial is about 1 TR or less. If it's longer than that, you call it a block design. That's just for convenience when people describe their design. Sometimes you have a mixture: you have a cue shown to the subject to tell them something is upcoming. The cue may be just 500 milliseconds or even shorter, but then the task may last a few seconds or even longer. So you could have a mixture of events and blocks.
There are a bunch of other terms I'm going to skip. But lastly I want to mention the concepts of runs and sessions. Runs: why different runs? Usually the subject cannot stay in the scanner for too long at a stretch; for example, one hour is just too much. So you will sometimes just stop the scanning, but the subject remains in the scanner. Just stop the scanning, take a few minutes' break, then start again. That's called multiple runs.
Sessions: that's usually when you stop the scanning, the subject gets out, and then comes back later, the next day, or even a few days or a few months later. That's called different sessions. Block versus event-related design, I already talked about that difference. Once we have the model, we have the data from the scanner after preprocessing. Then basically we break it into multiple components: we have the signal part and the noise part.
So for the signal part, I already mentioned that you have the baseline, you have the slow drift, then a few regressors which are usually the head motion parameters, or the censored time points, and then probably the multiple conditions or tasks. That depends on how we model it. So that's basically the structure of the model.
So this is one specific model, which I believe Daniel or Rick showed you earlier this morning. This is at some voxel in the brain here. The black curve is the original data. The red one is something we expect to see in the data. The blue one is where we use the red curve to try to match the black one, so the blue line is our best fit.
So this shows how noisy FMRI data is; it's really a little bit hard to see. This is a block design, so it's a little better than an event-related design, but it's still difficult to see the pattern. Also, you'll notice that there is some drift: not just the up and down of the block design pattern, there is also a downward drift. That's a part we also need to take care of. So in this case, it's 20 seconds on, 20 seconds off.
Even with this we make quite a number of assumptions. First of all, that during each block the BOLD signal really stays like this, at a plateau. But that's an assumption. In reality, whether that's true or not, I doubt it. Even when you sit in a classroom, you cannot hold your attention the whole time like a flat line.
So we know there is a habituation effect, fatigue. But anyway, as a first assumption, we assume that plateau. It's a largely practical, reasonable assumption, and we go with it. So that's one. We also assume that all trials are the same; that's another assumption. Those are the two big assumptions. But anyway, that's the simplest model we can try. So that's block design.
This is the event-related design. The upper one is presumably a voxel which is responding to the task. The black curve is again the data; the orange and the red lines are something we expect to see. The blue one is the best fit, the best match we could get.
So this voxel presumably is activated, versus the one down below, which is a voxel that is not really activated. Simply comparing the black curves, it's really difficult to see which one is activated or not. That's because, first of all, with an event-related design each trial lasts a really short period of time, so it's hard to see the pattern. Part of it is also that the pattern itself is not regular, because when we design an experiment, we usually don't do it this way, with a regular on and off; the design is randomized.
Usually you want to randomize those trials for two reasons. One reason is that we want to avoid a confounding effect with the slow drift: because there is slow drift, we want to avoid a regular pattern. That's one reason. The other reason is that we want to avoid anticipation from the subject. If the subject knows that every 10 seconds they need to do something, that's anticipation, and we want to avoid it. So those are the two major reasons why, with an event-related design, we don't have a pretty, regular pattern for the idealized response.
In addition to this, I want to say that the real signal embedded in the EPI data is pretty low. For a block design, maybe you could explain up to 50% of the data. But for an event-related design, we can probably only explain about 20% or 30%-ish of the data. The rest is called noise, simply because of our ignorance; we don't know much about it yet, so we just bundle it together and call it noise.
Now that comes to the concept of how we create a regressor corresponding to a specific task. Recall the concept of the BOLD signal. The BOLD signal is not the neuronal response itself; it's an indirect measure of the neuronal event. That's why the BOLD response is a sluggish response curve: it's largely an indirect measurement, of the oxygen content in the blood. So it's much slower, more sluggish.
So that curve. But what's so special about that curve? Let's start with the simplest case, a so-called event-related design, or an instantaneous event. It lasts a pretty short period of time. How short? It doesn't matter much. So here we have this red arrow in the left corner: suppose we show something on the screen for a short period of time. We expect the BOLD response, the signal, to follow this sluggish curve. In AFNI we call it GAM, that curve.
That curve, the mathematical formula is like this. Is that some magic expression? There is this t to the power of p, multiplied by an exponential part. There are two parameters, p and q. Is this something like in physics, where we have something like the gravitational constant? No, it's not like that. The p and the q do have specific values; in this particular case, p is 8.6 and q is 0.547. But those numbers don't mean much to us. They don't describe some intrinsic property of the blood flow. Not at all. It's just an empirical curve.
So roughly, we can do a reasonable job of fitting with this curve. Here t is the time; the power and the exponential don't mean much by themselves. We don't really expect them to relate to something intrinsic about the BOLD response. Not at all. It's just an empirical curve, and we use it as a mathematical convenience.
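For reference, a curve of that form, t to the power p times an exponential, with the p = 8.6 and q = 0.547 values from the slide and rescaled so the peak is 1, can be sketched like this (my own normalization for illustration; AFNI's actual GAM implementation may differ in details):

    import numpy as np

    def gamma_response(t, p=8.6, q=0.547):
        """Empirical gamma-variate curve proportional to t**p * exp(-t/q),
        rescaled so that its peak (at t = p*q, about 4.7 s) equals 1."""
        t = np.asarray(t, dtype=float)
        return np.where(t > 0, (t / (p * q)) ** p * np.exp(p - t / q), 0.0)

    t = np.arange(0, 15, 0.1)        # the response has mostly died away by ~14 s
    h = gamma_response(t)
    print(t[h.argmax()], h.max())    # peak near 4.7 s, with height close to 1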
So with this curve, we can convolve it with the stimulus timing. That's the mathematical concept of convolution. It's pretty much like multiplication; that's why the symbol is a multiplication sign with a circle around it. There are multiple so-called basis functions you can use to try to characterize the hemodynamic response. Besides this one, we have also implemented the SPM one; that's called the canonical curve.
The main difference is that there is an undershoot during the recovery part. That's one thing. Another thing is that the duration lasts much longer because of the undershoot. The main part is roughly the same, about 14 seconds, but after 14 seconds the canonical curve in SPM has this undershoot, so the whole duration extends to roughly 20 to 25 seconds. You can choose whichever you want; both options are available in AFNI.
That's the basic case: suppose you have an instantaneous event. How about a trial that lasts a few seconds? That you can conceptualize as multiple events, basically one per TR; you convolve them, or you add them up. That's why there are the five colored curves. You add them up, and you end up with this black curve. It's basically the summation of those five colored ones.
So this is it: you have a 10-second duration. There is a subtle difference here. Even for a short event, basically a duration of one TR or less, you can use that curve directly. But you could also use BLOCK; there's an option called BLOCK, and you use BLOCK to specify the duration.
So even if it's a short duration, like one second, you can still use this BLOCK option, with the duration specified in parentheses. So even though it's called a block, the concept of a block is pretty arbitrary. I think probably later on, Rick can show you how to specify different basis functions.
All right. So here is a simple case: you have a block design, 20 seconds on, 10 seconds off, so you end up with this regular pattern of multiple trials. That's a block design. So you use this option, BLOCK: 20 seconds, that's the duration, and the second parameter is one. In AFNI, regardless of the duration, you scale the regressor to a maximum magnitude of one. The reason we do that is that the beta is then a multiplier; you can interpret the beta as a percent signal change.
Basically, we use each regressor as a ruler, a standard unit. That's why we pretty much always scale the regressors to a maximum magnitude of one. So that's the reason, and that's the block design. Then you have the event-related design, where you have multiple events that are separated, I mean randomized.
So here we have four different events. It's the same task, but you repeat it four times. When you create a regressor, you don't create four regressors. Maybe occasionally you do, but usually we create one regressor for all four events. We don't care about each individual trial; we care about the average, or the general pattern, of the whole task. So that's why we convolve them. That means we add up all the individual trials, the four trials, and we end up with this black curve. It's basically the summation of those four individual copies. So that's the event-related design.
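A sketch of how such a regressor can be built, by convolving a response curve with the stimulus onsets and then rescaling to a maximum of one (toy TR and onset times; the gamma-variate function below is the same illustrative one sketched earlier, standing in for whichever basis function you actually choose):

    import numpy as np

    def gamma_response(t, p=8.6, q=0.547):
        """Same illustrative gamma-variate curve as before (peak of 1 at t = p*q)."""
        t = np.asarray(t, dtype=float)
        return np.where(t > 0, (t / (p * q)) ** p * np.exp(p - t / q), 0.0)

    TR = 2.0
    n_vols = 120
    onsets = [10.0, 44.0, 90.0, 166.0]        # four trials of the same task (made-up times)

    # Spike train: 1 at the volume nearest each onset, 0 elsewhere
    stim = np.zeros(n_vols)
    for onset in onsets:
        stim[int(round(onset / TR))] = 1.0

    # Sample the response curve on the TR grid, convolve, and rescale to a peak of one
    hrf = gamma_response(np.arange(0.0, 16.0, TR))
    regressor = np.convolve(stim, hrf)[:n_vols]
    regressor /= regressor.max()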
So in the end, we'll have this in the model. That takes care of the regressors of interest. Now let's go back and talk about the baseline and slow drift, plus the other regressors of no interest. The baseline is nothing magic. It's just the intercept we always put in the model, because the baseline is something we start from.
Slow drift. There are a few possibilities for why you have a slow drift. One possibility is that the subject may have physiological effects. The two major ones are breathing and heartbeat. Especially the first time we get in the scanner, we may be a little bit nervous: the heartbeat is faster and the breathing is also faster. But after a while, you calm down. So that's one source of slow drift; it's not a fast fluctuation but a long-range effect. So that's that part.
Then sometimes there is slow movement: not jerky movements, but slowly drifting head motion. Also, the scanner itself may have thermal drift effects. So that's the slow drift. In AFNI, we handle it probably a little differently from the other software packages: we model the slow drift with polynomials, while I believe the other software packages use high-pass filtering. With high-pass filtering, you basically remove the low frequencies, or, roughly speaking, put regressors for those low frequencies into the model. I believe the default cutoff for them is 128 seconds; that's the cutoff for the slow drift.
I don't think there are any substantial differences between the two approaches. But anyway, in AFNI we have the baseline, and the baseline is part of the polynomials. You have the zero-order polynomial, the intercept, plus the linear drift. Depending on how long each run is, the duration of each run, you could use just the linear drift, or a quadratic, or occasionally a cubic. That depends on the duration. The general rule is that for every 150 seconds you add an extra order to the polynomials. So that's the slow drift. Then the head motion; that's pretty much standard nowadays.
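A rough sketch of that rule of thumb (my own reading of it, with Legendre polynomials as the drift basis; AFNI's exact defaults may differ):

    import numpy as np
    from numpy.polynomial import legendre

    def drift_columns(n_vols, TR, seconds_per_order=150.0):
        """Baseline plus slow-drift regressors for one run: start with the intercept
        and add one polynomial order for roughly every 150 s of run duration."""
        duration = n_vols * TR
        order = 1 + int(duration // seconds_per_order)
        x = np.linspace(-1.0, 1.0, n_vols)            # Legendre polynomials are defined on [-1, 1]
        return legendre.legvander(x, order)           # shape (n_vols, order + 1)

    print(drift_columns(n_vols=150, TR=2.0).shape)    # a 300 s run -> order 3 -> 4 columns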
Also, when you have multiple runs, each run of course has its own unique drift effect. That's why, for each run, if you use a quadratic, you would have three parameters for the baseline and slow drift. If you have two runs, you would end up with six parameters, six regressors, for the baseline and the slow drift.
Then there is head motion. This is an example of the head motion effect. On the left-hand side, you don't take out the head motion; on the right-hand side, we do, putting, for example, six regressors in the model. Sometimes that shows a difference, cleaning up the situation a little bit.
The next two slides show different ways to check your design matrix. Remember, we have this model: y equals x, the design matrix, times the beta values, the effect estimates, plus the residuals. The x is the same for the whole brain; every voxel shares the same x. But y is different: each voxel has its own time series as the response variable. The betas, of course, are different across voxels, and the residuals are also different.
So suppose we have this model, this x. We have the baseline, we have the linear drift, plus a bunch of other regressors. This is an image of the design matrix. I think probably everybody is familiar with this, because the other software packages visualize their design matrix the same way.
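A sketch of how a design-matrix image like this can be put together and rendered for inspection (a toy three-run matrix with per-run baseline and linear drift, two stand-in task columns, and six stand-in motion columns; this is only an illustration, not AFNI's output):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(2)
    n_per_run, n_runs = 100, 3
    T = n_per_run * n_runs

    # Per-run baseline and linear drift: nonzero only within their own run
    baseline_drift = []
    for r in range(n_runs):
        block = np.zeros((T, 2))
        rows = slice(r * n_per_run, (r + 1) * n_per_run)
        block[rows, 0] = 1.0
        block[rows, 1] = np.linspace(0.0, 1.0, n_per_run)
        baseline_drift.append(block)

    tasks = (rng.random((T, 2)) < 0.05).astype(float)    # stand-in for two task regressors
    motion = 0.1 * rng.standard_normal((T, 6))           # stand-in for six motion parameters
    X = np.hstack(baseline_drift + [tasks, motion])

    plt.imshow(X, aspect="auto", cmap="gray_r", interpolation="nearest")
    plt.xlabel("regressor")
    plt.ylabel("time (TR index)")
    plt.show()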
If you've worked with those, you're familiar with this; you know the pattern. In this case, we have three runs. Each run has two baseline regressors: one is the intercept, I mean the baseline, and one is the linear drift. That's why the first six columns are for the baseline and slow drift. You can also see the three segments, the big pattern. The baseline, of course, is uniformly black, and the linear drift goes gradually from white to dark. Those first two columns are for the first run; that's why in the second and third runs you don't have anything there.
Same thing when we switch to the second run and the third run. So that's how the baseline and slow drift look. The next two columns are the two tasks in this case. We see some bands, not regular bands, but some bands; that comes from the randomized event-related design. The last six columns are the head motion parameters: three translations plus three rotations. Any questions? Yes.
AUDIENCE: I don't understand why the head motion is laid out in these random bands. It's just because you're going to get that from your image? Where the motion came from?
PRESENTER: Yeah, it comes from the images. That's just how they are, what they look like. Yes.
AUDIENCE: OK.
PRESENTER: Personally, I prefer this over the image, but others may have a different perspective; it's a personal preference. To me, this shows the pattern even better. In AFNI, after you perform what we call volume registration, which the other software packages call head motion correction, but anyway, it's the same thing, once you go through that step, you end up with six parameters, six regressors. So you can-- oh, actually, this is a larger study.
In AFNI, when you specify your model, the program will generate the design matrix. You can use a program called 1dplot. It's not 3d-something because we're dealing with one-dimensional data; that's why it's called 1dplot. You can use that program to plot this out. Actually, Rick's program, afni_proc.py, I think generates this automatically: when you examine your data, you automatically get a plot of a panel like this.
The way you look at it is, it goes from the bottom to the top. At the bottom, you start with all your baselines plus the drift; in this case it's a linear drift. So the bottom one is this black horizontal line: that's the baseline for the first run, plus the linear drift for the first run. Then you switch to the second run and the third run. So the bottom six curves are the baseline plus the drift for the three runs.
AUDIENCE: And you can pull out that baseline and get it from data, you'd be like a low pass filter and you see what that drift is? Or you do a linear fit to the whole--
PRESENTER: Well, you do a linear fit for each one.
AUDIENCE: For the whole set?
PRESENTER: Yeah, in this case it's probably 200 seconds; I don't remember exactly. But yeah, this is just empirical data. Ideally, I mean, largely we're talking about the slow drift here, but even for the tasks or the head motion parameters, as a statistician I'd say the process should not be a one-shot thing. Unfortunately, in FMRI, people often just fit the model they want and then talk about the result. As a statistician, usually it's not just a one-time deal. You need to go through multiple steps, multiple rounds of model building and model checking. But unfortunately, it's more like an assembly line for typical users. We can come back to this issue later on when we're at the group level; we'll talk about model checking then.
So ideally, even for the slow drift, you could check those betas, see whether you find something in the brain, and fit another model with higher-order polynomials and compare the models. You could do something like that. Ideally, yes.
AUDIENCE: So AFNI has this 3dDetrend preprocessing program, right?
PRESENTER: Right.
AUDIENCE: So would you recommend we use that to remove the drift as a preprocessing step?
PRESENTER: No, I don't recommend doing it as a preprocessing step. The reasoning is that you usually want to put everything in one model; that's the ideal situation. The preprocessing steps exist because we don't have a better way to handle those things. We clean the data to some extent, but we want to avoid that as much as possible. For example, during preprocessing we do slice timing correction: basically, we artificially align the multiple slices. When we acquire the data from the scanner, each slice is acquired separately; we don't take a snapshot of the whole brain. That's not the way the scanner works. One slice is taken, then a second slice. There are different orders, but anyway.
So we artificially align them, putting them at the same time point within the TR. That sets things up more easily for us to build the model. So that's a preprocessing step. Smoothing, same thing. But for the slow drift, we don't want to remove the trend first, because those trends may correlate with some of the long-range components of the regressors of interest. They may correlate to some extent. That's one reason.
Another reason involves degrees of freedom; there is that issue too. So ideally, you don't want to do that. Sometimes we do have to do it, for certain kinds of data where we don't have a better option. Any other questions before I move on? So the next two rows, the red one and the black one, are the regressors of interest. You immediately see the pattern here. This is probably the better way to visualize it, to check whether you've made any mistakes, mistakes in the stimulus onset times. That would mess up your model, of course.
The last six rows on the top are the head motion parameters. You were asking about that before. Here you can see the pattern better: whether you have a big dip or a big jump. In this case, of course, the jumps are because of the transitions between the different runs.
Also, I want to say here that AFNI is a little bit different from the other software packages. When we have multiple runs, we can concatenate them and analyze them together. For example, we only have two regressors of interest here even though we have three runs. That's something unique about AFNI. I'll come back to this point later on; it's something we need to pay attention to.
With the other software packages, either you have to analyze each run separately, and I believe that's the situation with FSL, or, in SPM, you can concatenate them, but you have to model each run separately: in the model, you would have six regressors of interest instead of two. So that's the difference between AFNI and SPM. Any questions about this?
All right. So this is a little bit of quality checking. When you run the analysis, and we'll talk about this in more detail, you're going to see on the terminal, on the screen, something about the progress of the model-solving process. You may see some warnings, something to be a little concerned about, or an error message. That's something you need to pay attention to.
Model building, model checking: there are many ways to do it. We're going to talk about some of them later on, when we specify the different options and check the model quality. So far, I've only talked about one modeling approach, where we assume the response curve is some, whatever you call it, standard curve or canonical curve. The shape is fixed: the curve goes up slowly, reaches the peak, then recovers, even more slowly than the upstroke.
That, I'd say, is the critical thing about that approach: we assume that curve fits reasonably well, and that it's the same for different subjects, males or females, patients or controls. It's the same curve shape. Different brain regions, it's the same curve. Different tasks, it's the same curve. Different trials, it's the same curve. So that assumption is very strong. Is there any way we can relax it? Yes, there is.
So that's the approach we're going to talk about later on. I think let's take a break, a 10-minute break. Then we'll come back to talk about the alternative approaches: how to handle the hemodynamic response a little more flexibly, not just to get a better model, but also to capture some subtleties about the hemodynamic response.