source:http://mindhive.mit.edu/node/112
1. What is smoothing?
"Smoothing" is generally used to describe spatial smoothing in neuroimaging, and that's a nice euphamism for "blurring." s Patial smoothing consists of applying a small blurring kernel across your image, to average part of the intensities from N Eighboring voxels together. The effect is to blur the image somewhat and make it smoother- softening the hard edges, lowering the overall spatial Frequency, and hopefully improving your signal-to-noise ratio.
2. What is the point of smoothing?
Improving your signal-to-noise ratio. That's it, in a nutshell. This happens on a couple of levels, both the single-subject and the group.
At the single-subject level: fMRI data has a lot of noise in it, but studies have shown that most of the spatial noise is (mostly) Gaussian - it's essentially random, essentially independent from voxel to voxel, and roughly centered around zero. If that's true, then if we average our intensity across several voxels, our noise will tend to average out to zero, whereas our signal (which is some non-zero number) will tend to average to something non-zero, and presto! We've decreased our noise without decreasing our signal, and our SNR is better. (Desmond & Glover (designpapers) demonstrate this effect with real data.)
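That averaging argument is easy to check numerically. A toy demonstration (simulated numbers, not real fMRI data): a constant signal plus independent zero-mean Gaussian noise, compared before and after averaging a 3x3x3 neighborhood of 27 voxels.

```python
# Averaging neighboring voxels shrinks zero-mean Gaussian noise
# while leaving a constant signal untouched.
import numpy as np

rng = np.random.default_rng(0)
signal = 2.0                                    # true activation level
noise = rng.normal(0.0, 1.0, size=(1000, 27))   # 27 voxels per neighborhood
raw = signal + noise                            # 1000 simulated measurements

snr_single = signal / raw[:, 0].std()           # one voxel: SNR ~ signal / 1
snr_avg = signal / raw.mean(axis=1).std()       # averaged: noise sd ~ 1/sqrt(27)
# snr_avg should come out roughly sqrt(27) ~ 5x larger than snr_single
```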
Matthew Brett has a nice discussion and several illustrations of this on the Cambridge imagers page: http://www.mrc-cbu.cam.ac.uk/imaging/smoothing.html
At the group level: Anatomy is highly variable between individuals, and so is the exact functional placement within that anatomy. Even with normalized data, there will still be a good chunk of variability between subjects as to where a given functional cluster might be. Smoothing will blur those clusters and thus maximize the overlap between subjects for a given cluster, which increases our odds of detecting that functional cluster at the group level and increases our sensitivity.
Finally, a slight technical note for SPM: Gaussian field theory, by which SPM does p-corrections, is based on how smooth your data are - the more spatial correlation in the data, the better your corrected p-values will look, because there are fewer degrees of freedom in the data. So in SPM, smoothing will give you a direct bump in p-values - but this is not a "real" increase in sensitivity as such.
3. When should you smooth? When shouldn't you?
Smoothing is a good idea if:
- you're not particularly concerned with voxel-by-voxel resolution.
- you're not particularly concerned with finding small (less than a handful of voxels) clusters.
- you want (or need) to improve your signal-to-noise ratio.
- you're averaging results over a group, in a brain region where functional anatomy and organization isn't precisely known.
- you're using SPM, and you want to use p-values corrected with Gaussian field theory (as opposed to FDR).
Smoothing is not a good idea if:
- you need voxel-by-voxel resolution.
- you believe your activations of interest will only be a few voxels large.
- you're confident your task will generate large amounts of signal relative to noise.
- you're working primarily with single-subject results.
- you're mainly interested in getting region-of-interest data from very specific structures that you've drawn with high resolution on single subjects.
4. At what point in your analysis stream should you smooth?
The first point at which it's obvious to smooth is as the last spatial preprocessing step for your raw images; smoothing before then will only reduce the accuracy of the earlier preprocessing (normalization, realignment, etc.) - those programs that need smooth images do their own smoothing in memory as part of the calculation, and don't save the smoothed versions. One could also avoid smoothing the raw images entirely and instead smooth the beta and/or contrast images. In terms of efficiency, there's not much difference - smoothing even hundreds of raw images is a very fast process. So the question is one of performance - which is better for your sensitivity?
Skudlarski et al. (smoothingpapers) evaluated this for single-subject data and found almost no difference between the methods. They did find that multifiltering (see below) had greater benefits when the smoothing was done on the raw images, as opposed to the statistical maps. Certainly if you want to use p-values corrected with Gaussian field theory (a la SPM), you need to smooth before estimating your results. It's a bit of a toss-up, though...
5. How do you determine the size of your kernel? Based on your resolution? Or structure size?
A little of both, it seems. The matched filter theorem, from the signal processing field, tells us that if we're trying to recover a signal (like an activation) in noisy data (like fMRI), we can best do it by smoothing with a kernel that's about the same size as our activation.
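The matched filter theorem can be checked numerically. The sketch below (1D, unit white noise, all sizes invented for illustration) computes the peak SNR after smoothing a Gaussian-shaped "activation" of width s with kernels of varying widths; the best kernel width should come out near s itself.

```python
# Matched filter sanity check: SNR of a smoothed peak vs. kernel width.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.arange(-200, 201, dtype=float)
s = 4.0
signal = np.exp(-x**2 / (2 * s**2))      # Gaussian "activation" of width s

def snr_after_smoothing(sigma):
    peak = gaussian_filter1d(signal, sigma)[200]   # smoothed signal at center
    impulse = np.zeros_like(x)
    impulse[200] = 1.0
    kernel = gaussian_filter1d(impulse, sigma)     # effective smoothing kernel
    noise_sd = np.sqrt((kernel ** 2).sum())        # unit white noise after smoothing
    return peak / noise_sd

sigmas = np.arange(1.0, 9.1, 0.5)
best = sigmas[np.argmax([snr_after_smoothing(sg) for sg in sigmas])]
# best should land near s = 4, the width of the signal itself
```

(Analytically, for a Gaussian signal of width s and kernel of width sigma, SNR is proportional to sqrt(sigma) / sqrt(s^2 + sigma^2), which is maximized exactly at sigma = s.)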
Trouble is, though, most of us don't know how big our activations are going to be before we run our experiment. Even if you have a particular structure of interest (say, the hippocampus), you might not get activation over the whole region - only a part.
Given that ambiguity, Skudlarski et al. introduce a method called multifiltering, in which you calculate results once from smoothed images, and then a second set of results from unsmoothed images. Finally, you average together the beta/con images from both sets of results to create a final set of results. The idea is that the smoothed set of results preferentially highlights larger activations, while the unsmoothed set of results preserves small activations, and the final set has some of the advantages of both. Their evaluations showed that multifiltering didn't detect larger activations (clusters with radii of 3-4 voxels or greater) as well as purely smoothed results (as one might predict), but that over several cluster sizes, multifiltering outperformed traditional smoothing techniques. Its usefulness for your experiment depends on how important you consider detecting activations of small size (less than 3-voxel radius, or thereabouts).
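A minimal sketch of the multifiltering idea (the map, activation sizes, and weights here are made up for illustration - in a real analysis you would average the beta/con images from two separately estimated models, as described above):

```python
# Toy multifiltering: average an unsmoothed map with a smoothed copy,
# so both large and very small activations survive.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[10:30, 10:30] = 1.0           # large, moderate activation
truth[50, 50] = 5.0                 # tiny (single-voxel), strong activation
data = truth + rng.normal(0.0, 1.0, truth.shape)

smoothed_map = gaussian_filter(data, sigma=2.0)   # favors the large blob
multifiltered = 0.5 * (data + smoothed_map)       # keeps some of both
# The single-voxel activation is nearly erased in the purely smoothed map,
# but survives at roughly half strength in the multifiltered one.
```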
Overall, Skudlarski et al. found that over several cluster sizes, a kernel size of 1-2 voxels (3-6mm, in their case) was most sensitive in general.
A good rule of thumb is to avoid using a kernel that's significantly larger than any structure you have a particular a priori interest in, and to carefully consider what your comfort level is with smaller activations. A 2-voxel-radius cluster is around 30 voxels and change (and multifiltering would be more sensitive to that size); a 3-voxel-radius cluster is 110 voxels or so (if I'm doing my math right). 6mm is a good place to start. If you're particularly interested in smaller activations, 2-4mm might be better. If you know you won't care about small activations and will only look at large clusters, 8-10mm is a good range.
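Those voxel counts come from treating a cluster as a rough sphere, so its volume is (4/3) pi r^3 voxels:

```python
# Back-of-the-envelope cluster sizes: a roughly spherical cluster of
# radius r voxels contains about (4/3) * pi * r**3 voxels.
from math import pi

def cluster_voxels(radius_voxels):
    return (4.0 / 3.0) * pi * radius_voxels ** 3

print(round(cluster_voxels(2)))   # ~34 voxels for a 2-voxel radius
print(round(cluster_voxels(3)))   # ~113 voxels for a 3-voxel radius
```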
6. Should you use a different kernel for different parts of the brain?
It's an interesting question. Hopfinger et al. find that a 6mm kernel works best for the data they examine in the cortex, but a larger kernel (10mm) works best in subcortical regions. This might seem counterintuitive, considering the subcortical structures they examine are generally smaller than large cortical activations - but they unfortunately don't include information about the size of their activation clusters, so the results are difficult to interpret. You might think a smaller kernel in subcortical regions would be better, due to the smaller size of the structures.
Trouble is, figuring out exactly which parts of the brain to use a different size of kernel on presupposes a lot of information - about activation size, about the shape of the HRF in one region vs. another - that pretty much doesn't exist for most experimental set-ups or subjects. I would tend to suggest that varying the size of the kernel for different regions is probably more trouble than it's worth at this point, but that may change as more studies come out about HRFs in different regions and individualized effects of smoothing. See Kiebel and Friston (smoothingpapers), though, for some advanced work on changing the shape of the kernel in different regions...
7. What does smoothing actually do to your activation data?
About "D Expect-preferentially brings out larger activations. Check out the White et. Al (smoothingpapers) for some detailed illustrations. We hope to has some empirical results and maybe some pictures up here in the next few weeks ...
8. What does it do to ROI data?
Great question, and not one I've got a good answer for at the moment. One big part of the answer will depend on the ratio of your smoothing kernel size to your ROI size. Presumably, assuming your kernel is smaller than your ROI, it could help improve SNR within your ROI, but if the kernel and ROI are similar sizes, smoothing may also blur the signal such that your structure contains less activation. With any luck, we can do a little empirical testing on this question and have some results up here in the future...
Smoothing in fMRI analysis (FAQ)