FIX: AFNI TSNR calculation, ADD: BlurToFWHM, ADD: testing outputs against reference #280
Conversation
Hello @Shotgunosine, thank you for updating! Cheers! There are no style issues detected in this Pull Request. 🍻 To test for issues locally, …
Comment last updated at 2021-02-12 14:40:28 UTC
You can run pytest with an "append" mode; that might help. See: https://github.com/PsychoinformaticsLab/pliers/blob/master/.github/workflows/python-package.yml
I don't think that should be it; each of the three tests (afni_smooth, afni_blurto, and nistats_smooth) is run in a separate job on CircleCI, so they are each uploaded to Codecov separately. As far as I can tell from the individual XML reports, they indicate that the lines have been run.
Hmm, I'm not sure then, sorry.
Overall LGTM.
From a high level, the only quibble I have is that the three test conditions differ in both estimator and smoothing, and the latter seems less important to fitlins overall. Ideally we would test more permutations of fitlins parameters (a parametrized sketch of what that could look like is below). That said, that's more of a reach goal, and this is already a nice improvement on what we have.
I'm sure @effigies will have more to say than me though.
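For concreteness, a parametrized sketch of what more permutations might look like; the option values here are purely illustrative, not actual fitlins option strings:

```python
import pytest

# Hypothetical test, not part of this PR: sweep estimator x smoothing combos.
@pytest.mark.parametrize("estimator", ["afni", "nistats"])
@pytest.mark.parametrize("smoothing", [None, "5:run:iso"])
def test_estimator_smoothing_permutations(estimator, smoothing):
    ...  # run fitlins with this combination and compare against reference outputs
```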
@effigies, I've finally got coverage working. Turns out the issue was pointing at tests in one directory (…).
Cool. It might be that we should have used … I will try to review this today, but it's somewhat meeting-heavy.
```python
vol_labels = parse_afni_ext(rbetas)["BRICK_LABS"].split("~")
mat = pd.read_csv(self.inputs.design_matrix, delimiter="\t", index_col=0)
# find the name of the constant column
const_name = mat.columns[(mat != 1).sum(0) == 0].values[0]
```
Does this choke if there are non-steady-state volumes? Generally the constant should be zeroed out like everything else if you're censoring a volume.
In addition to verifying that zeroing out everything else (or not) doesn't impact the estimates of the other coefficients, I also checked whether we should ever expect a design matrix that doesn't have an all-1s constant column. We're getting the design matrix from nilearn's make_first_level_design_matrix function, which adds an all-1s constant column as part of the drift-model handling, even if the drift model is None:
https://github.com/nilearn/nilearn/blob/d2ad41a9fa0aa7041c7e826518c6c3692a92a2e1/nilearn/glm/first_level/design_matrix.py#L122-L173
Based on that, I think this logic is fine.
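For reference, a minimal check of that behavior (the frame times are made up for illustration):

```python
import numpy as np
from nilearn.glm.first_level import make_first_level_design_matrix

frame_times = np.arange(50) * 2.0  # 50 volumes at TR = 2 s (invented)
mat = make_first_level_design_matrix(frame_times, drift_model=None)

# same detection logic as in the diff above: the column that is identically 1
const_name = mat.columns[(mat != 1).sum(0) == 0].values[0]
print(const_name)  # "constant"
```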
```python
ref_nans = np.isnan(ref_data)
out_nans = np.isnan(out_data)

if (ref_nans == out_nans).all():
    ref_nonan = ref_data[~ref_nans]
    out_nonan = out_data[~out_nans]

    diff = np.abs(ref_nonan - out_nonan)
    rel_diff = diff / ((np.abs(ref_nonan) + np.abs(out_nonan)) / 2)
    if max_abs and max_rel:
        # only test relative differences where the absolute tolerance is exceeded
        over = diff > max_abs
        diff = diff[over]
        rel_diff = rel_diff[over]
        if (rel_diff > max_rel).any():
            res.append(f"Relative difference (max of {rel_diff.max()})"
                       f" greater than {max_rel} for {ref_nii} and {out_nii}.")
    elif max_rel:
        if (rel_diff > max_rel).any():
            res.append(f"Relative difference (max of {rel_diff.max()})"
                       f" greater than {max_rel} for {ref_nii} and {out_nii}.")
    else:
        if (diff > max_abs).any():
            res.append(f"Absolute difference (max of {diff.max()})"
                       f" greater than {max_abs} for {ref_nii} and {out_nii}.")
else:
    res.append(f"{ref_nii} nans don't match {out_nii} nans.")
```
Is this just
```python
np.allclose(ref_data, out_data, rtol=max_rel, atol=max_abs, equal_nan=True)
```
Ref: https://numpy.org/doc/stable/reference/generated/numpy.allclose.html
I think it's different from numpy's allclose. I was trying to mimic the functionality of nibabel diff, but with added tolerance for NaNs. As far as I can tell, nibabel diff handles the relative and absolute tolerance specifications differently from numpy's allclose. allclose uses:
```python
absolute(a - b) <= (atol + rtol * absolute(b))
```
but nibabel diff uses the absolute tolerance as a mask and only tests the relative differences on elements where the absolute difference exceeds the absolute tolerance.
I'm not sure if those work out to be the same thing or not though.
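A toy check (hand-picked numbers with deliberately loose tolerances) suggests they're not equivalent:

```python
import numpy as np

a, b = np.array([1.0]), np.array([3.0])
max_abs, max_rel = 0.5, 0.9

# numpy criterion: |a - b| <= atol + rtol * |b|  ->  2.0 <= 0.5 + 2.7  ->  close
print(np.allclose(a, b, rtol=max_rel, atol=max_abs))  # True

# mask-then-relative criterion from the code above
diff = np.abs(a - b)                             # 2.0 > max_abs, so it is tested
rel_diff = diff / ((np.abs(a) + np.abs(b)) / 2)  # 2.0 / 2.0 = 1.0
print((diff > max_abs) & (rel_diff > max_rel))   # [ True] -> flagged as different
```

Note also that allclose is asymmetric in a and b (it scales the tolerance by |b| only), while the averaged relative difference above is symmetric.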
On the whole this looks good. Left a couple of small comments.
Sorry about the size of this one; the number of changes kind of got away from me.
The first big part is that I've fixed the TSNR calculation in the AFNI estimator.
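For context, tSNR is conventionally the voxelwise temporal mean over the temporal standard deviation. A rough sketch of that textbook definition follows; it is not necessarily the exact AFNI-side computation this fix implements:

```python
import numpy as np

def tsnr(data, axis=-1):
    """Voxelwise temporal SNR: mean over std along the time axis."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return data.mean(axis=axis) / data.std(axis=axis)
```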
The second part is that I've added a new option to the AFNI estimator to use AFNI's 3dBlurToFWHM.
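For anyone unfamiliar, 3dBlurToFWHM iteratively smooths a dataset until it reaches a target smoothness, rather than adding a fixed kernel on top of whatever smoothness is already there. A minimal sketch of driving it through nipype (the file path is illustrative; fitlins wires this up internally):

```python
from nipype.interfaces import afni

blur = afni.BlurToFWHM()
blur.inputs.in_file = "sub-01_task-rest_bold.nii.gz"  # illustrative path
blur.inputs.fwhm = 5.0  # target smoothness in mm, not an added kernel
# res = blur.run()
```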
The third big part is that I've created a set of reference outputs for three different run settings (https://gin.g-node.org/shotgunosine/fitlins_tests). I've also added estimator and smoothing information to the dataset description, but I can pull that out once @adelavega's PR is merged to master.