[REF] Replace duecredit with BibTeX #875

Merged: 11 commits, Aug 12, 2022
Changes from all commits
1 change: 0 additions & 1 deletion .codecov.yml
@@ -16,5 +16,4 @@ coverage:

ignore:
- "tedana/tests/"
- "tedana/due.py"
- "tedana/_version.py"
24 changes: 24 additions & 0 deletions docs/api.rst
@@ -198,6 +198,30 @@ API
tedana.stats.getfbounds


.. _api_bibtex_ref:

*********************************************************
:mod:`tedana.bibtex`: Tools for working with BibTeX files
*********************************************************

.. automodule:: tedana.bibtex
:no-members:
:no-inherited-members:

.. currentmodule:: tedana.bibtex

.. autosummary::
:toctree: generated/
:template: function.rst

tedana.bibtex.find_braces
tedana.bibtex.reduce_idx
tedana.bibtex.index_bibtex_identifiers
tedana.bibtex.find_citations
tedana.bibtex.reduce_references
tedana.bibtex.get_description_references

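The new ``tedana.bibtex`` helpers listed above replace duecredit's runtime citation tracking with plain-text citation extraction. As a rough illustration of what a citation-finding helper like ``find_citations`` might do (this regex-based sketch is an assumption for illustration; the real function's signature and behavior live in ``tedana/bibtex.py`` and may differ):

```python
import re

# Hypothetical sketch: pull citation keys out of \cite / \citep / \citet
# commands in a boilerplate string. Not the actual tedana implementation.
CITE_PATTERN = re.compile(r"\\cite[pt]?\{([^}]+)\}")

def find_citation_keys(text):
    """Return the unique citation keys referenced in `text`, in order."""
    keys = []
    for match in CITE_PATTERN.finditer(text):
        for key in match.group(1).split(","):
            key = key.strip()
            if key not in keys:
                keys.append(key)
    return keys

report = (
    "Multi-echo data were optimally combined \\citep{posse1999enhancement}. "
    "This workflow used pandas \\citep{mckinney2010data,reback2020pandas}."
)
print(find_citation_keys(report))
# ['posse1999enhancement', 'mckinney2010data', 'reback2020pandas']
```

Keys recovered this way can then be matched against the entries in ``references.bib`` to emit only the references a given run actually cites, which is the job of ``reduce_references`` and ``get_description_references``.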

.. _api_utils_ref:

**************************************
12 changes: 10 additions & 2 deletions docs/conf.py
@@ -20,6 +20,8 @@
import os
import sys

import sphinx_rtd_theme

sys.path.insert(0, os.path.abspath("sphinxext"))
sys.path.insert(0, os.path.abspath(os.path.pardir))

@@ -50,6 +52,7 @@
"sphinx.ext.napoleon",
"sphinx.ext.todo",
"sphinxarg.ext",
"sphinxcontrib.bibtex", # for foot-citations
]

import sphinx
@@ -127,8 +130,6 @@
# a list of builtin themes.
#
# installing theme package
import sphinx_rtd_theme

html_theme = "sphinx_rtd_theme"

# Theme options are theme-specific and customize the look and feel of a theme
@@ -153,6 +154,13 @@ def setup(app):

html_favicon = "_static/tedana_favicon.png"

# -----------------------------------------------------------------------------
# sphinxcontrib-bibtex
# -----------------------------------------------------------------------------
bibtex_bibfiles = ["../tedana/resources/references.bib"]
bibtex_style = "unsrt"
bibtex_reference_style = "author_year"
bibtex_footbibliography_header = ""
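The ``bibtex_*`` settings above wire ``sphinxcontrib-bibtex`` into the docs build. A minimal usage sketch (the ``:footcite:`` role and ``footbibliography`` directive come from ``sphinxcontrib-bibtex``; the surrounding sentence is illustrative, and ``dupre2021te`` is an entry key from this PR's ``references.bib``):

```rst
Multi-echo denoising with tedana is described in :footcite:`dupre2021te`.

.. footbibliography::
```

Any entry key present in ``../tedana/resources/references.bib`` can be cited this way; ``.. footbibliography::`` collects the footnoted references at the bottom of the page.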

# -- Options for HTMLHelp output ------------------------------------------

12 changes: 0 additions & 12 deletions docs/faq.rst
@@ -97,18 +97,6 @@ the v3.2 code, with the goal of revisiting it when ``tedana`` is more stable.
Anyone interested in using v3.2 may compile and install an earlier release (<=0.0.4) of ``tedana``.


*************************************************
[tedana] What is the warning about ``duecredit``?
*************************************************

``duecredit`` is a Python package that ``tedana`` uses but does not require.
These warnings do not affect any of the processing within ``tedana``.
To avoid this warning, you can install ``duecredit`` with ``pip install duecredit``.
For more information about ``duecredit`` and concerns about
the citation and visibility of software or methods, visit the `duecredit`_ GitHub repository.

.. _duecredit: https://github.com/duecredit/duecredit

.. _here: https://bitbucket.org/prantikk/me-ica/commits/906bd1f6db7041f88cd0efcac8a74074d673f4f5

.. _NeuroStars: https://neurostars.org
15 changes: 4 additions & 11 deletions docs/index.rst
@@ -112,9 +112,9 @@ When using tedana, please include the following citations:
}
</script>
<p>
<span id="tedana_citation">tedana</span>
This link is for the most recent version of the code and that page has links to DOIs
for older versions. To support reproducibility, please cite the version you used:
<a id="tedana_doi_url" href="https://doi.org/10.5281/zenodo.1250561">https://doi.org/10.5281/zenodo.1250561</a>
<img src onerror='fillCitation()' alt=""/>
</p>
@@ -143,19 +143,12 @@ When using tedana, please include the following citations:
<i>Proceedings of the National Academy of Sciences</i>, <i>110</i>, 16187-16192.
</p>

Alternatively, you can automatically compile relevant citations by running your
tedana code with `duecredit`_. For example, if you plan to run a script using
tedana (in this case, ``tedana_script.py``):

.. code-block:: bash
python -m duecredit tedana_script.py
Alternatively, you can use the text and citations produced by the tedana workflow.

You can also learn more about `why citing software is important`_.

.. _Differentiating BOLD and non-BOLD signals in fMRI time series using multi-echo EPI.: https://doi.org/10.1016/j.neuroimage.2011.12.028
.. _Integrated strategy for improving functional connectivity mapping using multiecho fMRI.: https://doi.org/10.1073/pnas.1301725110
.. _duecredit: https://github.com/duecredit/duecredit
.. _`why citing software is important`: https://www.software.ac.uk/how-cite-software


3 changes: 0 additions & 3 deletions docs/installation.rst
@@ -13,9 +13,6 @@ packages will need to be installed:
- scipy
- mapca

You can also install several optional dependencies, notably ``duecredit``.
Please see the :doc:`FAQ <faq>` for more information on how tedana uses ``duecredit``.

After installing relevant dependencies, you can then install ``tedana`` with:

.. code-block:: bash
143 changes: 123 additions & 20 deletions docs/outputs.rst
@@ -71,6 +71,8 @@ desc-ICAAccepted_components.nii.gz High-kappa ICA coefficient f
desc-ICAAcceptedZ_components.nii.gz Z-normalized spatial component maps
report.txt A summary report for the workflow with relevant
citations.
references.bib The BibTeX entries for references cited in
report.txt.
tedana_report.html The interactive HTML report.
================================================ =====================================================

@@ -400,28 +402,129 @@ The report is saved in a plain-text file, report.txt, in the output directory.

An example report

TE-dependence analysis was performed on input data.
An initial mask was generated from the first echo using nilearn's compute_epi_mask function.
An adaptive mask was then generated, in which each voxel's value reflects the number of echoes with 'good' data.
A monoexponential model was fit to the data at each voxel using nonlinear model fitting in order to estimate T2* and S0 maps, using T2*/S0 estimates from a log-linear fit as initial values.
For each voxel, the value from the adaptive mask was used to determine which echoes would be used to estimate T2* and S0.
In cases of model fit failure, T2*/S0 estimates from the log-linear fit were retained instead.
Multi-echo data were then optimally combined using the T2* combination method (Posse et al., 1999).
Principal component analysis in which the number of components was determined based on a variance explained threshold was applied to the optimally combined data for dimensionality reduction.
A series of TE-dependence metrics were calculated for each component, including Kappa, Rho, and variance explained.
Independent component analysis was then used to decompose the dimensionally reduced dataset.
A series of TE-dependence metrics were calculated for each component, including Kappa, Rho, and variance explained.
Next, component selection was performed to identify BOLD (TE-dependent), non-BOLD (TE-independent), and uncertain (low-variance) components using the Kundu decision tree (v2.5; Kundu et al., 2013).
Rejected components' time series were then orthogonalized with respect to accepted components' time series.
.. note::

This workflow used numpy (Van Der Walt, Colbert, & Varoquaux, 2011), scipy (Jones et al., 2001), pandas (McKinney, 2010), scikit-learn (Pedregosa et al., 2011), nilearn, and nibabel (Brett et al., 2019).
The boilerplate text includes citations in LaTeX format.
\\citep refers to parenthetical citations, while \\cite refers to textual ones.

This workflow also used the Dice similarity index (Dice, 1945; Sørensen, 1948).
TE-dependence analysis was performed on input data using the tedana workflow \\citep{dupre2021te}.
An adaptive mask was then generated, in which each voxel's value reflects the number of echoes with 'good' data.
A two-stage masking procedure was applied, in which a liberal mask (including voxels with good data in at least the first echo) was used for optimal combination, T2*/S0 estimation, and denoising, while a more conservative mask (restricted to voxels with good data in at least the first three echoes) was used for the component classification procedure.
Multi-echo data were then optimally combined using the T2* combination method \\citep{posse1999enhancement}.
Next, components were manually classified as BOLD (TE-dependent), non-BOLD (TE-independent), or uncertain (low-variance).
This workflow used numpy \\citep{van2011numpy}, scipy \\citep{virtanen2020scipy}, pandas \\citep{mckinney2010data,reback2020pandas}, scikit-learn \\citep{pedregosa2011scikit}, nilearn, bokeh \\citep{bokehmanual}, matplotlib \\citep{Hunter:2007}, and nibabel \\citep{brett_matthew_2019_3233118}.
This workflow also used the Dice similarity index \\citep{dice1945measures,sorensen1948method}.

References

Brett, M., Markiewicz, C. J., Hanke, M., Côté, M.-A., Cipollini, B., McCarthy, P., … freec84. (2019, May 28). nipy/nibabel. Zenodo. http://doi.org/10.5281/zenodo.3233118

Dice, L. R. (1945). Measures of the amount of ecologic association between species. Ecology, 26(3), 297-302.

Jones E, Oliphant T, Peterson P, et al. SciPy: Open Source Scientific Tools for Python, 2001-, http://www.scipy.org/

Kundu, P., Brenowitz, N. D., Voon, V., Worbe, Y., Vértes, P. E., Inati, S. J., ... & Bullmore, E. T. (2013). Integrated strategy for improving functional connectivity mapping using multiecho fMRI. Proceedings of the National Academy of Sciences, 110(40), 16187-16192.

McKinney, W. (2010, June). Data structures for statistical computing in python. In Proceedings of the 9th Python in Science Conference (Vol. 445, pp. 51-56).

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., ... & Vanderplas, J. (2011). Scikit-learn: Machine learning in Python. Journal of machine learning research, 12(Oct), 2825-2830.

Posse, S., Wiese, S., Gembris, D., Mathiak, K., Kessler, C., Grosse‐Ruyken, M. L., ... & Kiselev, V. G. (1999). Enhancement of BOLD‐contrast sensitivity by single‐shot multi‐echo functional MR imaging. Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, 42(1), 87-97.

Sørensen, T. J. (1948). A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on Danish commons. I kommission hos E. Munksgaard.

Van Der Walt, S., Colbert, S. C., & Varoquaux, G. (2011). The NumPy array: a structure for efficient numerical computation. Computing in Science & Engineering, 13(2), 22.
.. note::

The references are also provided in the ``references.bib`` output file.

.. code-block:: bibtex
@Manual{bokehmanual,
title = {Bokeh: Python library for interactive visualization},
author = {{Bokeh Development Team}},
year = {2018},
url = {https://bokeh.pydata.org/en/latest/},
}
@article{dice1945measures,
title={Measures of the amount of ecologic association between species},
author={Dice, Lee R},
journal={Ecology},
volume={26},
number={3},
pages={297--302},
year={1945},
publisher={JSTOR},
url={https://doi.org/10.2307/1932409},
doi={10.2307/1932409}
}
@article{dupre2021te,
title={TE-dependent analysis of multi-echo fMRI with tedana},
author={DuPre, Elizabeth and Salo, Taylor and Ahmed, Zaki and Bandettini, Peter A and Bottenhorn, Katherine L and Caballero-Gaudes, C{\'e}sar and Dowdle, Logan T and Gonzalez-Castillo, Javier and Heunis, Stephan and Kundu, Prantik and others},
journal={Journal of Open Source Software},
volume={6},
number={66},
pages={3669},
year={2021},
url={https://doi.org/10.21105/joss.03669},
doi={10.21105/joss.03669}
}
@inproceedings{mckinney2010data,
title={Data structures for statistical computing in python},
author={McKinney, Wes and others},
booktitle={Proceedings of the 9th Python in Science Conference},
volume={445},
number={1},
pages={51--56},
year={2010},
organization={Austin, TX},
url={https://doi.org/10.25080/Majora-92bf1922-00a},
doi={10.25080/Majora-92bf1922-00a}
}
@article{pedregosa2011scikit,
title={Scikit-learn: Machine learning in Python},
author={Pedregosa, Fabian and Varoquaux, Ga{\"e}l and Gramfort, Alexandre and Michel, Vincent and Thirion, Bertrand and Grisel, Olivier and Blondel, Mathieu and Prettenhofer, Peter and Weiss, Ron and Dubourg, Vincent and others},
journal={the Journal of machine Learning research},
volume={12},
pages={2825--2830},
year={2011},
publisher={JMLR. org},
url={http://jmlr.org/papers/v12/pedregosa11a.html}
}
@article{posse1999enhancement,
title={Enhancement of BOLD-contrast sensitivity by single-shot multi-echo functional MR imaging},
author={Posse, Stefan and Wiese, Stefan and Gembris, Daniel and Mathiak, Klaus and Kessler, Christoph and Grosse-Ruyken, Maria-Liisa and Elghahwagi, Barbara and Richards, Todd and Dager, Stephen R and Kiselev, Valerij G},
journal={Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine},
volume={42},
number={1},
pages={87--97},
year={1999},
publisher={Wiley Online Library},
url={https://doi.org/10.1002/(SICI)1522-2594(199907)42:1<87::AID-MRM13>3.0.CO;2-O},
doi={10.1002/(SICI)1522-2594(199907)42:1<87::AID-MRM13>3.0.CO;2-O}
}
@software{reback2020pandas,
author = {The pandas development team},
title = {pandas-dev/pandas: Pandas},
month = feb,
year = 2020,
publisher = {Zenodo},
version = {latest},
doi = {10.5281/zenodo.3509134},
url = {https://doi.org/10.5281/zenodo.3509134}
}
@article{sorensen1948method,
title={A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on Danish commons},
author={Sorensen, Th A},
journal={Biol. Skar.},
volume={5},
pages={1--34},
year={1948}
}
@article{van2011numpy,
title={The NumPy array: a structure for efficient numerical computation},
author={Van Der Walt, Stefan and Colbert, S Chris and Varoquaux, Gael},
journal={Computing in science \& engineering},
volume={13},
number={2},
pages={22--30},
year={2011},
publisher={IEEE},
url={https://doi.org/10.1109/MCSE.2011.37},
doi={10.1109/MCSE.2011.37}
}
@article{virtanen2020scipy,
title={SciPy 1.0: fundamental algorithms for scientific computing in Python},
author={Virtanen, Pauli and Gommers, Ralf and Oliphant, Travis E and Haberland, Matt and Reddy, Tyler and Cournapeau, David and Burovski, Evgeni and Peterson, Pearu and Weckesser, Warren and Bright, Jonathan and others},
journal={Nature methods},
volume={17},
number={3},
pages={261--272},
year={2020},
publisher={Nature Publishing Group},
url={https://doi.org/10.1038/s41592-019-0686-2},
doi={10.1038/s41592-019-0686-2}
}
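The ``references.bib`` entries above follow standard BibTeX syntax, so their entry identifiers can be recovered with a simple pattern match. A hypothetical sketch in the spirit of ``tedana.bibtex.index_bibtex_identifiers`` (the real helper's name is listed in the API changes above, but its signature and behavior may differ from this illustration):

```python
import re

# Hypothetical sketch: list entry identifiers (e.g. 'dupre2021te') in a
# BibTeX string. Not the actual tedana.bibtex implementation.
ENTRY_PATTERN = re.compile(r"@\w+\{\s*([^,\s]+)\s*,")

def list_bibtex_keys(bibtex_text):
    """Return the entry keys found in a BibTeX string, in file order."""
    return ENTRY_PATTERN.findall(bibtex_text)

sample = """
@article{dupre2021te,
  title={TE-dependent analysis of multi-echo fMRI with tedana},
  year={2021},
}
@software{reback2020pandas,
  author = {The pandas development team},
  year = 2020,
}
"""
print(list_bibtex_keys(sample))
# ['dupre2021te', 'reback2020pandas']
```

Indexing keys this way lets the workflow copy only the entries that the boilerplate actually cites into the per-run ``references.bib`` output.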
4 changes: 1 addition & 3 deletions setup.cfg
@@ -41,6 +41,7 @@ doc =
sphinx>=1.5.3
sphinx_rtd_theme
sphinx-argparse
sphinxcontrib-bibtex
tests =
codecov
coverage<5.0
@@ -50,10 +51,7 @@ tests =
pytest
pytest-cov
requests
duecredit =
duecredit
all =
%(duecredit)s
%(doc)s
%(tests)s

32 changes: 0 additions & 32 deletions tedana/__init__.py
@@ -8,42 +8,10 @@
import warnings

from ._version import get_versions
from .due import BibTeX, Doi, due

__version__ = get_versions()["version"]

# cmp is not used, so ignore nipype-generated warnings
warnings.filterwarnings("ignore", r"cmp not installed")

# Citation for the package JOSS paper.
due.cite(
Doi("10.21105/joss.03669"),
description="Publication introducing tedana.",
path="tedana",
cite_module=True,
)

# Citation for the algorithm.
due.cite(
Doi("10.1016/j.neuroimage.2011.12.028"),
description="Introduces MEICA and tedana.",
path="tedana",
cite_module=True,
)
due.cite(
Doi("10.1073/pnas.1301725110"),
description="Improves MEICA and tedana.",
path="tedana",
cite_module=True,
)

# Citation for package version.
due.cite(
Doi("10.5281/zenodo.1250561"),
description="The tedana package",
version=__version__,
path="tedana",
cite_module=True,
)

del get_versions