
[REVIEW]: TorchSurv: A Lightweight Package for Deep Survival Analysis #7341

Closed · editorialbot opened this issue Oct 10, 2024 · 130 comments

Labels: accepted · published (Papers published in JOSS) · Python · R · recommend-accept (Papers recommended for acceptance in JOSS) · review · TeX · Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning

Comments

@editorialbot

editorialbot commented Oct 10, 2024

Submitting author: @melodiemonod (Mélodie Monod)
Repository: https://github.com/Novartis/torchsurv
Branch with paper.md (empty if default branch): main
Version: v0.1.4
Editor: @kanishkan91
Reviewers: @XinyiEmilyZhang, @rich2355
Archive: 10.5281/zenodo.14517267

Status

[status badge]

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/02d7496da2b9cc34f9a6e04cabf2298d"><img src="https://joss.theoj.org/papers/02d7496da2b9cc34f9a6e04cabf2298d/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/02d7496da2b9cc34f9a6e04cabf2298d/status.svg)](https://joss.theoj.org/papers/02d7496da2b9cc34f9a6e04cabf2298d)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@WeakCha & @LingfengLuo0510, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @kanishkan91 know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @XinyiEmilyZhang

📝 Checklist for @rich2355

@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.90  T=0.18 s (378.3 files/s, 67362.1 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          26           1224           2070           4135
Markdown                         7            266              0            681
TeX                              2             47              0            540
R                                7            221            158            520
Jupyter Notebook                 2              0           1551            238
YAML                             6             34              6            201
TOML                             1              8              0             49
Bourne Shell                     4             20              3             40
reStructuredText                 5             16             37             13
make                             1              5              8             10
JSON                             7              0              0              7
-------------------------------------------------------------------------------
SUM:                            68           1841           3833           6434
-------------------------------------------------------------------------------

Commit count by author:

    29	Peter Krusche
    18	Thibaud Coroller
    11	corolth1
     9	melodiemonod
     7	Mélodie Monod
     2	Peter Krusche (Novartis)
     1	Ikko Eltociear Ashimine

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.21105/joss.01317 is OK
- 10.32614/CRAN.package.survival is OK
- 10.48550/arXiv.1912.01703 is OK
- 10.5281/zenodo.3352342 is OK
- 10.32614/CRAN.package.survAUC is OK
- 10.1109/cvpr42600.2020.00975 is OK
- 10.1186/s12874-018-0482-1 is OK
- 10.48550/arXiv.2204.07276 is OK
- 10.1007/978-1-4612-4380-9_37 is OK
- 10.1016/s0197-2456(03)00072-2 is OK
- 10.32614/CRAN.package.survival is OK
- 10.32614/CRAN.package.survival is OK
- 10.32614/CRAN.package.survAUC is OK
- 10.32614/CRAN.package.timeROC is OK
- 10.32614/CRAN.package.risksetROC is OK
- 10.32614/CRAN.package.survivalROC is OK
- 10.1093/bioinformatics/btr511 is OK
- 10.32614/CRAN.package.riskRegression is OK
- 10.32614/CRAN.package.SurvMetrics is OK
- 10.32614/CRAN.package.pec is OK
- 10.1111/j.0006-341x.2000.00337.x is OK
- 10.1111/j.0006-341x.2005.030814.x is OK
- 10.1002/bimj.201200045 is OK
- 10.1198/016214507000000149 is OK
- 10.1093/biostatistics/kxy006 is OK
- 10.1002/sim.4154 is OK
- 10.1002/(sici)1097-0258(19960229)15:4<361::aid-sim168>3.0.co;2-4 is OK
- 10.1002/(sici)1097-0258(19990915/30)18:17/18<2529::aid-sim274>3.0.co;2-5 is OK
- 10.1080/01621459.1977.10480613 is OK
- 10.2307/1402659 is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: Time-to-Event Prediction with Neural Networks and ...
- No DOI given, and none found for title: The Weibull Distribution

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@editorialbot

Paper file info:

📄 Wordcount for paper.md is 1374

✅ The paper includes a Statement of need section

@editorialbot

License info:

✅ License found: MIT License (Valid open source OSI approved license)

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@kanishkan91

@melodiemonod, @WeakCha, @LingfengLuo0510, This is the review thread for the paper. All of our communications will happen here from now on.

Please read the "Reviewer instructions & questions" in the first comment above.

For @WeakCha, @LingfengLuo0510 - Both reviewers have checklists at the top of this thread (in that first comment) with the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

As you are probably already aware, The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention #7341 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for the review process to be completed within about 4-6 weeks but please make a start well ahead of this as JOSS reviews are by their nature iterative and any early feedback you may be able to provide to the author will be very helpful in meeting this schedule.

Thanks in advance and let me know if you have any questions!!

@rich2355

rich2355 commented Oct 15, 2024

Review checklist for @WeakCha

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/Novartis/torchsurv?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@melodiemonod) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1. Contribute to the software 2. Report issues or problems with the software 3. Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@rich2355

rich2355 commented Oct 18, 2024

@melodiemonod @kanishkan91

Thank you so much for this package, this work looks super interesting! Here are my initial comments:

For the paper:

  1. Adding an example after you introduce the functionality would be great. The example does not need to be comprehensive, but it should run without errors. Your tutorial does a great job of this, but people may read your paper first and look for something easy to try.
  2. There are some typos in your paper. For example, scitkit-survival should be scikit-survival in Figure 1, MyPyTorcWeibullhModel should be MyPyTorchWeibullModel (between line 46 and 47), and MyPyTorchXCoxModel should be MyPyTorchCoxModel (between line 51 and 52).

For your tutorial (https://opensource.nibr.com/torchsurv/notebooks/introduction.html), could you please show what helpers_introduction is? I assume it is some supporting code, but I cannot find it. I will give further comments once I can reproduce your tutorial.

Thank you so much for your work! I am looking forward to reviewing your updated paper and package!

@tcoroller

tcoroller commented Oct 21, 2024

Dear @WeakCha,

Thank you for your early comments. Regarding your last one (helpers_*), would it be OK with you if we added the line below to the notebooks, with a link to the helper file itself?

For the first notebook (introduction), the helpers_introduction line will be the following:

# PyTorch boilerplate - see https://github.com/Novartis/torchsurv/blob/main/docs/notebooks/helpers_introduction.py
from helpers_introduction import Custom_dataset, plot_losses

I would discourage adding the code directly to the notebooks, because it would overcrowd them:

  1. It would need to be at the beginning of the notebook to be properly loaded and run, which is awkward because one helper (helpers_momentum) is ~200 lines of code.
  2. They are simply boilerplate PyTorch code with no relationship to TorchSurv: well-established functionality (models, datasets, loss plotting).

If our suggestion isn't satisfactory, I can look into other options (one could be to hide the cell, but that requires another package and thus adds a dependency).

@tcoroller

Hello everyone,

We have modified the package according to the first round of comments. Please use the main (default) branch for the review, as we will keep it up to date with your comments.

Thanks

@rich2355

rich2355 commented Oct 31, 2024

Hi @tcoroller Thanks for the update! My further comments about your tutorial are listed below:

  1. I wonder whether the learning rate of 10^{-2} is too high for your Cox PH experiment, as the loss fluctuates strongly on both the training and test data. For a cleaner visualization you could try a lower learning rate (a one-line sketch follows this list).
  2. Your helper functions cannot be imported after installing torchsurv. I found your discussion, and I think the most practical approach is to add instructions/comments to the notebook on how to make the helper functions work (personally, I would simply copy the helper functions into my own notebook, even though that makes it longer), so that readers are not confused and do not have to reconstruct the helpers themselves. I have no preference on whether you add the helper functions explicitly to the notebook.
  3. I ran your MNIST tutorial in an environment without a GPU and with 12 GB of RAM, and it crashed. Do you have any comments on this? I suspect a large amount of memory is needed to reproduce this tutorial, and you could add some information about the requirements.
  4. Have you run validity tests for your package? For example, your model results should be comparable with those of other established packages on the same dataset, with the same training/test split, the same deep-learning architecture, and the same loss function. A notebook like this would be highly welcome, but I would not insist on it, as the request may be time-consuming, and I could also spend time checking your implementations myself. What are your thoughts?
  5. There are some display choices that might not be the most suitable. For example, in the "Dependency" section of your introduction tutorial, this sentence:

"the recommanded method is to use our developpment conda environment (preferred)."

seems to emphasize the word "preferred" with a code font. I guess you meant to use bold or italic instead? There are many words like this in your tutorials.

  6. There are also some grammar errors and typos; for example, "recommanded" should be "recommended", and "this notebooks" should be "this notebook".
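
For concreteness on point 1, the change I have in mind is a one-liner in any standard PyTorch setup (just a sketch; the linear model below is a stand-in for whatever network the notebook trains):

import torch

model = torch.nn.Linear(10, 1)  # stand-in for the notebook's network
# e.g., drop the learning rate from 1e-2 to 1e-3 for a smoother loss curve
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)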

I will get back to you with paper reviews if any. Thanks a lot for your work!

@rich2355

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@rich2355

@tcoroller @melodiemonod @kanishkan91

It seems that your paper is unchanged. I suspect this may be an error, as I found an updated version here: https://github.com/Novartis/torchsurv/blob/58-comments-joss-241018/paper/paper.md, but I cannot get it when following the prompt from the editorialbot.

@tcoroller

Dear @WeakCha,

Thank you for your comments, please see my replies below:

  1. Learning rate: You are probably referring to the introduction notebook. Indeed, the Cox model isn't doing well, with a poor-looking training loss profile. However, the notebook also shows the Weibull model doing quite well and ultimately compares the two to illustrate our functions, such as the key function cindex.compare() (a brief usage sketch appears below this list). The notebook is meant to illustrate the functions rather than push for performance, and having the Cox model perform poorly served that purpose. Please let me know if that clarifies things and gives you better context.
  2. helper_functions: Sorry for the confusion; those functions are not part of TorchSurv but only support the notebooks. Installing TorchSurv will not make them available: they are modules sitting directly next to the notebooks, so the imports are relative and should run directly. You need to make sure you have all the files together, or better, clone the whole repository to be safe. Please see the screenshot below.
[Screenshot, 2024-10-31: the notebook imports running successfully alongside the helper files]
  3. MNIST: Apologies for the trouble running this notebook. I built this example on an M2 MacBook with 64 GB of memory, so I did not think much about batch-size limitations. Several colleagues have tried it on HPC, though I assume with GPUs. I will add a code snippet that lowers the batch size (currently 500) to 50 or less, to make sure it runs smoothly on smaller, CPU-only machines (see the sketch below this list).
  4. Model comparison: Thank you for this comment; ensuring that our package is comparable against established R/Python packages is a very important part of our work. During the comparison we focused on simple models (a single neuron, batch size equal to the dataset size) to mimic other packages and validate the code. We did not compare our method against other deep-learning packages, simply because most are deprecated (last commit >3 years ago), poorly documented, or out of scope. We have an extensive list of comparisons for all our metrics that can be found here. @melodiemonod is the lead for this piece and can comment further.
  5. Formatting: Thank you for the comment, and sorry for the inconsistent formatting. We will review and edit the documents accordingly.
  6. Typos: Thank you for the comment, and sorry for the typos. We will review and edit the documents accordingly.
  7. Paper: Thank you for raising the issue of the paper not being updated. We are now working on the main branch, but I think the editorialbot is still looking at an older branch. @kanishkan91, could you help update the "Branch with paper.md" to main?
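
To make the cindex.compare() reference in point 1 concrete, here is a rough sketch of the comparison step, assuming the ConcordanceIndex API as shown in our documentation (the risk scores and shapes below are illustrative, not taken from the notebook):

import torch
from torchsurv.metrics.cindex import ConcordanceIndex

event = torch.randint(0, 2, (64,)).bool()  # event indicators
time = torch.rand(64) * 100                # observed times
risk_weibull = torch.randn(64)             # risk scores from the Weibull model
risk_cox = torch.randn(64)                 # risk scores from the Cox model

cindex_weibull = ConcordanceIndex()
cindex_weibull(risk_weibull, event, time)  # evaluate each model's c-index
cindex_cox = ConcordanceIndex()
cindex_cox(risk_cox, event, time)

# One-sided test of whether the Weibull c-index exceeds the Cox c-index
p_value = cindex_weibull.compare(cindex_cox)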
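
And for point 3, the batch-size adjustment I have in mind would look roughly like this (a sketch only; the dummy tensors stand in for the notebook's MNIST data):

import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in for the MNIST images/labels used in the notebook
dataset = TensorDataset(torch.randn(1000, 1, 28, 28), torch.randint(0, 10, (1000,)))

# Keep the batch size of 500 on GPU, fall back to 50 on CPU-only machines
batch_size = 500 if torch.cuda.is_available() else 50
loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)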

Thank you,
Thibaud

@melodiemonod

To expand on point 4 about model comparison: our package is designed to provide all the functions needed to assist in model fitting (i.e., the negative log-likelihood functions) and to evaluate performance (i.e., the evaluation-metric functions), rather than functions for directly fitting a model. Unlike packages such as deepsurv, we do not provide a fit function for direct model training. Instead, our package focuses on offering users maximum flexibility, allowing them to integrate our functions seamlessly into their own PyTorch-based model-fitting workflows. As a result, comparing model-fitting outcomes with another algorithm falls outside the scope of our package. However, we have included a comparison to demonstrate that our likelihood and metric functions produce results consistent with established packages. This is the notebook that Thibaud mentioned above.
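
As a minimal sketch of what that integration looks like (assuming the Cox loss exposed as torchsurv.loss.cox.neg_partial_log_likelihood, as in our documentation; the model and tensors below are illustrative):

import torch
from torchsurv.loss import cox

model = torch.nn.Linear(16, 1)  # any user-defined PyTorch model producing log hazards
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(64, 16)                    # covariates
event = torch.randint(0, 2, (64,)).bool()  # event indicators
time = torch.rand(64) * 100                # observed times

# One ordinary PyTorch training step; TorchSurv only supplies the loss
optimizer.zero_grad()
log_hz = model(x)
loss = cox.neg_partial_log_likelihood(log_hz, event, time)
loss.backward()
optimizer.step()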

@rich2355

@tcoroller @melodiemonod Thank you so much for your replies; I agree with most of them. I will wait to review:

  1. Your MNIST example.
  2. The updated paper and tutorial (without formatting issues).

Again, thanks a lot for your work!

@melodiemonod

@editorialbot commands

@editorialbot

Hello @melodiemonod, here are the things you can ask me to do:


# List all available commands
@editorialbot commands

# Get a list of all editors' GitHub handles
@editorialbot list editors

# Adds a checklist for the reviewer using this command
@editorialbot generate my checklist

# Set a value for branch
@editorialbot set joss-paper as branch

# Run checks and provide information on the repository and the paper file
@editorialbot check repository

# Check the references of the paper for missing DOIs
@editorialbot check references

# Generates the pdf paper
@editorialbot generate pdf

# Generates a LaTeX preprint file
@editorialbot generate preprint

# Get a link to the complete list of reviewers
@editorialbot list reviewers

@editorialbot

@WeakCha removed from the reviewers list!

@kanishkan91

kanishkan91 commented Dec 20, 2024

@rich2355 - This should be fixed (your new name is added). To be safe, could you regenerate your checklist and complete it? It's just a safety measure.

@kanishkan91

@tcoroller, @melodiemonod - I have now recommended this for acceptance. I will read through the paper for typos etc. shortly. The AEiC for this submission track will then review, and if all goes well this will go live soon! A big thank you to @rich2355 and @XinyiEmilyZhang for reviewing! JOSS is volunteer-run and relies heavily on researchers such as yourselves.

@rich2355

rich2355 commented Dec 20, 2024

Review checklist for @rich2355

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/Novartis/torchsurv?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@melodiemonod) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1. Contribute to the software 2. Report issues or problems with the software 3. Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@rich2355

@kanishkan91 Done!

@crvernon

crvernon commented Dec 20, 2024

@editorialbot generate pdf

🔍 checking out the following:

  • reviewer checklists are completed or addressed
  • version set
  • archive set
  • archive names (including order) and title in archive match those specified in the paper
  • archive uses the same license as the repo and is OSI approved as open source
  • archive DOI and version match or redirect to those set by editor in review thread
  • paper is error free - grammar and typos
  • paper is error free - test links in the paper and bib
  • paper is error free - refs preserve capitalization where necessary
  • paper is error free - no invalid refs without justification

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@kanishkan91

@melodiemonod @tcoroller - One last minor comment from my side: the paper is a bit too long. We generally recommend only 3-4 pages of text for a software paper. I recommend removing the "Comprehensive example" section starting on page 4; this is probably covered in your documentation.

@melodiemonod

Dear @kanishkan91, thank you for your feedback. We are open to removing the example, provided that @rich2355, who originally suggested it, agrees. Please let us know your thoughts so we can proceed accordingly.

@rich2355

rich2355 commented Dec 20, 2024 via email

@melodiemonod

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@melodiemonod

Dear @kanishkan91, we have removed the comprehensive example from the manuscript. Please see the updated version above.

@kanishkan91

Thanks. The EiC will take care of this paper from here forward.

@crvernon

crvernon commented Dec 20, 2024

👋 @melodiemonod - I just need you to address the following before I move to accept this for publication:

In the archive:

  • The names listed in your Zenodo archive do not match the names listed in your paper. Please edit the Zenodo metadata to ensure that the names and their order match what is in the paper exactly.

In the paper:

  • LINE 93: "pytorch" should be written as "PyTorch". You can maintain capitalization in your bib file by using curly brackets around the characters you wish to maintain formatting of.
  • LINE 101: the "w" in "weibull" should be capitalized
  • LINE 107: the "p" in "python" should be capitalized
  • LINE 109: the "c" in "cox's" should be capitalized
  • LINE 126: the "c" in "cox" should be capitalized
  • LINE 130 the "c" in "cox" should be capitalized
  • LINE 144 the "c" in "c-statistic" should be capitalized

After you make these changes let me know. Thanks.

@melodiemonod

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@melodiemonod

Dear @crvernon, thank you for your comments. All the changes, on the Zenodo and on the manuscript, have been made.

@crvernon

@editorialbot accept

@editorialbot

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Monod
  given-names: Mélodie
  orcid: "https://orcid.org/0000-0001-6448-2051"
- family-names: Krusche
  given-names: Peter
  orcid: "https://orcid.org/0009-0003-2541-5181"
- family-names: Cao
  given-names: Qian
- family-names: Sahiner
  given-names: Berkman
- family-names: Petrick
  given-names: Nicholas
- family-names: Ohlssen
  given-names: David
- family-names: Coroller
  given-names: Thibaud
  orcid: "https://orcid.org/0000-0001-7662-8724"
contact:
- family-names: Coroller
  given-names: Thibaud
  orcid: "https://orcid.org/0000-0001-7662-8724"
doi: 10.5281/zenodo.14517267
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Monod
    given-names: Mélodie
    orcid: "https://orcid.org/0000-0001-6448-2051"
  - family-names: Krusche
    given-names: Peter
    orcid: "https://orcid.org/0009-0003-2541-5181"
  - family-names: Cao
    given-names: Qian
  - family-names: Sahiner
    given-names: Berkman
  - family-names: Petrick
    given-names: Nicholas
  - family-names: Ohlssen
    given-names: David
  - family-names: Coroller
    given-names: Thibaud
    orcid: "https://orcid.org/0000-0001-7662-8724"
  date-published: 2024-12-30
  doi: 10.21105/joss.07341
  issn: 2475-9066
  issue: 104
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 7341
  title: "TorchSurv: A Lightweight Package for Deep Survival Analysis"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.07341"
  volume: 9
title: "TorchSurv: A Lightweight Package for Deep Survival Analysis"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot

🦋🦋🦋 👉 Bluesky post for this paper 👈 🦋🦋🦋

@editorialbot

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.07341 joss-papers#6289
  2. Wait five minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.07341
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot added the accepted and published labels on Dec 30, 2024
@crvernon

🥳 Congratulations on your new publication @melodiemonod! Many thanks to @kanishkan91 for editing and @XinyiEmilyZhang and @rich2355 for your time, hard work, and expertise!! JOSS wouldn't be able to function nor succeed without your efforts.

Please consider becoming a reviewer for JOSS if you are not already: https://reviewers.joss.theoj.org/join

@editorialbot

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README, use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.07341/status.svg)](https://doi.org/10.21105/joss.07341)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.07341">
  <img src="https://joss.theoj.org/papers/10.21105/joss.07341/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.07341/status.svg
   :target: https://doi.org/10.21105/joss.07341

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@melodiemonod

We are delighted by this news. Many thanks for your helpful reviews and for ensuring a swift reviewing process. Wishing you all lovely holidays.
