
[REVIEW]: ollamar: An R Package for running large language models #7211

Closed
editorialbot opened this issue Sep 10, 2024 · 66 comments
Labels: accepted, published, R, recommend-accept, review, TeX, Track: 5 (DSAIS) Data Science, Artificial Intelligence, and Machine Learning

Comments

@editorialbot
Collaborator

editorialbot commented Sep 10, 2024

Submitting author: @hauselin (Hause Lin)
Repository: https://github.com/hauselin/ollama-r
Branch with paper.md (empty if default branch): joss
Version: v1.2.2
Editor: @crvernon
Reviewers: @KennethEnevoldsen, @elenlefoll
Archive: 10.5281/zenodo.14728444

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/8777293d4e8ad448fea1520b780387d6"><img src="https://joss.theoj.org/papers/8777293d4e8ad448fea1520b780387d6/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/8777293d4e8ad448fea1520b780387d6/status.svg)](https://joss.theoj.org/papers/8777293d4e8ad448fea1520b780387d6)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@KennethEnevoldsen & @elenlefoll, your review will be checklist-based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all, you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @crvernon know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @KennethEnevoldsen

📝 Checklist for @elenlefoll

@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.90  T=0.02 s (1603.8 files/s, 190658.8 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
R                               20            494            640           1413
Markdown                         8            178              0            587
YAML                             4             26              9            160
Rmd                              1            112            195            125
TeX                              1             14              0             89
-------------------------------------------------------------------------------
SUM:                            34            824            844           2374
-------------------------------------------------------------------------------

Commit count by author:

   139	Hause Lin
     9	Tawab Safi

@editorialbot
Collaborator Author

Paper file info:

📄 Wordcount for paper.md is 1189

✅ The paper includes a Statement of need section

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.48550/arXiv.2408.11707 is OK
- 10.1002/widm.1531 is OK
- 10.48550/arXiv.2404.07654 is OK
- 10.48550/arXiv.2408.05933 is OK
- 10.48550/arXiv.2403.12082 is OK
- 10.48550/arXiv.2408.11847 is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: Enhancing propaganda detection with open source la...

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@editorialbot
Collaborator Author

License info:

🟡 License found: Other (Check here for OSI approval)

@crvernon

👋 @hauselin, @elenlefoll, and @KennethEnevoldsen - This is the review thread for the paper. All of our communications will happen here from now on.

Please read the "Reviewer instructions & questions" in the first comment above.

Both reviewers have checklists at the top of this thread (in that first comment) with the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention #7211 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for the review process to be completed within about 4-6 weeks, but please make a start well ahead of this: JOSS reviews are by nature iterative, and any early feedback you can provide to the author will be very helpful in meeting this schedule.

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@KennethEnevoldsen

KennethEnevoldsen commented Sep 10, 2024

Review checklist for @KennethEnevoldsen

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/hauselin/ollama-r?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@hauselin) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@KennethEnevoldsen

  • There seem to be two license files. I believe these can be combined into one.
  • Substantial scholarly effort: I believe this is a borderline case. This package is quite young and provides a wrapper around Ollama. I believe this in itself is quite valuable, but I am unsure whether there is a workflow in place to ensure maintainability and compatibility. This could, e.g., be ensured using a scheduled test. I would love a section added to the paper on this. I would suggest adding it in place of the tutorials, which are best kept in the documentation (as they will become outdated).
  • Installation: I have created an issue here: Broken link in the readme hauselin/ollama-r#25
  • State of the field: I believe the comparison to existing packages is lacking. From the paper I am unsure what rollama lacks that ollama-r adds. It would be nice if this were clearer. I also believe there are other packages seeking to provide this functionality, such as tidychatmodels, which are not currently included.
  • Quality of writing: While I appreciate the simple usage examples, I would remove these from the paper and instead spend some time on the reasoning behind the implementation.
  • References: see state of the field
  • Documentation: I believe the readability of the documentation can be improved. For instance, there is a header called "Notes", which seems like it should be reformatted. Fold out menus could also be used to allow for ease of navigation.
  • Community guidelines: can't find any community guidelines

@hauselin

Thanks @KennethEnevoldsen for your review/comments! I've made changes to the repo/doc to address your comments, which definitely clarified things a lot. Let me know if they've addressed your comments (see responses below). Your other comments relate to the paper itself, which I'll address later.

  • There seem to be two license files. I believe these can be combined into one.

The R community has workflows/package structures that produce multiple licenses that usually aren't combined into one; ollamar closely follows the conventions of the R community. For example, see the multiple licenses in ggplot2 and dplyr, two of the most used R libraries.
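
For illustration, here is a minimal sketch of the usethis workflow that typically produces those two files (assuming an MIT-style license; this is an example of the convention, not a record of how ollamar's files were created):

# Illustrative sketch only: the usual usethis workflow creates both LICENSE
# (the short form CRAN expects) and LICENSE.md (the full license text).
# install.packages("usethis")   # if not already installed
library(usethis)

# Run inside the package directory; the copyright holder name is a placeholder.
use_mit_license("Package Author")
# This writes:
#   DESCRIPTION -> License: MIT + file LICENSE
#   LICENSE     -> a two-line file with the year and copyright holder
#   LICENSE.md  -> the full MIT license text (added to .Rbuildignore)

Roughly speaking, CRAN reads the short LICENSE file while GitHub displays LICENSE.md, so the two files are not redundant.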

  • Substantial scholarly effort: I believe this is a borderline case. This package is quite young and provides a wrapper around Ollama. I believe this in itself is quite valuable, but I am unsure whether there is a workflow in place to ensure maintainability and compatibility. This could, e.g., be ensured using a scheduled test. I would love a section added to the paper on this. I would suggest adding it in place of the tutorials, which are best kept in the documentation (as they will become outdated).

Glad you think it's valuable! There are several workflows in place. First, the package already has GitHub continuous integration and deployment, so it is tested on macOS/Linux/Windows whenever there are changes to the repository (there are many test cases, which are also run whenever the repo updates). Second, because it's hosted on R's CRAN, the same tests are also being run regularly on CRAN's servers to ensure maintainability and compatibility (see the regular test results here; note that on 2024-09-10, a few CRAN servers were down, resulting in failed tests on certain Linux machines, and the test results page might not load). Note also that for a library to be hosted on CRAN, it has to satisfy many strict requirements regarding maintainability and compatibility (otherwise, CRAN will inform the author and take it down).
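
For readers less familiar with the R tooling mentioned here, a minimal sketch of how the same checks are typically run locally (assuming the devtools and testthat packages; this mirrors the CI and CRAN checks described above rather than replacing them):

# Illustrative sketch of the standard local check workflow for an R package.
# install.packages(c("devtools", "testthat"))   # if needed
library(devtools)

test()    # run the package's testthat suite (what CI runs on every push)
check()   # run R CMD check: builds the package, runs the tests, and checks
          # documentation and metadata, much as CRAN's servers do regularly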

  • Documentation: I believe the readability of the documentation can be improved. For instance, there is a header called "Notes", which seems like it should be reformatted. Fold out menus could also be used to allow for ease of navigation.
  • Community guidelines: can't find any community guidelines

I've restructured the site so the home page focuses on installation and basic usage (I also added a table of contents on the right). I've also added a Get started page that uses fold-out menus for ease of navigation. The old "Notes" section no longer exists and has been integrated into the rest of the documentation. There's a new Community section in the right sidebar that links to the contributing guide and code of conduct.

I've updated the installation instructions so they are clearer, especially for different operating systems.

@KennethEnevoldsen

KennethEnevoldsen commented Sep 10, 2024

thanks for the quick fixes!

The R community has workflows/package structures that produce multiple licenses that usually aren't combined into one
Second, because it's hosted on R's CRAN, the same tests are also being run regularly on CRAN's servers to ensure maintainability and compatibility

Thanks for the clarification. It has been a while since I did packages in R, and they were only for internal projects, so I didn't know CRAN regularly ran tests (def. nice to know).

Just to clear up any worries I have: What happens if the Ollama community pushes an update with breaking changes? As I understand it, that would require an update from you. It might be ideal to add information about compatible versions to allow users to resolve compatibility issues.

I've made changes to the repo/doc to address your comments, which definitely clarified things a lot

I totally agree, makes navigation noticeably easier. Once the updates for the paper are in I will do a full run-through of code examples in the docs and run the tests.

Optional:

  • I note that you do not use the citation.cff file for citations. Would recommend adding it, but it is def. up to you.
  • To increase the visibility of the package it might be worth checking if the ollama folks want to add your library to their list of libraries. Understand if you would rather do this after the review.

@hauselin

hauselin commented Sep 10, 2024

Just to clear up any worries I have: What happens if the Ollama community pushes an update with breaking changes? As I understand it, that would require an update from you. It might be ideal to add information about compatible versions to allow users to resolve compatibility issues.

I've added the versions that have been tested in the updated README (https://github.com/hauselin/ollama-r/blob/4fca9c0546b45e7ea998e600e8112de17e028340/README.md?plain=1#L32C1-L36C16). Let me know if this is good, @KennethEnevoldsen. Ollama should be relatively stable (86k stars and almost 7k forks), so it's unlikely they'll introduce breaking changes. But if they do, the (official) Python and JS libraries (and the hundreds of apps/tools that have already been built on top of it) will break too. ollamar's design philosophy is similar to these two libraries', and the package is modular and follows good software design practices, so it should not be difficult to update.
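
For a concrete sense of what this looks like in practice, here is a minimal usage sketch (function names follow the ollamar documentation; "llama3.1" is only an example model tag, so treat this as a sketch and check the README for the authoritative examples):

# Minimal sketch: verify the local Ollama server, pull a model, and generate.
library(ollamar)

test_connection()    # check that the local Ollama server is reachable
pull("llama3.1")     # download the model from the Ollama library
list_models()        # confirm the model is available locally

# generate() returns an httr2 response by default; resp_process() converts it.
resp <- generate("llama3.1", "Briefly explain what a local LLM is.")
resp_process(resp, "text")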

Regarding your two optional comments:

To increase the visibility of the package it might be worth checking if the ollama folks want to add your library to their list of libraries. Understand if you would rather do this after the review.

It actually already is in their list of libraries (but in a different section). I think the section you referred to is for their official libraries (other libraries are listed lower down on the same page).

I note that you do not use the citation.cff file for citations. Would recommend adding it, but it is def. up to you.

There is a citation file here (again, it's located in this directory because I'm following R package development conventions). It is picked up by GitHub though (if you go to the main page, you can see "Cite this repository" in the right sidebar). If I add a CITATION.cff, the automated tests flag it and leave this note: Found the following CITATION file in a non-standard place: CITATION.cff. Most likely ‘inst/CITATION’ should be used instead.
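
For anyone curious, an inst/CITATION file is itself R code that citation("ollamar") evaluates. A minimal sketch along these lines (the field values below are placeholders, not necessarily the exact contents of ollamar's file):

# Illustrative inst/CITATION sketch; field values are placeholders.
bibentry(
  bibtype = "Article",
  title   = "ollamar: An R package for running large language models",
  author  = c(person("Hause", "Lin"), person("Tawab", "Safi")),
  journal = "Journal of Open Source Software",
  year    = "2025",
  doi     = "10.21105/joss.07211"
)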

@crvernon

crvernon commented Oct 8, 2024

👋 @hauselin, @elenlefoll, and @KennethEnevoldsen - just checking in to see how things are going with this review. Could you each post a short update here?

Also, @elenlefoll I don't see that you have created your checklist yet. Are you still able to conduct this review?

Thanks!

@KennethEnevoldsen

I am waiting for the updated article draft, as noted here:

Once the updates for the paper are in I will do a full run-through of code examples in the docs and run the tests.

Notably the missing state-of-the-field and design considerations, as they allow me to evaluate whether the code lives up to the intent.

However, I understand that @hauselin was waiting for the second review before making too many changes.

@hauselin

hauselin commented Oct 8, 2024

Yes @crvernon I've already revised the codebase based on @KennethEnevoldsen's suggestions. What's left are changes to the paper itself, and waiting for the second review before revising the paper makes more sense to me (but @crvernon, if you think it makes sense for me to revise the paper at this point too, let me know).

@elenlefoll

elenlefoll commented Oct 26, 2024

Review checklist for @elenlefoll

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/hauselin/ollama-r?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@hauselin) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1. Contribute to the software 2. Report issues or problems with the software 3. Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@elenlefoll

A couple of minor comments, having now read the paper:

"Locally deployed LLMs offer advantages in terms of data privacy, security, and customization, making them an attractive option for many users (Chan et al., 2024; Liu et al., 2024; Lytvyn, 2024; Shostack, 2024)" --> I was surprised not to see reproducibility mentioned anywhere in the paper as an advantage of locally deployed LLMs.

"To use Ollama, you must first download the model you want to use from https://ollama.com/li-53
brary." --> The wording of this sentence could be improved to make clear that the pull() function automatically downloads models from this website.

"ollamar fills a critical gap in the R ecosystem by providing a native interface to run locally deployed LLMs" --> I personally feel that this sentence is somewhat misleading since other R libraries do exist to run LLMs locally via R. Only rollama is mentioned a couple of sentences later.

@elenlefoll

I have now checked off most boxes of my review and only have a few concerns that largely overlap with Reviewer 1:

Substantial scholarly effort: The package offers many useful functions that are well documented but my understanding is that it is exclusively a wrapper around ollama and I am therefore unsure whether JOSS considers this to correspond to a sufficiently substantial scholarly effort. This might be a question for the editor(s) (@crvernon?) to clarify.

State of the field: The statement of need is about accessing locally deployed LLMs in R and gives the impression that only one other R package currently exists for this task. The package mentioned is also a wrapper for Ollama, and it is not clear from the paper how the current package differs.

References: These will need to be updated once the state-of-the-field section has been fleshed out.

Everything else I'm happy with!

@crvernon

Hi @elenlefoll - Concerning the substantial scholarly effort question you raised: I let this submission go through pre-screening on the grounds of "...makes addressing research challenges significantly better (e.g., faster, easier, simpler)."

@hauselin

hauselin commented Nov 15, 2024

Thanks for clarifying, @crvernon. I'll address both reviewers' concerns in the next day or two. Thanks again, everyone!

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈


@hauselin

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.48550/arXiv.2408.11707 is OK
- 10.1002/widm.1531 is OK
- 10.48550/arXiv.2404.07654 is OK
- 10.48550/arXiv.2408.05933 is OK
- 10.48550/arXiv.2403.12082 is OK
- 10.48550/arXiv.2408.11847 is OK
- 10.32614/cran.package.tidyllm is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: Enhancing propaganda detection with open source la...
- No DOI given, and none found for title: tidychatmodels: Chat with all kinds of AI models t...

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@hauselin

Hi @crvernon, is there anything I need to do to move this along? Thanks!

@crvernon

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@crvernon

crvernon commented Jan 23, 2025

👋 @hauselin - we are almost there!

I just have one thing I need you to address in the paper:

  • LINE 114: Maintain capitalization of the "h" and "p" in "harry potter" and capitalize "llm" - both of which can be done in your bib file by wrapping any characters you wish to maintain the formatting of in curly brackets.

Next is just setting up the archive for your new release.

We want to make sure the archival has the correct metadata that JOSS requires. This includes a title that matches the paper title and a correct author list.

So here is what we have left to do:

  • Conduct a GitHub release of the current reviewed version of the software and archive the reviewed software in Zenodo or a similar service (e.g., figshare, an institutional repository). Please ensure that the software archive uses the same license as the license you have posted on GitHub.

  • Check the archival deposit (e.g., in Zenodo) to ensure it has the correct metadata. This includes the title (should match the paper title) and author list (should also match your paper exactly). You may also add the authors' ORCIDs.

  • Please respond with the DOI of the archived version and the version number of the release here.

I can then move forward with accepting the submission.

@hauselin

I've changed the capitalization.

GitHub release of the current reviewed version: https://github.com/hauselin/ollama-r/releases/tag/v1.2.2

Zenodo DOI: 10.5281/zenodo.14728444

Version number of release: v1.2.2

Thanks @crvernon!

@hauselin

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@crvernon

@editorialbot set v1.2.2 as version

@editorialbot
Collaborator Author

Done! version is now v1.2.2

@crvernon

@editorialbot set 10.5281/zenodo.14728444 as archive

@editorialbot
Collaborator Author

Done! archive is now 10.5281/zenodo.14728444

@crvernon

@hauselin - you still need to edit the license metadata of your Zenodo archive so it matches the license in your GitHub repository. No new release required.

@hauselin

Updated Zenodo archive license. @crvernon

@crvernon

@editorialbot recommend-accept

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.48550/arXiv.2408.11707 is OK
- 10.1002/widm.1531 is OK
- 10.48550/arXiv.2404.07654 is OK
- 10.48550/arXiv.2408.05933 is OK
- 10.48550/arXiv.2403.12082 is OK
- 10.48550/arXiv.2408.11847 is OK
- 10.32614/cran.package.tidyllm is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: Enhancing propaganda detection with open source la...
- No DOI given, and none found for title: tidychatmodels: Chat with all kinds of AI models t...

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@editorialbot
Collaborator Author

👋 @openjournals/dsais-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#6365, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

editorialbot added the recommend-accept label on Jan 24, 2025
@crvernon

@editorialbot accept

@editorialbot
Collaborator Author

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot
Collaborator Author

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Lin
  given-names: Hause
  orcid: "https://orcid.org/0000-0003-4590-7039"
- family-names: Safi
  given-names: Tawab
  orcid: "https://orcid.org/0009-0000-5659-9890"
doi: 10.5281/zenodo.14728444
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Lin
    given-names: Hause
    orcid: "https://orcid.org/0000-0003-4590-7039"
  - family-names: Safi
    given-names: Tawab
    orcid: "https://orcid.org/0009-0000-5659-9890"
  date-published: 2025-01-24
  doi: 10.21105/joss.07211
  issn: 2475-9066
  issue: 105
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 7211
  title: "ollamar: An R package for running large language models"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.07211"
  volume: 10
title: "ollamar: An R package for running large language models"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot
Collaborator Author

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot
Collaborator Author

🦋🦋🦋 👉 Bluesky post for this paper 👈 🦋🦋🦋

@editorialbot
Collaborator Author

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.07211 joss-papers#6366
  2. Wait five minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.07211
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

editorialbot added the accepted and published labels on Jan 24, 2025
@crvernon

🥳 Congratulations on your new publication @hauselin! Many thanks to @KennethEnevoldsen and @elenlefoll for your time, hard work, and expertise!! JOSS wouldn't be able to function nor succeed without your efforts.

Please consider becoming a reviewer for JOSS if you are not already: https://reviewers.joss.theoj.org/join

@editorialbot
Collaborator Author

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README, use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.07211/status.svg)](https://doi.org/10.21105/joss.07211)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.07211">
  <img src="https://joss.theoj.org/papers/10.21105/joss.07211/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.07211/status.svg
   :target: https://doi.org/10.21105/joss.07211

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
