codespell: tuneup of config and some new typos detected (fixes #1334)

Merged 2 commits on Oct 16, 2023
Changes from all commits
2 changes: 1 addition & 1 deletion dandi/tests/data/metadata/dandimeta_migration.new.json
@@ -3,7 +3,7 @@
"schemaKey": "Dandiset",
"schemaVersion": "0.4.0",
"name": "A NWB-based dataset and processing pipeline of human single-neuron activity during a declarative memory task",
"description": "A challenge for data sharing in systems neuroscience is the multitude of different data formats used. Neurodata Without Borders: Neurophysiology 2.0 (NWB:N) has emerged as a standardized data format for the storage of cellular-level data together with meta-data, stimulus information, and behavior. A key next step to facilitate NWB:N adoption is to provide easy to use processing pipelines to import/export data from/to NWB:N. Here, we present a NWB-formatted dataset of 1863 single neurons recorded from the medial temporal lobes of 59 human subjects undergoing intracranial monitoring while they performed a recognition memory task. We provide code to analyze and export/import stimuli, behavior, and electrophysiological recordings to/from NWB in both MATLAB and Python. The data files are NWB:N compliant, which affords interoperability between programming languages and operating systems. This combined data and code release is a case study for how to utilize NWB:N for human single-neuron recordings and enables easy re-use of this hard-to-obtain data for both teaching and research on the mechanisms of human memory.",
"description": "A challenge for data sharing in systems neuroscience is the multitude of different data formats used. Neurodata Without Borders: Neurophysiology 2.0 (NWB:N) has emerged as a standardized data format for the storage of cellular-level data together with meta-data, stimulus information, and behavior. A key next step to facilitate NWB:N adoption is to provide easy to use processing pipelines to import/export data from/to NWB:N. Here, we present a NWB-formatted dataset of 1863 single neurons recorded from the medial temporal lobes of 59 human subjects undergoing intracranial monitoring while they performed a recognition memory task. We provide code to analyze and export/import stimuli, behavior, and electrophysiological recordings to/from NWB in both MATLAB and Python. The data files are NWB:N compliant, which affords interoperability between programming languages and operating systems. This combined data and code release is a case study for how to utilize NWB:N for human single-neuron recordings and enables easy reuse of this hard-to-obtain data for both teaching and research on the mechanisms of human memory.",
"contributor": [
Member Author commented:
[image]
although maybe we should just ignore such files too
{
"schemaKey": "Person",
2 changes: 1 addition & 1 deletion docs/design/python-api-1.md
@@ -118,7 +118,7 @@ Designs for an improved Python API
* The basic methods simply upload/download everything, blocking until completion, and return either nothing or a summary of everything that was uploaded/downloaded
* These methods have `show_progress=True` options for whether to display progress output using pyout or to remain silent
* The upload methods return an `Asset` or collection of `Asset`s. This can be implemented by having the final value yielded by the `iter_` upload function contain an `"asset"` field.
- * Each method also has an iterator variant (named with an "`iter_`" preffix) that can be used to iterate over values containing progress information compatible with pyout
+ * Each method also has an iterator variant (named with an "`iter_`" prefix) that can be used to iterate over values containing progress information compatible with pyout
* These methods do not output anything (aside perhaps from logging)

* An `UploadProgressDict` is a `dict` containing some number of the following keys:
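The iterator-plus-blocking-wrapper design described in the hunk above can be sketched roughly as follows. This is a minimal sketch; `iter_upload`, the progress keys, and the fake `Asset` value are illustrative stand-ins, not the actual dandi API:

```python
def iter_upload(paths):
    """Iterator variant: yields pyout-compatible progress dicts."""
    for path in paths:
        yield {"path": path, "status": "uploading", "done%": 0}
        # ... the actual byte transfer would happen here ...
        yield {"path": path, "status": "done", "done%": 100}
    # Per the design note above, the final yielded value carries an
    # "asset" field with the resulting Asset (a string stands in here).
    yield {"status": "done", "asset": f"<Asset: {len(paths)} file(s)>"}

def upload(paths, show_progress=True):
    """Blocking variant: drains iter_upload() and returns the final asset."""
    asset = None
    for record in iter_upload(paths):
        asset = record.get("asset", asset)
        if show_progress:
            print(record)  # stand-in for a pyout tabular display
    return asset
```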
4 changes: 2 additions & 2 deletions setup.cfg
@@ -127,10 +127,10 @@ tag_prefix =
parentdir_prefix =

[codespell]
- skip = dandi/_version.py,dandi/due.py,versioneer.py
+ skip = _version.py,due.py,versioneer.py,*.vcr.yaml,venv,venvs
# Don't warn about "[l]ist" in the abbrev_prompt() docstring:
# TE is present in the BIDS schema
- ignore-regex = (\[\w\]\w+|TE)
+ ignore-regex = (\[\w\]\w+|TE|ignore "bu" strings)
exclude-file = .codespellignore

[mypy]
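The widened `ignore-regex` can be exercised outside of codespell with Python's `re` module. This is only a rough approximation of codespell's behavior (it applies the regex to candidate words and skips anything that matches), but it shows what the three alternatives cover:

```python
import re

# The ignore-regex from the new setup.cfg above.
ignore = re.compile(r'(\[\w\]\w+|TE|ignore "bu" strings)')

# Bracketed "[l]ist"-style words, the BIDS "TE" abbreviation, and the
# literal phrase added in this PR should all be treated as ignorable.
for sample in ('[l]ist', 'TE', 'ignore "bu" strings'):
    assert ignore.search(sample) is not None
print("all samples match; codespell would not flag them")
```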
2 changes: 1 addition & 1 deletion tools/update-assets-on-server
@@ -3,7 +3,7 @@
Composed by Satra (with only little changes by yoh).
Initially based on code in dandisets' backups2datalad.py code for updating
- as a part of that script but it was intefering with the updates to datalad thus
+ as a part of that script but it was interfering with the updates to datalad thus
extracted into a separate script.
"""
