diff --git a/dandi/tests/data/metadata/dandimeta_migration.new.json b/dandi/tests/data/metadata/dandimeta_migration.new.json
index b95e6c45b..b05d752bf 100644
--- a/dandi/tests/data/metadata/dandimeta_migration.new.json
+++ b/dandi/tests/data/metadata/dandimeta_migration.new.json
@@ -3,7 +3,7 @@
   "schemaKey": "Dandiset",
   "schemaVersion": "0.4.0",
   "name": "A NWB-based dataset and processing pipeline of human single-neuron activity during a declarative memory task",
-  "description": "A challenge for data sharing in systems neuroscience is the multitude of different data formats used. Neurodata Without Borders: Neurophysiology 2.0 (NWB:N) has emerged as a standardized data format for the storage of cellular-level data together with meta-data, stimulus information, and behavior. A key next step to facilitate NWB:N adoption is to provide easy to use processing pipelines to import/export data from/to NWB:N. Here, we present a NWB-formatted dataset of 1863 single neurons recorded from the medial temporal lobes of 59 human subjects undergoing intracranial monitoring while they performed a recognition memory task. We provide code to analyze and export/import stimuli, behavior, and electrophysiological recordings to/from NWB in both MATLAB and Python. The data files are NWB:N compliant, which affords interoperability between programming languages and operating systems. This combined data and code release is a case study for how to utilize NWB:N for human single-neuron recordings and enables easy re-use of this hard-to-obtain data for both teaching and research on the mechanisms of human memory.",
+  "description": "A challenge for data sharing in systems neuroscience is the multitude of different data formats used. Neurodata Without Borders: Neurophysiology 2.0 (NWB:N) has emerged as a standardized data format for the storage of cellular-level data together with meta-data, stimulus information, and behavior. A key next step to facilitate NWB:N adoption is to provide easy to use processing pipelines to import/export data from/to NWB:N. Here, we present a NWB-formatted dataset of 1863 single neurons recorded from the medial temporal lobes of 59 human subjects undergoing intracranial monitoring while they performed a recognition memory task. We provide code to analyze and export/import stimuli, behavior, and electrophysiological recordings to/from NWB in both MATLAB and Python. The data files are NWB:N compliant, which affords interoperability between programming languages and operating systems. This combined data and code release is a case study for how to utilize NWB:N for human single-neuron recordings and enables easy reuse of this hard-to-obtain data for both teaching and research on the mechanisms of human memory.",
   "contributor": [
     {
       "schemaKey": "Person",
diff --git a/docs/design/python-api-1.md b/docs/design/python-api-1.md
index 10e463b92..e8108ec2a 100644
--- a/docs/design/python-api-1.md
+++ b/docs/design/python-api-1.md
@@ -118,7 +118,7 @@ Designs for an improved Python API
 * The basic methods simply upload/download everything, blocking until completion, and return either nothing or a summary of everything that was uploaded/downloaded
     * These methods have `show_progress=True` options for whether to display progress output using pyout or to remain silent
     * The upload methods return an `Asset` or collection of `Asset`s. This can be implemented by having the final value yielded by the `iter_` upload function contain an `"asset"` field.
-* Each method also has an iterator variant (named with an "`iter_`" preffix) that can be used to iterate over values containing progress information compatible with pyout
+* Each method also has an iterator variant (named with an "`iter_`" prefix) that can be used to iterate over values containing progress information compatible with pyout
     * These methods do not output anything (aside perhaps from logging)

 * An `UploadProgressDict` is a `dict` containing some number of the following keys:
diff --git a/setup.cfg b/setup.cfg
index b9326e731..50ee3e74d 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -127,10 +127,10 @@ tag_prefix =
 parentdir_prefix =

 [codespell]
-skip = dandi/_version.py,dandi/due.py,versioneer.py
+skip = _version.py,due.py,versioneer.py,*.vcr.yaml,venv,venvs
 # Don't warn about "[l]ist" in the abbrev_prompt() docstring:
 # TE is present in the BIDS schema
-ignore-regex = (\[\w\]\w+|TE)
+ignore-regex = (\[\w\]\w+|TE|bu)
 exclude-file = .codespellignore

 [mypy]
diff --git a/tools/update-assets-on-server b/tools/update-assets-on-server
index b1e544eac..9163b2a6a 100755
--- a/tools/update-assets-on-server
+++ b/tools/update-assets-on-server
@@ -3,7 +3,7 @@
 Composed by Satra (with only little changes by yoh).

 Initially based on code in dandisets' backups2datalad.py code for updating
-as a part of that script but it was intefering with the updates to datalad thus
+as a part of that script but it was interfering with the updates to datalad thus
 extracted into a separate script.
 """
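For context on the `docs/design/python-api-1.md` hunk above: the design pairs each blocking method with an `iter_` variant that yields pyout-compatible progress `dict`s, the final one carrying an `"asset"` field, and the blocking method simply drains the iterator. Below is a minimal sketch of that pattern; all names here (`iter_upload`, `upload`, the dict keys) are illustrative assumptions, not the actual dandi API.

```python
from pathlib import Path
from typing import Any, Dict, Iterator, List, Optional

# Illustrative sketch only -- function names and dict keys are assumptions,
# not the real dandi API described in docs/design/python-api-1.md.

def iter_upload(paths: List[Path]) -> Iterator[Dict[str, Any]]:
    """Yield pyout-compatible progress dicts for each file being uploaded."""
    total = len(paths)
    for i, path in enumerate(paths, start=1):
        # A real implementation would transfer bytes here and yield
        # incremental per-file progress updates.
        yield {"path": str(path), "status": "done", "progress": 100 * i // total}
    # Per the design doc, the final yielded value contains an "asset" field:
    yield {"status": "done", "asset": {"uploaded": total}}

def upload(paths: List[Path], show_progress: bool = True) -> Optional[dict]:
    """Blocking variant: drain the iterator, optionally rendering progress."""
    asset = None
    for info in iter_upload(paths):
        if show_progress:
            print(info)  # stand-in for feeding rows to pyout
        asset = info.get("asset", asset)
    return asset

if __name__ == "__main__":
    print(upload([Path("sub-01.nwb"), Path("sub-02.nwb")], show_progress=False))
```

Building the blocking method on top of the iterator keeps a single upload code path, which is presumably why the design has the final iterator value carry the asset.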
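Likewise, as a sanity check on the `[codespell]` hunk: spans matching `ignore-regex` are treated as whitespace before spell-checking, so they are effectively skipped. The snippet below shows which tokens the pattern covers; note that reading the garbled `ignore "bu" strings` fragment in the source diff as the alternative `bu` is an assumption, not confirmed from the original commit.

```python
import re

# Pattern from the setup.cfg hunk above; the "bu" alternative is an
# assumption reconstructed from the fused 'ignore "bu" strings' fragment.
IGNORE = re.compile(r"(\[\w\]\w+|TE|bu)")

for token in ["[l]ist", "TE", "bu", "list"]:
    verdict = "ignored" if IGNORE.fullmatch(token) else "spell-checked"
    print(f"{token!r}: {verdict}")
```

codespell itself applies the regex with search-and-remove semantics over the whole line; `fullmatch` is used here only to show which whole tokens the pattern accounts for (e.g. `[l]ist` via `\[\w\]\w+`, while plain `list` would still be checked).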