SimultaneousRecording and SimultaneousRecordingWith #86
@chrisfilo @jasmainak @robertoostenveld @dorahermes
I like the idea of having SimultaneousRecording and SimultaneousRecordingWith. +1 on not splitting files.
Thanks @CPernet for the detailed description. Isn't the SimultaneousRecording field redundant?
For the rare case of MEG-EEG: if you acquired on two different systems (i.e. you have a separate EEG amplifier), you would still have simultaneous recordings but two separate channels.tsv files, so there is no redundancy. If on the same system, like ds000117, sure, to some degree it is redundant, although IMO the global field still has indexing value. For the more generic cases (fMRI-EEG, EEG-eye tracker, PET-fMRI, etc.) the global field is not redundant.
okay fair enough!
In the case of MEG + EEG acquired on two different systems, the two sampling rates would be different and the data would be split over two types (e.g. "sub-01/ses-combined/eeg" and "sub-01/ses-combined/meg"). That is identical to BOLD + EEG, which will always be recorded with different systems, or BOLD + Physio, which may or may not be recorded with different systems. I do not see the need for the SimultaneousRecording field. It would replicate the information in the XXXChannelCount fields (with XXX being the different types of data, e.g. EOG, ECG, EMG, AUDIO, EYEGAZE, PUPIL, EEG, MEG) in the meg.json, eeg.json and ieeg.json.
It would not necessarily replicate fields: think simultaneous PET-MRI, same system but everything is different.
Of course, within the JSON files, SimultaneousRecordingWith will point to the related files. To me the global field has indexing value, to quickly distinguish multimodal datasets acquired sequentially from those acquired simultaneously.
Additional 1c: There is also the _scans file (https://bids-specification.readthedocs.io/en/stable/03-modality-agnostic-files.html#scans-file), whose acq_time field provides critical information about possible multiple acquisitions overlapping in time. The tricky part is that one would still need to compute the duration of each recording. If we added an end_time or duration field to the scans file, we would gain the ultimate way to figure out what was simultaneous with what, and what the temporal offsets between recordings were.
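A minimal sketch of that idea, assuming a hypothetical duration column next to acq_time (filenames and times are invented for illustration; neither column layout nor field names are part of the spec): with both pieces of information, finding simultaneous recordings reduces to a standard interval-overlap test.

```python
from datetime import datetime, timedelta

# Hypothetical rows from a _scans.tsv extended with a "duration" (seconds)
# column, as proposed above; filenames and times are made up.
scans = [
    ("func/sub-01_task-rest_bold.nii.gz", "2018-11-12T10:00:00", 600.0),
    ("eeg/sub-01_task-rest_eeg.vhdr",     "2018-11-12T09:59:30", 660.0),
    ("anat/sub-01_T1w.nii.gz",            "2018-11-12T10:20:00", 300.0),
]

def overlapping(rows):
    """Return pairs of filenames whose [acq_time, acq_time + duration) overlap."""
    intervals = []
    for fname, acq_time, duration in rows:
        start = datetime.fromisoformat(acq_time)
        intervals.append((fname, start, start + timedelta(seconds=duration)))
    pairs = []
    for i, (f1, s1, e1) in enumerate(intervals):
        for f2, s2, e2 in intervals[i + 1:]:
            if s1 < e2 and s2 < e1:  # standard interval-overlap test
                pairs.append((f1, f2))
    return pairs
```

Here the BOLD and EEG runs overlap in time, while the anatomical scan starts after both have ended.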
I doubt whether this should go at the level of dataset_description.json. In this example everything is coded in the scans.tsv file (which is specific to one session). It is clear that, due to the absence of duration information in that file, it is non-trivial to determine which recordings overlap, even though most of the "scans" do have an acq_time. In general I am not happy about replicating information. Using pybids or matlab-bids it should be possible to form a query that extracts the right information easily. Duplication means that the querying code gets more complex (because it has to check multiple locations) and that (meta)data can become inconsistent.
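The point that a tool-side query beats duplicated metadata can be sketched even without pybids. The snippet below (hypothetical filenames, stdlib only, not an actual pybids call) groups files by their shared BIDS entities to discover which datatypes were acquired for the same run:

```python
import re
from collections import defaultdict

# Hypothetical file listing from a BIDS dataset.
files = [
    "sub-01/ses-01/func/sub-01_ses-01_task-rest_run-02_bold.nii.gz",
    "sub-01/ses-01/eeg/sub-01_ses-01_task-rest_run-02_eeg.vhdr",
    "sub-01/ses-01/anat/sub-01_ses-01_T1w.nii.gz",
]

# Match key-value entities in a BIDS filename, e.g. "task-rest".
ENTITY = re.compile(r"(sub|ses|task|run)-([a-zA-Z0-9]+)")

def group_by_entities(paths):
    """Map each set of shared entities to the datatypes recorded with them."""
    groups = defaultdict(list)
    for path in paths:
        name = path.rsplit("/", 1)[-1]
        key = tuple(ENTITY.findall(name))
        datatype = path.split("/")[-2]  # parent folder, e.g. "func" or "eeg"
        groups[key].append(datatype)
    return groups
```

A run that appears under several datatype folders (here, func and eeg) is a candidate simultaneous recording, without any metadata being duplicated.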
@robertoostenveld in my mind, the point of putting this in dataset_description comes down to helping distinguish simultaneous recordings (anything except behaviour) so it's easy to catalogue datasets. Yes, you can figure it out by checking scans.tsv, but having to do that to update your repository catalogue is a bit of a pain IMO. Let's vote and close this issue?
How would it concretely look? Could you demonstrate the proposed SimultaneousRecording field?
We go from

```json
{
  "Name": "EEG, fMRI and NODDI at rest",
  "BIDSVersion": "v1.1.X",
  "License": "Creative Commons Attribution 4.0 International License",
  "Authors": [
    "F. Deligianni",
    "M. Centeno",
    "D.W. Carmichael",
    "G.H. Zhang",
    "C.A. Clark",
    "J.D. Clayden"
  ],
  "Acknowledgements": "Thanks to C.R. Pernet for preparing the dataset following BIDS",
  "ReferencesAndLinks": [
    "F. Deligianni, M. Centeno, D.W. Carmichael and J.D. Clayden (2014). Relating resting-state fMRI and EEG whole-brain connectomes across frequency bands. Frontiers in Neuroscience 8:258",
    "F. Deligianni, D.W. Carmichael, G.H. Zhang, C.A. Clark and J.D. Clayden (2016). NODDI and tensor-based microstructural indices as predictors of functional connectivity. PLoS ONE 11(4):e0153404"
  ],
  "SourceDatasetsURLs": "https://osf.io/94c5t/"
}
```

to

```json
{
  "Name": "EEG, fMRI and NODDI at rest",
  "SimultaneousRecording": "EEG, BOLD, DWI",
  "BIDSVersion": "v1.1.X",
  "License": "Creative Commons Attribution 4.0 International License",
  "Authors": [
    "F. Deligianni",
    "M. Centeno",
    "D.W. Carmichael",
    "G.H. Zhang",
    "C.A. Clark",
    "J.D. Clayden"
  ],
  "Acknowledgements": "Thanks to C.R. Pernet for preparing the dataset following BIDS",
  "ReferencesAndLinks": [
    "F. Deligianni, M. Centeno, D.W. Carmichael and J.D. Clayden (2014). Relating resting-state fMRI and EEG whole-brain connectomes across frequency bands. Frontiers in Neuroscience 8:258",
    "F. Deligianni, D.W. Carmichael, G.H. Zhang, C.A. Clark and J.D. Clayden (2016). NODDI and tensor-based microstructural indices as predictors of functional connectivity. PLoS ONE 11(4):e0153404"
  ],
  "SourceDatasetsURLs": "https://osf.io/94c5t/"
}
```

with SimultaneousRecording using the names we already use to describe each modality.
I assumed that SimultaneousRecording is specific to a session, not set at the project level. A more free-form description could potentially exist in the dataset_description, but in the example above it could even be read as DWI and BOLD being acquired simultaneously. Would it be an option to add optional columns to the _scans.tsv to cross-reference simultaneous scans (which would require simultaneous recordings to add a _scans.json file)?
oh you are so right @dorahermes, my bad:

```json
{
  "Name": "EEG, fMRI and NODDI at rest",
  "SimultaneousRecording": "EEG, BOLD",
  "BIDSVersion": "v1.1.X",
  "License": "Creative Commons Attribution 4.0 International License",
  "Authors": [
    "F. Deligianni",
    "M. Centeno",
    "D.W. Carmichael",
    "G.H. Zhang",
    "C.A. Clark",
    "J.D. Clayden"
  ],
  "Acknowledgements": "Thanks to C.R. Pernet for preparing the dataset following BIDS",
  "ReferencesAndLinks": [
    "F. Deligianni, M. Centeno, D.W. Carmichael and J.D. Clayden (2014). Relating resting-state fMRI and EEG whole-brain connectomes across frequency bands. Frontiers in Neuroscience 8:258",
    "F. Deligianni, D.W. Carmichael, G.H. Zhang, C.A. Clark and J.D. Clayden (2016). NODDI and tensor-based microstructural indices as predictors of functional connectivity. PLoS ONE 11(4):e0153404"
  ],
  "SourceDatasetsURLs": "https://osf.io/94c5t/"
}
```
But with "EEG, BOLD" you are simplifying it too much. In the "POM" dataset (the one on the FT website, but that is not shared) there is EMG recorded during the anatomical MRI and during the DWI (to control for quality, i.e. patients with severe tremor), and also during the functional MRI. Furthermore, there is eye-tracker data recorded during the functional MRI. To be honest, I did not look in sufficient detail at the "POM" dataset to determine precisely which stream was recorded simultaneously with which, but there were 4 devices involved: presentation PC, eye tracker, brain amp, and MR scanner. That results in general in such a pattern, where time is along the horizontal axis. There is no way you can get that properly represented as a list with SimultaneousRecording.

I furthermore agree with @dorahermes that correctly representing this is to be done within the session.

PS I pointed to the "POM" example because I know it is a complex one. But in the simple case with two devices and one recording for each (e.g. one EEG and one video), we don't really need to solve it: a sentence in the README is enough.
yes, let's see with POM - we could use a list {"T1w, EMG"}. I'm only suggesting something that would make it easy to search through datasets without having to look into scans.tsv files.
A few pointers from our chat with @robertoostenveld, who reminded us about this old issue, which remains open and often referenced for a reason.
2 fields
add to dataset_description.json the field SimultaneousRecording to indicate the different imaging modalities acquired at the same time, e.g. SimultaneousRecording: 'func', 'EEG', 'eye tracker', 'physio', 'behav'.
add a SimultaneousRecordingWith field within the sidecar files to indicate the different imaging modalities acquired at the same time but on different hardware.
e.g. ds000117 would have SimultaneousRecording: {'MEG','EEG'} but no SimultaneousRecordingWith, because the MEG and EEG data are in the same file and recorded together on the same hardware.
SimultaneousRecordingWith
Tibor's solution is to point to all files related to each other: for instance, in run-02_bold.json we would have SimultaneousRecordingWith = {'eeg/sub-01_ses-01_task-something_run-02_eeg.vhdr', 'eyetracker/sub-01_ses-01_task-something_run-02_eyetracker.asc'}.
Chris G proposed to use the path relative to the dataset root (e.g. {'/sub-01/func/sub-01_task-something_run-02_bold.nii.gz'}) to accommodate hyperscanning (simultaneous recordings across participants).
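Putting Tibor's proposal into an actual sidecar, a hypothetical run-02_bold.json might look as follows. This is only a sketch of the proposed (not accepted) field: JSON has no set type, so an array replaces the {...} notation above, and the paths are the ones from Tibor's example.

```json
{
  "TaskName": "something",
  "SimultaneousRecordingWith": [
    "eeg/sub-01_ses-01_task-something_run-02_eeg.vhdr",
    "eyetracker/sub-01_ses-01_task-something_run-02_eyetracker.asc"
  ]
}
```

With Chris G's variant, the entries would instead start from the dataset root (e.g. "/sub-01/func/..."), so that a file in one participant's folder can reference a simultaneous recording in another participant's folder.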
Robert pointed out that 'It might also be relevant to know whether behaviour (behav) was recorded during the functional brain recordings (e.g. func+behav), or prior to (or after) the functional scan. It is often assumed (e.g. in the main bids spec) that these are recorded simultaneously, but e.g. section 8.7 and 8.8 already have to deal with the simultaneous versus sequential (or separate) measurement of the two. When considering SimultaneousRecording as a general field, it might have the side effect of “behav" becoming a mature data type on its own, rather than an addition to functional brain data. This would make it more symmetric'.
Mainak has an open issue: should we split concurrent recordings, e.g. split the MEG and EEG data of ds000117? Most felt it is neither necessary nor recommended since the data come from the same hardware. One issue to solve is then to ensure the metadata cover all modalities properly, e.g. both MEG and EEG without conflict (seems largely possible to me).
synchronization issue
Since we have multiple modalities, each should have its own timing information recorded using events.tsv files. This is important because each modality has its own sampling frequency, and clocks on different hardware often run at slightly different speeds. If there are no events (rest), we need at least one marker to ensure the recordings are synchronized (a typical case in point is starting the EEG recording before the MRI starts).
Chris G pointed out that, to compare events.tsv files, they should probably have the same number of rows, since there is no unique identifier for events.
A future format to be adopted, like XDF, contains multimodal data (say video screen capture, eye tracker, physio, and EEG) with a master clock and stream-specific clocks. Again, splitting files seems redundant and unnecessary; but under which folder would such a file appear? My idea would be that it is for the experimenter to know/decide, and to put it under the primary measure of the study.