In recent years, there has been an increasing number of multi-lab collaborative experiments in consciousness science. While this facilitates replication and data sharing, it also reveals the need for cross-lab, standardized ways to accumulate the collected data and test its quality, so that it can be integrated across labs. This work introduces a systematic, generalizable data quality assessment framework. Initially developed for the COGITATE adversarial collaboration, it accommodates both neural (fMRI, MEG/EEG, and intracranial EEG) and non-neural (behavioral and eye-tracking) datasets. The framework comprises three key levels: the first level ensures data completeness, consistency, and anonymization. The second level tests whether participants followed the task and whether the experimental manipulation worked, for example by analyzing behavioral or eye-tracking data, independently of the hypotheses tested in the experiment. The third level checks whether the neural data are of sufficient quality to test the hypotheses. We advocate for defining clear data quality standards before data collection begins, verifying that the collected datasets meet these standards before the main research hypotheses are tested, and documenting and openly sharing the testing procedures together with the data.
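The three-level structure can be pictured as a staged gate that each participant's dataset must pass before it enters hypothesis testing. The sketch below is purely illustrative and is not taken from the COGITATE pipeline; all class, function, and field names are hypothetical, each level is reduced to a trivial placeholder check, and the thresholds are arbitrary.

```python
"""Illustrative three-level data quality gate (hypothetical sketch, not the COGITATE code)."""

from dataclasses import dataclass


@dataclass
class Dataset:
    """Toy stand-in for one participant's data bundle (all fields hypothetical)."""
    files_present: bool   # Level 1: all expected files exist
    anonymized: bool      # Level 1: identifying metadata removed
    task_accuracy: float  # Level 2: behavioral performance (0-1)
    neural_snr_db: float  # Level 3: a generic signal-to-noise summary


def level1_completeness(ds: Dataset) -> bool:
    """Level 1: completeness, consistency, and anonymization checks."""
    return ds.files_present and ds.anonymized


def level2_task_compliance(ds: Dataset, min_accuracy: float = 0.8) -> bool:
    """Level 2: did the participant follow the task / did the manipulation work?
    Reduced here to a single behavioral accuracy threshold (arbitrary value)."""
    return ds.task_accuracy >= min_accuracy


def level3_neural_quality(ds: Dataset, min_snr_db: float = 3.0) -> bool:
    """Level 3: is the neural data good enough to test the hypotheses?
    Reduced here to one generic SNR threshold (arbitrary value)."""
    return ds.neural_snr_db >= min_snr_db


def quality_gate(ds: Dataset) -> dict:
    """Run the levels in order; a dataset that fails an early level is not
    evaluated further, mirroring the idea of staged, predefined standards."""
    report = {"level1": level1_completeness(ds)}
    if report["level1"]:
        report["level2"] = level2_task_compliance(ds)
    if report.get("level2"):
        report["level3"] = level3_neural_quality(ds)
    report["include_in_analysis"] = all(report.values())
    return report


if __name__ == "__main__":
    ds = Dataset(files_present=True, anonymized=True,
                 task_accuracy=0.92, neural_snr_db=5.1)
    print(quality_gate(ds))
```

In line with the abstract's recommendation, the concrete checks and thresholds behind each level would be defined and documented before data collection, and the resulting quality reports shared alongside the data.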