How to build a machine learning pipeline with the YASA API? #61
-
First, the PSG EDF is converted into a WAV file for each signal, for convenience of use. The converted WAV file is then divided into 30-second epochs, matching the sleep-stage scoring cycle, and each epoch is labeled using the scoring result file provided with the data. Finally, the divided files are turned into spectrogram images, and band-pass filters are applied to the EEG to extract features per frequency band.
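The epoching and band-pass feature extraction steps described above can be sketched as follows. This is a minimal illustration on synthetic data, not the poster's actual code: the sampling rate, band edges, and filter order are all assumptions.

```python
# Sketch of the described pipeline on synthetic data instead of a real PSG EDF:
# split a signal into 30-second epochs, then band-pass filter each epoch to get
# mean power per EEG band. Sampling rate and band edges are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

sf = 100                                        # assumed sampling rate (Hz)
epoch_sec = 30                                  # sleep-stage scoring cycle
signal = np.random.randn(sf * epoch_sec * 10)   # 10 epochs of fake EEG

# 1) Split into 30-second epochs (one per scored sleep stage).
n = len(signal) // (sf * epoch_sec)
epochs = signal[: n * sf * epoch_sec].reshape(n, sf * epoch_sec)

# 2) Band-pass filter each epoch and take its mean power per band.
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}

def band_power(epoch, lo, hi, sf):
    # 4th-order Butterworth band-pass, applied forward-backward (zero phase).
    b, a = butter(4, [lo / (sf / 2), hi / (sf / 2)], btype="band")
    filtered = filtfilt(b, a, epoch)
    return np.mean(filtered ** 2)

features = np.array([[band_power(e, lo, hi, sf) for lo, hi in bands.values()]
                     for e in epochs])
print(features.shape)  # (10, 4): one feature vector per epoch
```

Each row of `features` would then be paired with the sleep-stage label for that epoch from the scoring file.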
-
@PhD-GOAT I don't think I understand the question. Please refer to the eLife publication for a description of the automatic sleep staging pipeline, and/or this example notebook. Note that I do not think the conversion to WAV format is required; instead, you can load your EDF file directly into Python using the MNE library: mne.io.read_raw_edf.