Clean up files after running doctest examples #530

Merged (2 commits) on Feb 21, 2025
7 changes: 7 additions & 0 deletions docs/dada/index.rst
@@ -88,6 +88,13 @@ To set up a file for writing as a stream is possible as well::
>>> assert (d == d2).all()
>>> fr.close()

.. testcleanup::

>>> from pathlib import Path
>>> from glob import glob
>>> for f in glob("2013-07-02*.dada"):
...     Path(f).unlink()

Here, we have used an even smaller size of the payload, to show how one can
define multiple files. DADA data are typically stored in sequences of files.
If one passes a time-ordered list or tuple of filenames to
13 changes: 13 additions & 0 deletions docs/gsb/index.rst
@@ -253,6 +253,12 @@ To write a rawdump file::
>>> assert np.all(dr == fh_rd.read())
>>> fh_rd.close()

.. testcleanup::

>>> from pathlib import Path
>>> Path("test_rawdump.timestamp").unlink()
>>> Path("test_rawdump.dat").unlink()

To write a phased file, we need to pass a nested tuple of filenames or
filehandles::

@@ -272,6 +278,13 @@ filehandles::
>>> assert np.all(dp == fh_ph.read())
>>> fh_ph.close()

.. testcleanup::

>>> import pathlib
>>> for file_name in (("test_phased.timestamp",)
...                   + tuple(f for d in test_phased_bin for f in d)):
...     pathlib.Path(file_name).unlink()

Baseband does not use the PC time in the phased header, and, when writing,
simply uses the same time for both GPS and PC times. Since the PC time can
drift from the GPS time by several tens of milliseconds,
9 changes: 9 additions & 0 deletions docs/guppi/index.rst
@@ -123,6 +123,15 @@ from above and ignore the extra 64 samples we got from the reader)::
>>> assert (d2 == d[:-64]).all()
>>> fr.close()

.. testcleanup::

>>> del d, d2, fw, fr
>>> import gc
>>> gc.collect() # doctest: +IGNORE_OUTPUT
>>> from pathlib import Path
>>> Path("puppi_test.0000.raw").unlink()
>>> Path("puppi_test.0001.raw").unlink()

Here we show how to write a sequence of files by passing a string template
to `~baseband.guppi.open`, which prompts it to create and use a filename
sequencer generated with `~baseband.guppi.GUPPIFileNameSequencer`. One
5 changes: 5 additions & 0 deletions docs/mark4/index.rst
@@ -155,6 +155,11 @@ to pass in the ``decade`` when reading back::
>>> assert np.all(fh.read(80000) == frame.data)
>>> fh.close()

.. testcleanup::

>>> from pathlib import Path
>>> Path("sample_mark4_segment.m4").unlink()

Note that above we had to pass in the sample rate even when opening
the file for reading; this is because there is only a single frame in
the file, and hence the sample rate cannot be inferred automatically.
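
As a minimal sketch of what that looks like in practice (the keyword values below are assumptions matching the sample data used earlier, 64 tracks and decade-2010 time stamps, so the lines carry ``+SKIP`` and are illustrative only)::

>>> import astropy.units as u
>>> from baseband import mark4
>>> # Single-frame file: the sample rate must be given explicitly.
>>> fh = mark4.open('sample_mark4_segment.m4', 'rs', sample_rate=32*u.MHz,
...                 ntrack=64, decade=2010)  # doctest: +SKIP
>>> fh.close()  # doctest: +SKIP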
6 changes: 5 additions & 1 deletion docs/mark5b/index.rst
@@ -92,7 +92,6 @@ we also must provide ``nchan``, ``sample_rate``, and ``ref_time`` or ``kday``::
When writing to file, we again need to pass in ``sample_rate`` and ``nchan``,
though time can either be passed explicitly or inferred from the header::


>>> fw = mark5b.open('test.m5b', 'ws', header0=header0,
... sample_rate=32*u.MHz, nchan=8)
>>> fw.write(d)
@@ -102,6 +101,11 @@
>>> assert np.all(fh.read() == d)
>>> fh.close()
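
As a hedged sketch of passing the time explicitly (the alternative mentioned above), something like the following should work, assuming ``mark5b.open`` forwards header keywords such as ``time``; the file name and values are placeholders, hence the ``+SKIP``::

>>> from astropy.time import Time
>>> fw2 = mark5b.open('test_explicit_time.m5b', 'ws', nchan=8,
...                   sample_rate=32*u.MHz,
...                   time=Time('2014-06-13T05:30:01.000'))  # doctest: +SKIP
>>> fw2.write(d)  # doctest: +SKIP
>>> fw2.close()  # doctest: +SKIP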

.. testcleanup::

>>> from pathlib import Path
>>> Path("test.m5b").unlink()

.. _mark5b_api:

Reference/API
23 changes: 23 additions & 0 deletions docs/tutorials/using_baseband.rst
@@ -525,6 +525,11 @@ We can check the validity of our new file by re-opening it::
>>> fr.close()
>>> fh.close()

.. testcleanup::

>>> from pathlib import Path
>>> Path("test_vdif.vdif").unlink()

.. note:: One can also use the top-level `~baseband.open` function for writing,
with the file format passed in via its ``format`` argument.
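
As a rough sketch of such a write call (``header0``, ``nthread``, and ``sample_rate`` below are placeholders rather than the values used earlier in this tutorial, hence the ``+SKIP``)::

>>> import baseband
>>> fw = baseband.open('test_vdif.vdif', 'ws', format='vdif',
...                    header0=header0, nthread=2,
...                    sample_rate=32*u.MHz)  # doctest: +SKIP
>>> fw.close()  # doctest: +SKIP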

@@ -575,6 +580,10 @@ Lastly, we check our new file::
>>> fr.close()
>>> fh.close()

.. testcleanup::

>>> Path("m4convert.vdif").unlink()

For file format conversion in general, we have to consider how to properly
scale our data to make the best use of the dynamic range of the new encoded
format. For VLBI formats like VDIF, Mark 4 and Mark 5B, samples of the same
@@ -650,6 +659,11 @@ frameset of the sample file::
>>> assert np.all(fsf.read() == fh.read())
>>> fsf.close()

.. testcleanup::

>>> for f in filenames:
...     Path(f).unlink()

In situations where the ``file_size`` is known, but not the total number of
files to write, one may use the `~baseband.helpers.sequentialfile.FileNameSequencer`
class to create an iterable without a user-defined size. The class is
@@ -698,6 +712,11 @@ that fits the template::
>>> fr.close()
>>> fh.close() # Close sample file as well.

.. testcleanup::

>>> for f in glob.glob("f.edv*.vdif"):
...     Path(f).unlink()

Because DADA and GUPPI data are usually stored in file sequences with names
derived from header values - e.g., 'puppi_58132_J1810+1744_2176.0010.raw',
their format openers have template support built-in. For usage details, please
@@ -794,3 +813,7 @@ missing frames. Indeed, when one opens the file with the default
samples_per_frame = 16
sample_shape = (2, 1)
>>> fh.close()

.. testcleanup::

>>> Path("corrupt.vdif").unlink()
3 changes: 3 additions & 0 deletions docs/vdif/index.rst
@@ -153,7 +153,10 @@ Both examples can be adjusted for copying just some channels::
... header0=out_header, nthread=2) as fw:
... fw.write(fr.read())

.. testcleanup::

>>> from pathlib import Path
>>> Path("try.vdif").unlink()

.. _vdif_troubleshooting:
