
I see this note in the docs of pyfilesystem's TarFS:

Close the filesystem and release any resources.

It is important to call this method when you have finished working with the filesystem. Some filesystems may not finalize changes until they are closed (archives for example). You may call this method explicitly (it is safe to call close multiple times), or you can use the filesystem as a context manager to automatically close.
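
The context-manager form mentioned there would look roughly like this (a minimal sketch; the archive path is made up):

from fs.tarfs import TarFS

# close() runs automatically when the `with` block exits.
with TarFS("example.tar.gz") as tar_fs:
    print(tar_fs.listdir("/"))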

Is it OK not to close the FS if I only do read-only operations? For example:

import io
from fs.tarfs import TarFS
archive = io.BytesIO(get_my_tar_as_bytes())
read_stuff_from_fs(TarFS(archive))
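
The same read could also be written so that the FS is closed explicitly, using a `with` block (a sketch; `get_my_tar_as_bytes` and `read_stuff_from_fs` are the same helpers as above):

archive = io.BytesIO(get_my_tar_as_bytes())
with TarFS(archive) as tar_fs:
    read_stuff_from_fs(tar_fs)
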
Anton Daneyko
  • The docs clearly say that you should close it. What exactly happens under the hood is unclear; it's an implementation detail of TarFS, a black box. Maybe it is safe. But perhaps there's some (OS-wide) locking involved, even in the read-only case? And what if the implementation changes in a new version? Either way, why would you go against that suggestion? – freakish Oct 19 '20 at 09:56
  • Under the hood, TarFS closes the tar file that it opened. Pyfilesystem uses the metaphor of a directory for everything: an S3 bucket, an FTP server, a tar file, etc. So it is more or less forced to have `close` to manage resources (connections or files). I see the rationale for `close` in those cases, but not for a read-only in-memory view of a tar. As for the why: I am contemplating returning a read-only FS object from my functions. If I don't actually need to close it, that would simplify resource management -- I could pass such objects around. Compare `def f() -> FS` vs `def f() -> ContextManager[FS]` (a sketch of both follows below). – Anton Daneyko Oct 19 '20 at 13:18
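
The two return types from the last comment could look roughly like this (a sketch; the helper names `open_archive_fs` and `open_archive_cm` are made up):

import io
from contextlib import contextmanager
from typing import Iterator

from fs.base import FS
from fs.tarfs import TarFS

# Option 1: hand back a plain FS and rely on never needing close()
# (the premise of the question).
def open_archive_fs(data: bytes) -> FS:
    return TarFS(io.BytesIO(data))

# Option 2: hand back a context manager so close() is guaranteed to run.
@contextmanager
def open_archive_cm(data: bytes) -> Iterator[FS]:
    tar_fs = TarFS(io.BytesIO(data))
    try:
        yield tar_fs
    finally:
        tar_fs.close()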

0 Answers