Hi everyone,
I have been running my sequencing analysis on a remote server. The sequencing data that I receive is compressed (.gz), so what I upload onto the server is a compressed file. I have been using gunzip to decompress these.
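For reference, my workflow is roughly the following (filenames here are small placeholders; the real files are multi-gigabyte FASTQ uploads):

```shell
# Placeholder stand-in for an uploaded sequencing file:
printf '@read1\nACGT\n+\nIIII\n' > sample.fastq
gzip sample.fastq              # produces sample.fastq.gz (what I upload)

# What I then run on the server:
gunzip sample.fastq.gz         # replaces sample.fastq.gz with sample.fastq
```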
My concern is that the process often takes so long that my SSH session drops with a "Write failed: Broken pipe" error. Afterwards, the file no longer has the .gz extension, so it seems to have decompressed, but I have no previously decompressed copy to compare sizes against. So, a few questions:
1. Is there a way to verify that a file decompressed completely, without knowing the expected size beforehand?
2. If a file does fail to decompress, does it remain in some kind of partially-decompressed state or does it revert?
3. Is there a faster way to decompress large files?
Thanks in advance.