Log of #ghostscript at irc.freenode.net.

2017/08/28
deekej hello folks! Does ghostscript have any specific support for GPFS from IBM?12:17.46 
kens Input, output, something else ?12:18.08 
  Oh file system. No12:18.28 
deekej or do you do any checks whether ghostscript is running on a parallel filesystem, to change its behaviour somehow?12:18.34
kens No12:18.38 
deekej ok, thanks12:18.41 
  one of our customers is using GPFS, and that's the only environment where they are able to reproduce the issue (Segmentation fault)12:19.19 
kens Not a lot I can say there12:19.36 
deekej we assume it's because of GPFS, since we're not able to reproduce it on a common FS12:19.43
  kens: me neither, so I just wanted to check :)12:19.56 
  thanks a lot12:19.57 
kens If they take the same binary to a system without the GPFS installed, does it continue to fail in the same way ?12:19.58 
deekej no, it does not AFAICT12:20.15 
kens Well, that would be odd12:20.24
deekej I will try to get some proper reproducer from them, and see if I can get my hands on GPFS12:20.43 
kens Basically, especially for Unix-like file systems, we just use fopen and friends12:20.54
  If there was a problem there, I'd expect it to manifest on other systems12:21.20 
  Other applications I mean12:21.32 
  I'm assuming this is an unmodified set of Ghostscript sources; it's perfectly possible to add support for other operating-system-specific file systems12:22.19
deekej well, who knows what exactly is going on inside GPFS :) it's proprietary code from IBM, so it's hard to tell if they're doing anything non-standard there :)12:22.41
kens Yeah but I would have thought that if the standard C run-time file operators didn't work, then other people would already be screaming12:23.10 
  Could be an edge case.12:23.24 
  I think we expect large file system support12:23.33 
deekej ah, I see your point12:23.46 
kens Or whatever 64-bit file system support is called12:23.46 
  If they have an exceptionally large output file, so it's more than 4 GB and the underlying OS doesn't support that the way we expect, then I could just about see it causing a problem12:24.31
  Not that many applications write files < 4Gb12:25.03 
  Or even >12:25.10 
chrisl deekej: Do you have a list of the command line params they use?12:26.43 
deekej chrisl: gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=/tmp/test_output.pdf -c .setpdfwrite -f /scratch/public/<ommitted_path>/input_file.pdf12:30.49 
kens Oh well pdfwrite is 'different'12:31.02 
  It's unlikely they'll be exceeding 4 GB with that12:31.33
chrisl Well, that does for my theory - I figured multithreaded rendering might cause issues12:32.04 
deekej kens: I think the file is max. 100 MB12:32.10 
kens Sadly that doesn't limit the size of the scratch file12:32.23 
deekej I will try to get more info, and if we find a bug in the latest vanilla build, I will report it :)12:32.41 
kens The entire content of the original PDF file will be decompressed, and any large entities (such as bitmap images) will be written to the scratch file, at least twice with different compression options.12:33.01
  And there's a host of other complications as well12:33.23 