| <<<Back 1 day (to 2015/05/13) | 20150514 |
HD | . | 04:33.08 |
| Good morning all. :) | 04:44.05 |
kens | is getting very fed up with Hin-Tak | 08:57.45 |
chrisl | I'm not really sure what we can do about him | 09:01.36 |
kens | A gag ? | 09:01.46 |
chrisl | A mallet? | 09:02.04 |
kens | Mafia hit squad ? :) | 09:02.16 |
chrisl | Recommend him to Global? | 09:02.29 |
kens | Oooh that's just nasty | 09:02.36 |
Robin_Watts | kens: to hin-tak or to global? | 09:44.47 |
kens | Both :-) | 09:44.53 |
pedro_mac | sounds like a perfect match then ;) | 09:55.52 |
kens | Its a win-win for Chris and I :-D | 09:56.10 |
| Robin_Watts : I finally got the Mitchell filter working (in basic form) with pdfwrite and compared it against the existing 'bicubic' filter. I used the same test file and tried both at a downsample factor of 4 and 2. The Mitchell filter is clearly superior at both factors. | 10:13.14 |
Robin_Watts | kens: great. | 10:13.40 |
kens | So I reckon I'm going to switch the 'bicubic' filter over to just always using the Mitchell filter, unless you think otherwise ? If you want to see the files I can put them up on Casper or something | 10:14.31 |
Robin_Watts | kens: In terms of quality it will always be a win. | 10:15.06 |
| In terms of speed it will be slower. | 10:15.18 |
| This is only for downscaling, right? | 10:15.33 |
kens | Yes indeed, but I think the default is average or subsample (I forget which) so you actually have to take action to get this filter. | 10:15.46 |
Robin_Watts | kens: Then go for it. | 10:15.55 |
kens | Yes, we only downscale in pdfwrite, never upscale | 10:15.57 |
| Actually, thinking about it, we may use the 'bicubic' filter with the 'Press' PDFSETTINGS, but anyone using that is obviously interested in quality, I should check though. | 10:16.47 |
Robin_Watts | When does tor8 disappear? | 10:42.40 |
kens | early june | 10:42.58 |
chrisl | Beginning of June | 10:42.59 |
tor8 | Robin_Watts: ping. | 10:53.25 |
| re: largefile support, I think we should be able to get that into the fz_stream and pdf_xref apis without adding the whole size_t insanity everywhere | 10:57.20 |
Robin_Watts | tor8: How? | 11:01.09 |
| We're going to need an fz_off_t type, of some sort. | 11:01.23 |
| MuPDF uses signed ints for offsets currently, so in large file mode, we could use signed int64_t's | 11:01.57 |
| That's what I tried to do before, but it ended up mushrooming. | 11:03.08 |
| Are you suggesting we leave all the allocation etc, as using ints ? | 11:03.18 |
| I will try to push the use of fz_off_t (being either an int or int64_t) through in a bit. | 11:07.24 |
tor8 | yeah, only change the file position apis | 11:15.26 |
| I'd be happier if we didn't need #ifdef and just always had 64-bit file support though | 11:15.52 |
| and as such, just use int64_t everywhere | 11:16.07 |
Robin_Watts | ew. | 11:18.39 |
| That doubles the size of the storage on 32bit systems. | 11:18.50 |
| (for all the xrefs etc) | 11:19.09 |
tor8 | multiple configurations have a tendency to bit-rot the lesser used bits | 11:19.20 |
| even 32-bit systems have 64-bit files... | 11:19.30 |
Robin_Watts | But for embedded stuff, it seems overkill. | 11:19.51 |
tor8 | the pdf_xref sections would balloon | 11:19.51 |
| depending on complexity, we could have pdf_xref32 and pdf_xref64 types and pick one based on the actual file size | 11:20.24 |
Robin_Watts | I think a MUPDF_LARGE_FILE_SUPPORT define that chooses between int64_t and int should be fine. | 11:20.32 |
tor8 | which would save memory on 64-bit configurations when using small files | 11:20.36 |
Robin_Watts | That sounds like *much* more work. | 11:20.42 |
tor8 | could we tie it into the regular system LARGEFILE defines? | 11:20.58 |
Robin_Watts | You mean the linux specific ones? | 11:21.27 |
tor8 | yeah... the unix ones | 11:21.36 |
Robin_Watts | I think that would be a bad idea. | 11:21.53 |
| just because a linux box supports LARGEFILE doesn't mean we'd necessarily want MUPDF_LARGEFILE. | 11:22.27 |
kens | Hmmm Bugzilla not talking to me again | 11:22.40 |
tor8 | largefile is such an ugly hack everywhere | 11:23.28 |
| -D_LARGEFILE64_SOURCE and crap like that | 11:24.06 |
Robin_Watts | tor8: Yeah. | 11:24.42 |
tor8 | so, okay, I guess I could live with a FZ_LARGEFILE and #define'd fz_lseek etc | 11:25.00 |
Robin_Watts | tor8: cool. And we can massage that to fit with LARGEFILE as best we can. | 11:25.24 |
tor8 | and fz_off_t for file positions, and not touch malloc or the size argument to fz_read, etc | 11:25.58 |
Robin_Watts | yeah. | 11:26.11 |
tor8 | I doubt we'll need to read more than 2gb into one buffer | 11:26.12 |
kens | lunches | 11:26.27 |
tor8 | Robin_Watts: with | 11:31.30 |
Robin_Watts | Ah, I transitioned us from fd's to FILE *'s last time I did this... | 11:32.09 |
| I still think that's the right thing to do. | 11:32.35 |
tor8 | Robin_Watts: with -D_LARGEFILE64_SOURCE we can call open64 etc directly, which seems like a cleaner solution to me | 11:32.50 |
| FILE* large file support is even dodgier :( | 11:33.07 |
| transitioning to FILE* is a separate issue, please don't do both at once... | 11:33.56 |
Robin_Watts | I think we should transition to FILE * first. | 11:34.44 |
| Cos that's a small change. | 11:34.54 |
| I agree that the commits should be kept separate. | 11:36.31 |
tor8 | to get large file support with FILE*, you have to tweak with the _FILE_OFFSET_BITS and that creates a mess | 11:36.35 |
Robin_Watts | fopen64, fseek64, etc. | 11:37.03 |
tor8 | it turns off_t into off64_t, and who knows what it does with ABI-compatibility | 11:37.06 |
Robin_Watts | What's wrong with that? | 11:37.06 |
tor8 | if one bit of code compiled without _FILE_OFFSET_BITS=64 passes a FILE* to code that hasn't been compiled, things are likely to implode | 11:37.38 |
HD | Hi Robin | 11:37.46 |
Robin_Watts | tor8: I think it's nicer than that. | 11:37.59 |
HD | Sorry to disturb you.. | 11:38.04 |
| any update for me? | 11:38.10 |
Robin_Watts | For 64bit, we always use off64_t and always call xxxx64. | 11:38.25 |
tor8 | Robin_Watts: so if we're going with FILE* I believe we should be using off_t, and let the user set _FILE_OFFSET_BITS=64 | 11:38.29 |
Robin_Watts | No ABI issues at all. | 11:38.30 |
tor8 | Robin_Watts: there is no such 64-bit explicit api for FILE* | 11:38.48 |
Robin_Watts | http://www.mkssoftware.com/docs/man3/fopen.3.asp | 11:39.25 |
| There are docs for fopen64 etc. | 11:39.32 |
| HD: This was the signature stuff? | 11:40.10 |
HD | Yes | 11:40.19 |
Robin_Watts | I have no update for you. Paulgardiner is the man to talk to about this, and I doubt he's looked any more than commented on here earlier. | 11:40.55 |
tor8 | Robin_Watts: ah, it's fseeko64 | 11:41.00 |
Robin_Watts | fseek, fseeko, fseeko64, yes. | 11:41.25 |
paulgardiner | HD: your file seemed to work for me. I tapped the field and the signing dialog appeared | 11:41.40 |
Robin_Watts | HD: When you built stuff, you would have done "ndk-build" at some stage. | 11:42.27 |
HD | I also tried with, https://play.google.com/store/apps/details?id=com.artifex.mupdfdemo&hl=en | 11:42.33 |
Robin_Watts | Did you do: "ndk-build SSL_BUILD=1" ? | 11:42.37 |
HD | but no success | 11:42.44 |
tor8 | so you're going with fz_fopen, fz_fseeko, fz_fclose, etc to switch between fopen and fopen64 based on FZ_LARGEFILE? | 11:42.52 |
Robin_Watts | HD: On a device with what processor? | 11:42.56 |
| tor8: That would be my proposal. | 11:43.05 |
HD | Samsung Note 10" | 11:43.22 |
tor8 | I take it that's the easiest way to get things to work on windows as well? | 11:43.25 |
Robin_Watts | tor8: There are 64bit FILE *functions on windows in the same mould, yes. | 11:43.49 |
tor8 | because on linux, setting _FILE_OFFSET_BITS=64 and using off_t and plain fopen/fseeko would accomplish the same | 11:43.58 |
| but if that's not going to fly on windows, then yes, I think we'll need to do our own macros | 11:44.24 |
Robin_Watts | tor8: I bet that does #definery to make stuff point to fopen64 etc. | 11:44.37 |
tor8 | Robin_Watts: I'm sure it does | 11:44.44 |
Robin_Watts | It's the only way to make the ABI safe. | 11:44.47 |
paulgardiner | HD: it was the google play version I used. v 1.7a | 11:44.49 |
Robin_Watts | HD: What CPU in the Samsung Galaxy Note 10? ARM or x86 ? | 11:46.13 |
tor8 | Robin_Watts: bah, it's less trivial than that on gnu libc | 11:46.28 |
HD | paulgardiner: I also tried http://mupdf.com/downloads/mupdf-1.7a-android-75-armeabi.apk, but fail :( | 11:46.45 |
tor8 | it does gnu voodoo with symbol redirection | 11:46.46 |
| if available, otherwise it does #define | 11:47.05 |
Robin_Watts | tor8: urgh... Call me a luddite but I'd prefer to just call the right functions on our end. | 11:47.15 |
tor8 | Robin_Watts: agreed. | 11:47.23 |
Robin_Watts | Ok, so I'll do fd -> FILE | 11:47.38 |
| then look at the 64bit stuff. | 11:47.45 |
paulgardiner | I don't know what to suggest. Current google play version is working fine for me. | 11:47.49 |
Robin_Watts | HD: The link you give there *is* the current google play version. | 11:48.33 |
HD | is it possible to send .so file? | 11:48.49 |
paulgardiner | I don't see how that helps. If we are both using the google play version, we are using the same .so | 11:49.35 |
HD | So let me try with your .so files. Maybe it will work for me.. | 11:50.15 |
paulgardiner | My .so is the one off google play. It is the one you are using. | 11:51.27 |
| Sorry, am I being daft?! | 11:51.50 |
Robin_Watts | paulgardiner: I don't see how. | 11:52.13 |
| paulgardiner: Double check what version you have installed, maybe? | 11:52.34 |
paulgardiner | I'll delete it and reinstall | 11:52.49 |
Robin_Watts | (Run it, and see what build number it gives on the top of the file picker) | 11:52.51 |
paulgardiner | 1.7a is on the picker | 11:53.05 |
Robin_Watts | It should say 1.7a (Build 75/armeabi) | 11:53.21 |
| or 1.7a (Build 76/armv7a) | 11:53.46 |
paulgardiner | Reinstalled. It says Build 76armv7a. It still works | 11:54.43 |
Robin_Watts | HD: So... try using http://mupdf.com/downloads/mupdf-1.7a-android-76-armv7a.apk and you should be on exactly the same version as paulgardiner. | 11:56.29 |
HD | Robin: ok let me check | 12:00.26 |
Robin_Watts | tor8: fd -> FILE change on robin/master | 12:15.15 |
| along with another that removes a duplicate define. | 12:17.26 |
| mvrhel_laptop: Hey. | 14:01.10 |
| Did you see my burblings to Jung? | 14:01.24 |
mvrhel_laptop | just woke up... | 14:01.49 |
Robin_Watts | mvrhel_laptop: OK. I've found another very small leak and propose a fix for it. | 14:02.21 |
mvrhel_laptop | Robin_Watts: cool. I will get those changes committed | 14:02.22 |
| thank you! | 14:02.28 |
Robin_Watts | mvrhel_laptop: No worries. | 14:02.31 |
| I'll tidy up the memento stuff and try and get that committed. | 14:02.47 |
mvrhel_laptop | ok great | 14:02.53 |
Robin_Watts | Then it can be used to watch for cpp leaks too. | 14:02.58 |
ofirr | is it possible to use fill pattern when converting pdf to svg with mudraw? | 14:04.37 |
Robin_Watts | ofirr: Did you open a bug about that? | 14:05.43 |
ofirr | http://bugs.ghostscript.com/show_bug.cgi?id=695988 | 14:05.49 |
Robin_Watts | yeah, I saw it come in. | 14:05.57 |
| ofirr: It's certainly possible. We just haven't done it yet. | 14:07.37 |
ofirr | I saw something about reduced memory when using patterns in release notes but maybe it's not used for svg? | 14:07.38 |
Robin_Watts | ofirr: That's to do with reading files in rather than writing them out. | 14:08.02 |
ofirr | ok | 14:08.31 |
| Robin_Watts: I can see that svg has good support for patterns | 14:09.33 |
Robin_Watts | actually... looking at the code, it looks like we DO try to use patterns when we write stuff out. | 14:09.41 |
ofirr | not sure how hard it will be to translate the pattern from pdf | 14:09.45 |
| can you link to the code? | 14:10.03 |
Robin_Watts | ofirr: Can you hang around for a bit? | 14:10.07 |
ofirr | sure | 14:10.12 |
Robin_Watts | source/fitz/svg-device.c is the output SVG device. | 14:10.27 |
| I'll try and find time to have a quick look in a bit. | 14:10.42 |
ofirr | thanks | 14:11.21 |
rayjj | Robin_Watts: AFAIK, even though we have customers 534 and 535, I don't think they are licensed for mupdf. | 14:27.55 |
| Robin_Watts: one of those divisions (I think 535) uses GS on the workstation to convert PDF files into PS files to drive their PS printer | 14:29.24 |
Robin_Watts | rayjj: I thought that they were... | 14:29.25 |
rayjj | our customer support list is very unclear as to which products various customers are licensed for | 14:30.12 |
Robin_Watts | rayjj: Ah, the mail thread involving miles that kicked all this off says: "Our developer is studying MuPDF and have some technical questions." | 14:30.42 |
| so I suspect they are looking at a license. | 14:30.49 |
rayjj | Robin_Watts: yeah, probably so. | 14:32.08 |
| this would be yet a third division, AFAICT | 14:32.28 |
Robin_Watts | remembers to buy US mains lead for new laptop. | 14:32.38 |
rayjj | Robin_Watts: good idea -- easier than some bulky converter | 14:33.21 |
Robin_Watts | rayjj: and easier on planes. | 14:33.34 |
kens | I specified 2 power bricks last time, one for US one for UK | 14:34.28 |
Robin_Watts | kens: The power brick takes a figure of 8 cable. | 14:34.51 |
| (IEC C7) | 14:34.55 |
kens | That's nicely standard, they aren't always | 14:35.12 |
Robin_Watts | so I just need to replace that bit of the lead. 6.39 from amazon :) | 14:35.17 |
kens | IIRC mine matches Tor's power supply also | 14:35.34 |
| Hmmm.... since Chris imported new versions of FreeType, libpng and Zlib I can't rebase my branch from master any more. I keep getting 'cannot stat' errors and the directories are indeed unable to be opened (access denied) | 14:42.32 |
| Ah, and now I can't checkout master | 14:42.59 |
chrisl | Can you delete the directories, and then checkout each explicitly? | 14:43.52 |
kens | I can't delete the directories 'permission denied' | 14:44.04 |
| Hmm, now the offending directories seem to be gone | 14:45.01 |
chrisl | Hmm, there is a difference in the permissions..... | 14:45.23 |
kens | I did a git reset --hard in my branch and now the directories are gone. However, when I try to checkout master it throws me all the same errors | 14:45.58 |
| Ah no I'm OK now I can checkout master again | 14:46.34 |
| The directories in question do seem to be part of the new stuff | 14:46.56 |
| For some reason if I try to rebase my branch with those present, it gets cross | 14:47.19 |
chrisl | I don't see why there should be a problem | 14:48.35 |
kens | It seems to be happy now (baffled) | 14:48.46 |
chrisl | Possibly the difference in the permissions confused it | 14:49.01 |
kens | I guess something did, no idea what but it seems to be OK now, so..... | 14:49.19 |
chrisl | Actually, it seems git only tracks the executable bit, and no other permissions.... <shrug>.... | 14:52.29 |
tor8 | Robin_Watts: LGTM | 15:06.18 |
Robin_Watts | tor8: Thanks. | 15:06.24 |
| This raises an interesting problem... | 15:06.36 |
| when we have a PDF object that contains an integer offset. How do we represent that? | 15:07.02 |
| Do we make all pdf ints be fz_off_t's too ? | 15:07.16 |
tor8 | make them int64_t? | 15:07.24 |
| or nuke the pdf ints and make both ints and floats into doubles | 15:07.48 |
Robin_Watts | urgh. | 15:07.55 |
| I prefer the idea of FZ_LARGEFILE making int's int64's. | 15:08.08 |
tor8 | a double is able to represent the reasonable subset of useful integers | 15:08.13 |
Robin_Watts | indeed, but using FP to hold ints is icky at best. | 15:09.03 |
tor8 | Robin_Watts: appendix c of pdfref17 lists architectural limits for integers and reals | 15:10.12 |
Robin_Watts | yeah, doesn't it say 32bit and float ? | 15:10.28 |
tor8 | using FP to hold ints is not unheard of... both javascript and lua do it | 15:10.44 |
| it does, so I wonder how the new-style xref trailer dictionaries hold 64-bit offsets | 15:11.24 |
kens | Are there any PDF constructs which contain offsets, other than the xref and associated stuff ? | 15:11.28 |
| xref is 'different' I think | 15:11.42 |
tor8 | kens: the compressed object stream xref objects are the only ones IIRC | 15:11.44 |
| the /Prev key has an offset, and is a regular PDF dictionary | 15:12.04 |
kens | That's true | 15:12.11 |
Robin_Watts | It's exactly the Prev one that matters. | 15:12.30 |
| I wonder if they have an 'int64' type internally which they keep quiet about? | 15:12.58 |
kens | I suspect they handle the xref stuff differently | 15:13.17 |
Robin_Watts | I bet that appendix is largely unchanged from the previous versions. | 15:13.19 |
kens | I'm pretty sure its unchanged more or less | 15:13.59 |
| I suspect Adobe handle the 'structural' aspects of a PDF file directly from its contents, which is how they get away with it | 15:14.31 |
| Even in a regular PDF file the /Prev can be more than an integer anyway, can't it ? xref offsets are 10 digits ? | 15:15.23 |
Robin_Watts | Define regular PDF file? | 15:15.38 |
kens | One without compressed xref streams | 15:15.50 |
Robin_Watts | oh, yes. | 15:16.07 |
| but xref offsets are not PDF objects. | 15:16.18 |
kens | I know, but the /Prev is, and it seems to me that this could technically always have been a 'problem' | 15:16.40 |
Robin_Watts | I am tempted to just make all ints int64_t's if we're in an FZ_LARGEFILE build. | 15:17.03 |
| kens: Presumably gs hits this too? | 15:17.09 |
kens | I think we handle them as reals | 15:17.21 |
| Robin_Watts : Implementation note 21 in the PDF reference covers this for Acrobat | 15:18.15 |
| "Byte addresses can be as large as needed to address an arbitrarily large PDF file, regardless of the implementation limit for PDF integers in general." | 15:18.24 |
Robin_Watts | reals are... doubles in gs ? | 15:18.36 |
kens | That's specifically referenced from the /Prev documentation | 15:18.45 |
| I don't recall what a PostScript real is defined as, but its big enough not to be a problem :-) | 15:19.04 |
Robin_Watts | kens: Well, it'll either be a float or a double. A float is 24 bits of integer goodness (so not enough). A double is 53 (so loads) | 15:19.34 |
chrisl | I think gs will just use an integer, which should be 64 bits if a 64 bit integer type is available | 15:20.24 |
kens | Well the PLRM implies its a float | 15:20.25 |
| Typical implementation limits says a real is +/- 10^ +/-38 | 15:22.42 |
| Sorry +/- 10^38 | 15:23.01 |
chrisl | Why would we use a real? | 15:23.31 |
kens | The 'architectural limits' for PostScript have an integer as +/- 2^32 | 15:24.02 |
tor8 | Robin_Watts: that's one way to do it, or just make all of them int64_t always, or add a separate int64 type tag | 15:24.06 |
| fz_toint64 | 15:24.19 |
Robin_Watts | tor8: I have an fz_atoo here. | 15:24.30 |
chrisl | kens: but we now ignore that, and use 64 bits by default | 15:24.31 |
kens | But if we have a 64-bit integer as an extension, then I guess we wouldn't. However, does that mean a 32-bit GS can only read 2Gb PDF files ? | 15:24.31 |
chrisl | No, because most 32 bit compilers still have a 64 bit integer type | 15:24.59 |
kens | Well, then we're covered :-) | 15:25.09 |
tor8 | Robin_Watts: I'd be inclined to just use int64_t internally | 15:25.21 |
kens | Though I'm still inclined to think that monster PDF files are stupid | 15:25.24 |
Robin_Watts | tor8: Everywhere in all cases? | 15:25.41 |
| Or everywhere in the FZ_LARGEFILE case ? | 15:25.49 |
chrisl | If you're using either a weird embedded compiler, or a very old compiler without a 64 bit integer type, you should give up on the idea of handling 2Gb+ PDF files! | 15:26.03 |
tor8 | everywhere in FZ_LARGEFILE and add a fz_toint64 for the /Prev usecase? | 15:26.05 |
Robin_Watts | tor8: I think we're in agreement. | 15:26.22 |
tor8 | so externally, only one extra function would be added (fz_toint64 or fz_tooffset) but that's only used for offsets | 15:26.51 |
chrisl | kens: oh, and we do have a mode that tweaks the number parsing so we still get the "right" results from the QL tests that rely on integers being 32 bit | 15:28.36 |
kens | :-) | 15:28.46 |
chrisl | Possibly triggered by "//true .setCPSImode" | 15:29.41 |
kens | I would expect so yes | 15:29.50 |
Robin_Watts | tor8: yeah. | 15:30.50 |
| actually, we're going to need a pdf_to_offset too. | 15:33.11 |
ofirr | Robin_Watts: I'm here if there is anything I can do to help with the svg pattern issue | 16:33.34 |
Robin_Watts | ofirr: Sorry, I'm trying to get another job done so I can have a look at it. | 16:45.23 |
| Feel free to leave it with me, and pop back tomorrow. | 16:45.35 |
| Whatever is easiest for you. | 16:45.40 |
| I will look as soon as I get a mo. | 16:45.46 |
ofirr1 | Robin_Watts: thanks | 16:49.02 |
sebras | Robin_Watts: how does FILE * work with streaming for the curl case? | 17:13.13 |
Robin_Watts | sebras: MuPDF operates on fz_streams. | 17:13.34 |
| The question is do we build our file accessing fz_stream on file descriptors, or on FILE *'s. | 17:13.54 |
sebras | Robin_Watts: ok, so there is an intermediate layer inbetween anyway..? | 17:14.11 |
Robin_Watts | file descriptors are posix things. FILE *'s are generic C lib things. | 17:14.14 |
sebras | Robin_Watts: yeah I know. | 17:14.21 |
Robin_Watts | sebras: Yeah, sorry. | 17:14.31 |
sebras | Robin_Watts: I thought we had used curl's file descriptors directly though, hence my apprehension. | 17:14.47 |
Robin_Watts | Yes, it's just a question of what layer we want to put the 64bit file pointer stuff in at. | 17:14.50 |
| sebras: We don't use file descriptors when talking to curl. | 17:15.16 |
| (as far as I remember, and I haven't run into that while pushing it through :) ) | 17:15.39 |
| In any case the curl fz_stream is a different kettle of monkeys. | 17:15.57 |
sebras | Robin_Watts: alright. | 17:21.12 |
Robin_Watts | I am confused. | 17:21.58 |
| I set _LARGEFILE64_SOURCE before I include stdio.h | 17:22.31 |
| Why then do I see warnings about implicit declaration of fseeko64 on linux? | 17:23.00 |
sebras | Robin_Watts: do you need to set _FILE_OFFSET_BITS as well? | 17:24.34 |
Robin_Watts | Not according to my reading of the code. | 17:24.46 |
| Ah! | 17:25.18 |
| I see it. | 17:25.21 |
sebras | Robin_Watts: http://linux.die.net/man/7/feature_test_macros if you read _LARGEFILE64_SOURCE here it recommends _FILE_OFFSET_BITS=64 instead | 17:25.25 |
Robin_Watts | sebras: If I set _FILE_OFFSET_BITS=64 then I'm supposed to use off_t's. | 17:25.52 |
| tor8 and I discussed this earlier, and we prefer the idea of always using fz_off_t, and just calling the correct functions. | 17:26.12 |
sebras | Robin_Watts: I think so, yes. | 17:26.12 |
Robin_Watts | hence we don't run the risk of abi confusion. | 17:26.24 |
| I see what's wrong though. | 17:26.32 |
sebras | Robin_Watts: ..? | 17:26.41 |
Robin_Watts | cmapdump.c's #includery is upsetting my quick test-hack :) | 17:27.09 |
sebras | Robin_Watts: ah. | 17:27.17 |
Robin_Watts | tor8, (and sebras, if you are interested): fz_off_t commit on robin/master | 17:29.33 |
| henrys: ping ? | 18:04.32 |
henrys | yup | 18:04.40 |
| go ahead | 18:04.51 |
Robin_Watts | so the customer just got back to me about the large file stuff... | 18:04.59 |
| I should go ahead and share the work in progress with him then? | 18:05.18 |
henrys | I meant yup go ahead send them the code | 18:05.22 |
Robin_Watts | fab. Will do. | 18:05.29 |
henrys | I'm worried that customer copied the patterns and I'm trying to figure out how to say that and not sound accusatory | 18:21.56 |
Robin_Watts | I don't understand the issue well enough to comment. | 18:23.29 |
| mvrhel_laptop: I just sent you an email about Memento C++ operation. | 18:36.39 |
| It's not urgent at all. | 18:36.56 |
mvrhel_laptop | Robin_Watts: ok cool. | 18:37.18 |
Robin_Watts | Jungs code is still leaking, but I'm damned if I can see where. | 18:37.35 |
| The C++ stuff gets a clean bill of health now, as does the C. | 18:37.48 |
mvrhel_laptop | Robin_Watts: ok I am going to look it over now | 18:40.17 |
| I will see if the windows tools show me anything | 18:40.26 |
Robin_Watts | If you can spot a problem, I'd love to know how. | 18:40.43 |
mvrhel_laptop | the code analysis actually flagged a couple things I was doing wrong | 18:40.50 |
Robin_Watts | You'd think that the windows tools would have some way to spot leaks, but I am not aware of them. | 18:41.05 |
mvrhel_laptop | In performance and analysis section there is a memory usage test | 18:41.45 |
| I am going to fool with that to see if it is useful | 18:42.01 |
| ha it crashed! | 18:43.00 |
| oh I see why | 18:43.10 |
| dll path issue.... | 18:43.33 |
rayjj | Robin_Watts: mvrhel_laptop: there is a "memory_usage_tool" mentioned in the VS2015 "performance and diagnostics hub" http://blogs.msdn.com/b/visualstudioalm/archive/2014/11/13/memory-usage-tool-while-debugging-in-visual-studio-2015.aspx | 18:44.52 |
mvrhel_laptop | yes | 18:45.01 |
| its in 2013 | 18:45.04 |
| rayjj | 18:45.06 |
Robin_Watts | where where where? | 18:45.20 |
mvrhel_laptop | Analyze | 18:45.37 |
rayjj | mvrhel_laptop: (oops I was looking on the web and missed your comment) | 18:45.38 |
mvrhel_laptop | Performance and Diagnostics | 18:45.43 |
| Select Memory Usage | 18:45.59 |
| It will do both Native and Managed code | 18:46.16 |
Robin_Watts | Hmm. Memory Usage is greyed out in the "Not Applicable Tools" for me. | 18:46.18 |
mvrhel_laptop | oh what version do you have of VS? | 18:46.31 |
rayjj | Robin_Watts: do you have the Express edition ? | 18:46.34 |
Robin_Watts | VS2013 Community | 18:46.55 |
mvrhel_laptop | I have Professional Update 4 | 18:47.05 |
Robin_Watts | I think the tool is there, it just doesn't want to work with a C# project. | 18:47.29 |
mvrhel_laptop | no it works with a c# project | 18:47.39 |
| I am doing it now | 18:47.41 |
Robin_Watts | mvrhel_laptop: Are you using his MemoryLeakVS2013 project? Or one you made yourself? | 18:48.15 |
mvrhel_laptop | I am using his | 18:48.21 |
| well, I take that back | 18:48.32 |
| I grabbed his code that he gave us in an email | 18:48.41 |
| where you opens, inits and closes a bunch of times | 18:48.59 |
| s/you/he/ | 18:49.06 |
Robin_Watts | mvrhel_laptop: Right. | 18:49.14 |
| He gave us a couple of different versions of that. | 18:49.22 |
| The one I got working was called "MemoryLeakVS2013" | 18:49.34 |
| I had to make a tweak to cmapdump.c (or one of those) to make it compile. | 18:50.06 |
| 75% of the bytes allocated come from: System.AppDomain.InitializeDomainSecurity, apparently. | 18:55.34 |
rayjj | Robin_Watts: I am trying to understand the logic you have in gx_ht_construct_threshold that does all of the t_level "adjustment" from the basic t_level = (256 * l) / d_order->num_levels | 18:56.14 |
Robin_Watts | rayjj: oh, gawd. context switch. | 18:56.34 |
rayjj | Robin_Watts: that's OK. I'll dig through it. It was stuff you did in 2011 and 2013 | 18:57.15 |
| Robin_Watts: for bug 695929 it gets the threshold array totally wrong -- no levels in the array > 132 | 18:57.54 |
Robin_Watts | 65aa942c ? | 18:58.32 |
| and michaels 9fe33030 and 3ee407fa | 18:59.02 |
mvrhel_laptop | that would be a problem | 18:59.21 |
rayjj | Robin_Watts: and 300c3ea8. I _will_ ask you to review my changes | 18:59.23 |
| once I get this figured out. I suspect your logic works for num_levels < 255 but not with more | 19:00.21 |
Robin_Watts | rayjj: Right. The english in 300c3ea8 sounds plausible enough :) | 19:00.22 |
| rayjj: Ok. The t_adjust stuff is not mine. | 19:04.45 |
| 300c3ea8 has whitespace changes in it which makes it look like it's mine. | 19:05.13 |
| The t_level_adjust code originally comes from 5a435470 which was (sorry) mvhrel's. | 19:06.17 |
rayjj | Robin_Watts: yeah, I just dug back far enough to find that as well | 19:06.41 |
Robin_Watts | rayjj: I think you're probably exactly right; it will fail for num_levels >= 255 or something. | 19:07.13 |
| What is num_levels for you? | 19:07.22 |
rayjj | 337 | 19:07.30 |
| actually 338 | 19:07.44 |
Robin_Watts | So we are doing for (l = 1; l < num_levels; l++) | 19:09.32 |
| and t_level is being set to a value between 0 and 255 (equivalent to 256 * l/num_levels) | 19:10.14 |
mvrhel_laptop | actually some of that is rayjj's. I stole it from the tiff halftone device.. | 19:10.26 |
| we all are guilty | 19:10.45 |
Robin_Watts | and then I lose the sense of what it's doing :) | 19:10.58 |
| mvrhel_laptop: I am bothered that you can use the "Memory Usage" tool and i can't. | 19:12.30 |
| Did you rebuild his solution into something more sane ? | 19:12.42 |
rayjj | mvrhel_laptop: the original code in tiffsep gets it (mostly) right in that I get a reasonable threshold array | 19:12.49 |
mvrhel_laptop | I made my own and used his code (cleaned up a bit) | 19:13.10 |
rayjj | Robin_Watts: it's really not that much different except for the t_level adjustment | 19:13.20 |
Robin_Watts | mvrhel_laptop: Can I get a copy of that solution please? | 19:13.29 |
mvrhel_laptop | Robin_Watts: sure. let me send it to you | 19:13.45 |
rayjj | at least as far as that loop goes | 19:13.45 |
Robin_Watts | Thanks. | 19:13.52 |
rayjj | mvrhel_laptop: I'm sure you don't remember what it is you were trying to do. The log doesn't have much useful: | 19:14.56 |
| Fix so that the threshold values when applied will match the results given by the HT tiling code. Source of problem was due to the lack of a correction to account for the number of shading levels that are represented by the threshold array versus the number of levels indicated in the whitening order structure. The transition point was made in the same manner that it is handled in the HT... | 19:14.58 |
| ...tiling code. With the threshold fixed, there is a remaining phase issue in gximono.c to tackle | 19:14.59 |
mvrhel_laptop | rayjj: I do remember what I was doing. Trying to create a threshold array ;) | 19:15.21 |
rayjj | yeah, that's what the code you lifted did too. It's just that your version doesn't work ;-) | 19:15.52 |
mvrhel_laptop | I was trying to match what the tiling code did | 19:16.01 |
| rayjj: then copy and paste it | 19:16.23 |
rayjj | mvrhel_laptop: no, I'm sure your 'fix' fixes something. Just not when num_levels is large which doesn't happen all that often I guess | 19:17.20 |
mvrhel_laptop | Robin_Watts: so I had not yet added your fix to release the context and the tool does show the 8,232 bytes left over each time | 19:18.24 |
| it is actually a bit useful | 19:18.34 |
Robin_Watts | mvrhel_laptop: cool. | 19:18.46 |
mvrhel_laptop | it divides up the managed (c#) and unmanaged code | 19:18.53 |
| managed codes is all good as expected | 19:19.00 |
| let me add your fixes and see what I see | 19:19.16 |
Robin_Watts | really? I was expecting managed to be the culprit. | 19:19.18 |
mvrhel_laptop | well there is some funny stuff that I do, where the managed code is *supposed* to do a release of some unmanaged allocations. Maybe that is where there is an issue | 19:19.51 |
| but we will see | 19:20.07 |
| but those don't happen in this open and close scenario | 19:20.33 |
| only with some other operations | 19:20.37 |
| which I suppose I will need to test | 19:20.44 |
| I will probably make a more detailed memory test project for this stuff | 19:21.06 |
| rayjj: yes, I did not test with a large number of levels (more that 256) | 19:21.51 |
| s/that/than/ | 19:21.55 |
rayjj | mvrhel_laptop: so when I come up with something, I'll run it past you for review (not Robin_Watts, who I blamed at first -- sorry for that) | 19:23.27 |
Robin_Watts | rayjj: No worries. | 19:23.40 |
mvrhel_laptop | rayjj: and I will dust off the cobwebs in my neurons and hope that I can be of help | 19:24.10 |
rayjj | mvrhel_laptop: np. don't bother with it for now | 19:24.31 |
mvrhel_laptop | Robin_Watts: just sent you the cs project that I am using | 19:26.28 |
Robin_Watts | Thanks. | 19:26.35 |
mvrhel_laptop | just add it to the gsview solution | 19:26.36 |
| it should copy the mupdf64.dll builds when you build it | 19:27.04 |
Robin_Watts | I don't have the gsview solution to hand... | 19:27.12 |
mvrhel_laptop | ah | 19:27.15 |
| how are you calling the dll? | 19:27.25 |
Robin_Watts | I'll have to clone it. | 19:27.25 |
mvrhel_laptop | well you can just copy the dlls then from the gsview directory | 19:28.01 |
Robin_Watts | I'm using his solution that has generated/libmupdf/libthirdparty/MemoryLeak.cs in. | 19:28.08 |
mvrhel_laptop | ok | 19:28.15 |
Robin_Watts | I've got to go help with dinner. If you don't solve it I'll bash some more tomorrow. | 19:28.35 |
mvrhel_laptop | I just added this to the gsview solution. I may actually add it in permanently with some other tests | 19:28.43 |
| Robin_Watts: ok. have a good diner | 19:29.22 |
Robin_Watts | thanks. | 19:29.39 |
mvrhel_laptop | oh good catch with the String_to_char. There are actually a few places where that is an issue. I will fix those now | 19:32.47 |
| Robin_Watts: so with your fixes there are no more leaks | 19:59.28 |
| it is rock steady | 19:59.39 |
| I will commit this and send jung a new version of mupdfnet | 20:00.29 |
| ok done | 20:05.00 |
| heading off to lunch | 20:05.24 |
Robin_Watts | mvrhel_laptop: For the logs. Brilliant. The thing is Jung claims that even with those fixes, he can see the memory use increasing in the process manager, and I can reproduce that here. | 22:52.59 |
sebras | Robin_Watts: hm... mu tool show test.pdf grep trailer gives me warnings since commit f533104 | 23:44.48 |
| s/mu tool/mutool | 23:44.56 |
| it's complaining that we can't seek backwards. | 23:45.11 |
| I bisected to either f533104 or 563c482 being the culprit. | 23:45.34 |
| can't say I understand how these could be related to seeking in files..? | 23:46.12 |
| Robin_Watts: https://web.archive.org/web/20080410212601/http://www.adobe.com/devnet/acrobat/pdfs/pdf_reference.pdf this is the file. | 23:50.35 |
| Robin_Watts: I think commit 563c482 leaves the stm->rp and stm->wp being non-NULL which causes fz_seek() to have fz_tell() return 364 for the linked file. which means that offset == -364 which then leads to the warning about seeking backwards. | 23:58.34 |
| Robin_Watts: why does this commit affect what stm->rp and stm->wp would be..? | 23:59.07 |
sebras | is too tired to debug. going to bed. | 23:59.16 |