| 2013/06/25 |
sebras | hi | 00:16.24 |
ghostbot | hey | 00:16.24 |
sebras | ghostbot: welcome back! :) | 00:16.31 |
kens | God, close 2 bugs, get 4 new ones.... | 08:03.53 |
chrisl | Pretty sure at least a couple aren't really yours | 08:04.16 |
kens | I just passed one to Ray (15+ Gb temp files) | 08:05.35 |
| I think that's actually legit, but Ray probably can figure out why the files are so big faster than me | 08:05.54 |
| It's a monstrously complex PDF file | 08:06.11 |
chrisl | It may have been fixed - he did fix something similar not that long ago | 08:06.26 |
kens | <sigh> and his other report is another 15Mb PDF file | 08:07.19 |
chrisl | Which, the JPG one? | 08:07.31 |
kens | No, the other Gigs one, with (at least) 8 spot colours | 08:08.04 |
chrisl | Oh, right, that'll be a color one, then? | 08:08.22 |
kens | 'probably'. Looks like it has full page overprinting, Michael will be delighted | 08:08.50 |
| And it's NChannel too. | 08:09.36 |
| By the way, what's 'gsc'? | 08:11.18 |
| I wasn't aware we built GS with that name on Linux | 08:11.32 |
| Hmm, I get an undefinedfilename in showpage | 08:13.14 |
| OK fixed that. | 08:13.41 |
| Guess I'd better boot a VM and try this in Linux | 08:14.17 |
| No, fails on Windows too, and in a debug build :-) | 08:14.37 |
| Good grief, showpage is a procedure in gs_init.ps. Of course it is.... | 08:20.07 |
| Joy, looks like it's a VM bug | 08:20.27 |
chrisl | kens: gsc is one of the dynamic lib apps, IIRC - gsx and gsc | 08:26.48 |
kens | Oh, OK | 08:27.18 |
| Right, that's the other Gigs one off to Ray as well. | 08:36.52 |
Robin_Watts | tor8: ping ? | 08:38.16 |
tor8 | Robin_Watts: hi. | 08:38.29 |
Robin_Watts | In pdf-repair.c | 08:38.54 |
| line 44ish | 08:38.58 |
| we call pdf_parse_dict and we pass NULL for the pdf_document | 08:39.11 |
| There is a comment there that says "Send NULL xref so that we don't try to resolve references" | 08:39.29 |
| I can't see anywhere in pdf_parse_dict that we WOULD try to resolve references. Am I being dim? | 08:39.48 |
tor8 | let me try to remember | 08:47.16 |
| I don't think it was set to NULL for the parse_dict call, but for later use | 08:47.36 |
Robin_Watts | tor8: Sorry, I have radu talking to me in a skype window. | 08:51.32 |
| I don't follow. | 08:52.06 |
| The one place where I can see a NULL is in the pdf_parse_dict call. | 08:52.18 |
tor8 | we set it to null so that we wouldn't try to resolve any indirect references contained in the dict *later* | 08:53.05 |
| not during the parse call, but from any pdf_load_object calls that pulled up the same cached obj | 08:53.25 |
| and further operations on it | 08:53.36 |
| but I don't remember exactly why | 08:53.46 |
Robin_Watts | tor8: for that to be true, the NULL passed into the pdf_parse_dict call would need to be stored in the lexed objects somewhere, yes ? | 08:55.21 |
tor8 | we stored the pdf_xref type in the objects at one time | 08:55.47 |
| before fz_context was introduced | 08:55.55 |
| so this could all be old junk | 08:55.59 |
Robin_Watts | right. My suspicion is that it is indeed old. | 08:56.18 |
| Ah. We have a call to pdf_new_indirect in there, and that stores the xref. | 08:56.58 |
tor8 | maybe because we created a new pdf_xref object at the end? | 08:57.00 |
| so didn't want old stale pointers in there | 08:57.09 |
| (just a guess) | 08:57.18 |
Robin_Watts | but the pdf_document pointer won't change now. | 08:57.27 |
| which is what we store. | 08:57.33 |
| the xref contents might, but the pdf_document won't. | 08:57.41 |
| so I think we're safe. | 08:57.44 |
tor8 | yeah. I think it ought to be safe. | 08:58.15 |
| I just looked at the ohloh graph of mupdf | 08:58.29 |
| code size was stable from 2004-2011 | 08:58.43 |
| then it blew up, and has doubled in two years | 08:58.50 |
Robin_Watts | 3 months since last commit? | 08:59.08 |
tor8 | too many new features! :) | 08:59.15 |
| said last commit was 3 days ago | 08:59.28 |
| we had a local minimum in 2010 of 37kloc | 09:00.26 |
| now we're at 99kloc | 09:00.39 |
Robin_Watts | has a well established, mature codebase | 09:03.26 |
| maintained by a large development team | 09:03.28 |
| with increasing Y-O-Y commits | 09:03.30 |
| Apparently we're fat. :) | 09:03.33 |
kens | wonders what a Y-O-Y commit is | 09:04.48 |
Robin_Watts | Year on Yea | 09:04.55 |
kens | Why oh Why? | 09:04.56 |
Robin_Watts | Year on Year | 09:04.57 |
| :) | 09:05.04 |
tor8 | Why Oh Why, did we do that! seems more appropriate :) | 09:05.23 |
Robin_Watts | tor8: 2 commits on robin/master then. | 09:07.26 |
| Interestingly gs has ballooned since 2010 too. | 09:10.14 |
tor8 | s/xref_entry/xref/ in the commit message? | 09:10.27 |
Robin_Watts | That must be us pulling in dependencies. | 09:10.32 |
tor8 | in gs that's probably the case | 09:10.51 |
| or you're just outproducing the lot of us ;) | 09:11.00 |
kens | FreeType would be a big part of that | 09:11.02 |
Robin_Watts | do I not need xref ? | 09:11.11 |
| tor8: I am the infinite monkey :) | 09:11.19 |
| do I not *mean* xref_entry ? | 09:11.30 |
tor8 | xref_entry is the individual slots in the xref (section) struct | 09:11.53 |
| table | 09:12.01 |
| thingy | 09:12.03 |
Robin_Watts | I mean 'pdf_xref' | 09:12.18 |
| ok, will reword. | 09:12.21 |
tor8 | the patches look fine apart from that message | 09:13.28 |
| so go ahead once you've fixed | 09:13.35 |
Robin_Watts | radu is complaining that the build script is borked for ios. | 09:13.49 |
| i'm trying to build your latest code | 09:14.09 |
tor8 | Robin_Watts: I'm surprised the ios build has worked at all the last couple of months | 09:14.09 |
Robin_Watts | [09:51:26] Radu Lazar: iOS version | 09:14.11 |
| [09:51:50] Radu Lazar: even after make generate it still fails | 09:14.13 |
tor8 | I guess I could try to fix | 09:14.14 |
Robin_Watts | [09:55:23] Radu Lazar: no rule to make target libs | 09:14.15 |
| [09:59:03] Radu Lazar: this is from the build script | 09:14.16 |
| [09:59:03] Radu Lazar: make -C .. libs || exit 1 | 09:14.18 |
| [09:59:11] Radu Lazar: i thinks it should be ../.. | 09:14.19 |
| even with that | 09:14.23 |
| [10:12:20] Radu Lazar: it still fails after | 09:14.25 |
| [10:12:34] Radu Lazar: some paths don't add up, i guess | 09:14.27 |
tor8 | which version of ios/xcode is he using? may as well use the same when fixing it. | 09:14.45 |
| I have the Xcode 5 preview thingy installed too | 09:15.08 |
Robin_Watts | 4.6.3 and ios6.1 | 09:15.20 |
| ios61 | 09:15.30 |
tor8 | thanks. | 09:15.40 |
| I'll go ahead and ruin my day then.. | 09:15.57 |
Robin_Watts | :) | 09:16.46 |
tor8 | I got my new computer all assembled, now to set up linux and a development environment for opengl... but that can wait till after I've torn out my remaining hair over some new apple madness :) | 09:18.01 |
sebras | tor8: once the top of your head is cleared, you might want to start tearing your eyelashes. ;) | 09:25.56 |
tor8 | Robin_Watts: make -C .. libs -> "make -C ../.. OUT=$OUT libs" and then fix the ..'s in ../../$OUT/*.o in the ar line later on fixes it | 09:26.12 |
| tear off my eyelids! | 09:26.18 |
| Robin_Watts: ios build fix on tor/master | 09:28.22 |
| doesn't promise it runs after, but at least it compiles | 09:28.33 |
| don't push yet though | 09:32.40 |
Robin_Watts | right. hot water sorted, so there is a reasonable chance I can have a shower when I get back from my run. bbiab. | 09:47.51 |
tor8 | I. Hate. X. Code. | 10:11.56 |
paulgardiner | Robin_Watts, tor8: Ah good. Three of my reviews are in. Thanks. I'm guessing there are questions about or problems with the fourth. | 10:20.54 |
tor8 | paulgardiner: read the irc logs from yesterday :) | 10:21.09 |
paulgardiner | Right. Will do. | 10:21.19 |
tor8 | Robin and I had a long discussion about it | 10:21.19 |
paulgardiner | Eek! Sounds ominous. | 10:21.35 |
| Hmmm. Finding that hard to follow. One thing that it took me ages to realise is that we don't have to worry about multiple occurrences of non-indirect objects: although our data structures support that possibility, the file format doesn't, so it can never occur. | 10:35.49 |
| Second thing. We don't need to copy the whole hierarchy above an object we wish to change, just the nodes on a path up from the object to the next indirect one. | 10:38.31 |
| So, if I'm understanding correctly, the idea is to make it more systematic about which thing to clone rather than relying on the fact that we know the important nodes in the hierarchy for all the particular cases that arise with forms and annotations. | 10:42.28 |
| So, I guess we could, while reading the file, keep a parent pointer for each non-indirect object... but setting that pointer would be highly nonsystematic and so just as fragile as performing the cloning on a case-by-case basis | 10:46.06 |
tor8 | paulgardiner: the parser (and dict_puts, etc) could make sure the parent pointers are up to date | 11:03.11 |
| but we're not sure the parent is guaranteed to have the same lifetime or we'll introduce reference counting cycles | 11:03.44 |
| when writing out a new pdf the incremental xref section will need the full indirect object, so it won't hurt us (much) to clone the whole thing anyway | 11:04.15 |
| and it makes the book-keeping simpler | 11:04.23 |
paulgardiner | Lost me. Whole thing? | 11:04.56 |
tor8 | 10 0 obj << ...big nested dictionary... >> endobj | 11:05.13 |
| that whole thing has to be written out from scratch even if it's only one of the nested leaves in a big tree of dicts that has changed | 11:05.36 |
paulgardiner | Still lost | 11:06.19 |
tor8 | 10 0 obj << /Foo << /Bar 3 >> /Quux 42 >> endobj | 11:06.38 |
| if we change 10's /Foo/Bar entry | 11:06.53 |
| all of 10 will have to be written in the new xref section | 11:07.08 |
| anyway, if you read all of yesterday's discussion I think you'll find it doesn't matter | 11:08.17 |
| since we can just zero the "cache entry" in xref->entry[x]->obj for the old xref section and move the pointer to the incremental section | 11:08.58 |
paulgardiner | yeah, but I don't get how that impacts on the incremental xref section. | 11:09.06 |
| I read it. I don't get it. | 11:09.17 |
tor8 | okay. here's my proposal from scratch: | 11:09.45 |
| we keep some book keeping in each pdf_obj (the object number of the containing obj/endobj more specifically) | 11:10.18 |
| whenever we try to mutate a pdf_obj, we need to move the containing obj/endobj into the incremental xref section | 11:10.56 |
| so pdf_dict_puts etc would find the corresponding "old" xref section, zero out the obj pointer, and make a new xref slot in the incremental xref section and put the pointer there instead. | 11:11.34 |
| and then from the root object it'd go and zero out the book-keeping object number (so we don't waste cycles doing the same thing over and over) | 11:12.00 |
| make sense so far? | 11:12.08 |
paulgardiner | Not quite. | 11:12.47 |
tor8 | the idea is to automatically put *all* edits into the incremental xref section | 11:13.03 |
paulgardiner | zerp out the obj pointer? | 11:13.06 |
tor8 | anything before the incremental section is immutable | 11:13.14 |
paulgardiner | s/zerp/zero/ | 11:13.15 |
tor8 | copy-on-write essentially | 11:13.18 |
paulgardiner | But we may need to refer to the old version | 11:13.38 |
tor8 | pdf_xref->table[x].obj has a pointer to an object (which is cached) | 11:13.43 |
paulgardiner | So would not zeroing make it unobtainable? | 11:13.55 |
tor8 | trying to read the old object by using the old (non-incremental) xref section would reload it from file using the xref file offset | 11:14.27 |
| anybody trying to read the object from the "top" of the xref section stack (i.e. the latest version of an object) would get the mutated copy that hasn't been saved to file yet | 11:15.07 |
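tor8's read-side description could be sketched like this: a stack of xref sections searched newest-first, so the incremental section shadows older ones, and a cleared cached pointer in an old section would mean "reload from the file offset". The structures and names below are simplified stand-ins for illustration, not MuPDF's real API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified xref slot: 'used' marks an occupied entry,
   'obj' is the cached in-memory object (NULL = reload from 'ofs'). */
typedef struct { int used; void *obj; long ofs; } slot;
typedef struct { slot table[16]; } xref_section;

/* Resolve an object number by walking the section stack newest-first;
   sections[0] is the in-memory incremental section. */
static slot *resolve(xref_section *sections, int nsections, int num)
{
    int i;
    for (i = 0; i < nsections; i++)
        if (sections[i].table[num].used)
            return &sections[i].table[num];
    return NULL; /* object not present in any section */
}
```

Anyone asking for the latest version starts at the top and gets the mutated copy; code deliberately reading through an older section never sees it.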
paulgardiner | So you zero it out so that you can avoid cloning it? | 11:17.02 |
tor8 | yup. | 11:17.15 |
Robin_Watts | We zero it out a) so we don't try to read the old version and find the new version there | 11:17.29 |
| and b) so we don't clone it again and again. | 11:17.39 |
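The write side being discussed (move the containing object into the incremental section, clear the old cached pointer so stale reads fall back to the file) might look roughly like this minimal sketch. All names and structures here are hypothetical simplifications, not MuPDF's actual API:

```c
#include <assert.h>
#include <stddef.h>

typedef struct pdf_obj_s {
    int parent; /* object number of the containing obj/endobj, 0 = none */
} pdf_obj;

enum { XREF_FREE = 0, XREF_USED = 1 };
typedef struct { int type; pdf_obj *obj; } xref_slot;

#define NSLOTS 16
static xref_slot old_xref[NSLOTS];  /* immutable on-disk sections */
static xref_slot incr_xref[NSLOTS]; /* the incremental section */

/* Called before mutating any pdf_obj: move its containing indirect
   object into the incremental section and zero the old cache pointer,
   so (a) old-section reads go back to the file offset, and (b) the
   type check makes repeated calls cheap no-ops. */
static void move_to_incremental(pdf_obj *obj)
{
    int num = obj->parent;
    if (num == 0 || incr_xref[num].type == XREF_USED)
        return; /* brand-new object, or already moved */
    incr_xref[num].type = XREF_USED;
    incr_xref[num].obj = old_xref[num].obj;
    old_xref[num].obj = NULL;
}
```

The `incr_xref[num].type` test is the "check the xref slot" alternative to recursively zeroing parent numbers that gets debated below.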
| oh, sorry, the pdf_obj 'parent' int - that's just for b) aiui. | 11:18.12 |
paulgardiner | Once you clone it and it's in the incremental section, you wouldn't clone it again anyway | 11:18.19 |
Robin_Watts | Right, but whenever we alter it, we'd have to check whether it was there or not. | 11:18.47 |
| it's a question; is nulling recursively on a move more or less expensive than repeatedly rechecking objects when we touch them ? | 11:19.21 |
| paulgardiner: There was a small problem with the 3rd commit. | 11:20.02 |
| Where you malloc a list, then populate it, then use it, then free it. | 11:20.14 |
| If you throw an exception during the building, you can be left with an unfreed partial list. | 11:20.31 |
paulgardiner | Robin_Watts: yeah I saw the comment. So all sorted now though? | 11:20.33 |
Robin_Watts | no, I didn't fix that yet. | 11:20.42 |
paulgardiner | Oh okay. I'll sort that. | 11:20.52 |
Robin_Watts | I have some work preparatory to the changes tor8 is talking about. | 11:21.20 |
| on robin/master | 11:21.32 |
paulgardiner | I have yet to understand how nulling is avoiding a check | 11:21.39 |
tor8 | paulgardiner: I think robin's referring to setting pdf_obj->number to 0 as opposed to checking the xref slot | 11:22.18 |
| (correct me if I'm wrong) | 11:22.28 |
paulgardiner | I don't get why this is confusing me so much. Now I can't imagine what I could do with a pdf_obj that had had its number set to 0. I can't even work out what it used to refer to. | 11:28.02 |
Robin_Watts | paulgardiner: Currently pdf_objs' don't have any 'parent' information. | 11:31.39 |
| If I have a pdf_obj that is (say) a name or a string or an int, there is no way to know what its parent is. | 11:32.07 |
| Likewise if I have a dictionary, I have no way of knowing, purely from the pdf_obj, where that lives in the hierarchy. | 11:32.31 |
| yes? | 11:32.32 |
paulgardiner | So by "number", did you mean the number of the containing indirect? | 11:33.58 |
Robin_Watts | We are proposing adding a new field to pdf_obj, called, say, parent. The purpose of this field is to tell us what the closest parent of the object that has an xref entry is. | 11:33.58 |
| Yes. | 11:34.06 |
paulgardiner | And nulling meant zero that? | 11:34.30 |
Robin_Watts | yes. | 11:34.36 |
| or setting it to -1 or something. | 11:34.45 |
| some indication that this object is guaranteed to be in an object that's in the incremental section already. | 11:35.18 |
paulgardiner | And that happens somehow as part of the moving to the incremental section? | 11:35.29 |
tor8 | there can't ever be an indirect object with number 0 (it's reserved for the first "free" slot in the xref) | 11:35.51 |
paulgardiner | number is set only for objects with parents in the incremental section? | 11:36.39 |
Robin_Watts | Whenever we do a pdf_dict_write (or whatever the function is), we need to ensure that the whole hierarchy that that dictionary lives in (up to the closest enclosing indirect reference) lives in the incremental section. | 11:36.44 |
| No. number is set for all objects. | 11:36.57 |
paulgardiner | But we null it for some? | 11:37.09 |
Robin_Watts | When we realise that we've changed an object (and hence need to move its containing hierarchy to the incremental section) we can set the 'parent' value for each object in the bit that we move to 0. | 11:38.23 |
paulgardiner | Meaning don't move again | 11:38.57 |
Robin_Watts | indeed. | 11:39.02 |
tor8 | so the question is whether it's faster to zero the hierarchy, or do the test at each modification | 11:40.03 |
Robin_Watts | I suspect doing the test at each modification will be cheap enough. | 11:50.50 |
| read the number. read the xref pointer, read xref[number].type | 11:51.31 |
paulgardiner | Talking on the phone helps. Yeah, get it now. So the number = 0 is just an optimisation to avoid checking it's in the incremental section | 11:51.41 |
Robin_Watts | yeah. | 11:51.48 |
| Suppose we do *two* incremental things? | 11:52.04 |
| sign it once, then sign it again. | 11:52.10 |
| The number = 0 thing becomes problematic then. | 11:52.22 |
| because things that were updated and were in the incremental section no longer are. | 11:52.37 |
| so I'd vote for not doing the number = 0 thing. | 11:52.48 |
paulgardiner | But overall the idea is move instead of clone, which involves setting the old obj pointer to zero so it gets reloaded if needed, and making the spotting of the parent indirect object systematic rather than using case-by-case knowledge | 11:53.32 |
| Robin_Watts: ah yes, on saving the incremental section is no longer incremental | 11:54.41 |
| It was mainly the move instead of clone that was confusing me, but get it now. | 11:55.16 |
| Yes. Nice. | 11:55.24 |
Robin_Watts | paulgardiner, tor8, sebras: http://www.kickstarter.com/projects/1949537745/armikrog | 11:55.52 |
paulgardiner | So who's doing what? I could take this over if you like, or you could continue | 11:55.54 |
Robin_Watts | paulgardiner: I'm happy for you to take it over. | 11:56.20 |
paulgardiner | Okay great | 11:56.32 |
Robin_Watts | I ought to just test pdfclean with my existing change. | 11:56.41 |
| the cluster passes it, and tor8 was happy. | 11:56.50 |
tor8 | paulgardiner: yes, I agree. checking the xref section rather than 0 sounds more robust. | 11:57.12 |
Robin_Watts | If we ever do merging of documents, then having pdf_doc pointers in the pdf_obj's will mean we need to change ownership, but I don't think that affects us at the moment. | 11:57.50 |
paulgardiner | Hmmm, the only thing I wonder is whether this is worth growing the obj structure for: the cases we need to handle are very few. Another possibility would be to change my existing patch to use move rather than clone | 11:59.45 |
| It's not like this will need revisiting for each new annotation type. | 12:00.41 |
Robin_Watts | paulgardiner: The pdf_obj structure shrank by 8 bytes yesterday. | 12:00.42 |
| we're about to grow it by 4 :) | 12:00.55 |
paulgardiner | Oh well, would be rude not to take advantage of that. | 12:01.01 |
Robin_Watts | so it's a net win. | 12:01.03 |
| paulgardiner: Do you want to fix that list malloc failure thing? | 12:01.34 |
| while I test pdfclean ? | 12:01.43 |
paulgardiner | Yeah. Will do that in a min | 12:01.45 |
Robin_Watts | tor8, paulgardiner: If you "mutool clean -d pdf_reference17.pdf out.pdf" do you get warnings about trying to load object 333280 ? | 12:04.08 |
| tor8: See Skype. | 12:12.26 |
| ok, old code is giving me the same errors with the xrefs. | 12:14.19 |
| paulgardiner: pushed. | 12:15.13 |
| paulgardiner: So, there is a problem with your reworked xref code. | 12:24.45 |
| let's do this on the phone, it'll be easier. | 12:25.05 |
tor8 | Robin_Watts: ios fix on tor/master | 12:31.38 |
Robin_Watts | tor8: Let me look now. | 12:32.42 |
| pushed | 12:34.55 |
tor8 | thanks. | 12:35.24 |
paulgardiner | Hmmm, three cases: 1 - we call pdf_dict_set while loading from a file for a dict that has yet to be linked into the hierarchy of any indirect object; 2 - we call pdf_dict_set while loading from a file for a dict that is linked into an indirect object; 3 - we call pdf_dict_set to edit a document. | 12:57.58 |
| Not sure how to distinguish and handle these cases, especially as we load objects lazily, so we may edit page 1 before loading the objects for page 2. | 12:59.11 |
| There's a fix for the potential memory leak pushed to paul/master, BTW. | 12:59.48 |
tor8 | paulgardiner: when inserting an object into a dict or array, inherit the dict/array's parent number | 12:59.56 |
| and when loading an indirect object from file, set its number at or after creation | 13:00.22 |
| setting it after would have to be recursive though | 13:00.34 |
paulgardiner | tor8: yeah, I think we are forced to do the recursion because we probably create the non-indirect tree and then indirect it | 13:01.32 |
| Actually no, that's not quite right. | 13:02.00 |
| But still, whether a dict will already have a parent number to inherit will depend very much on the particular code used to load from the file | 13:04.04 |
tor8 | hm, the xref->trailer may need some special magic number here? | 13:05.44 |
paulgardiner | Sorry, not getting that. | 13:06.43 |
tor8 | when mutating the trailer object... wouldn't we need to do the same "move to incremental" stuff? | 13:07.19 |
paulgardiner | Each xref-section structure has its own trailer obj | 13:08.29 |
tor8 | do you create a new trailer object clone when creating the incremental section? | 13:09.02 |
| just figured we shouldn't forget about mutating the xref... | 13:09.28 |
paulgardiner | I think so. Just looking now. | 13:09.29 |
| Hmmm, I seem to just pdf_keep_obj it. That can't be right. | 13:09.49 |
tor8 | whether that should also automatically create a new incremental section or something else | 13:10.04 |
| the trailers for new format xrefs have the Prev and Next and file offsets in the dictionaries | 13:10.29 |
| so if you're just pointing to the same as the old trailer, it sounds like you've got some more work to do :) | 13:11.10 |
paulgardiner | Yeah, I was not worrying about that too much because it should have no ill effects until we want to compare versions of the file. | 13:12.09 |
| I'm much more concerned about how to recognise the different cases of pdf_dict_put | 13:12.46 |
tor8 | paulgardiner: there's only one pdf_dict_put at the bottom that does the work | 13:14.05 |
| there is array_put/push/insert | 13:14.51 |
paulgardiner | Yeah sure, but knowing whether it is being called for an object load from the file or for the sake of editing is the problem | 13:15.01 |
tor8 | paulgardiner: I'd suggest a call pdf_set_object_number(obj, num) that recursively sets the object parent number | 13:15.33 |
| and add calls to that to pdf_load_ind_obj and pdf_load_stm_obj | 13:15.44 |
| and pdf_update_object | 13:15.55 |
| maybe this xref and object loading stuff could be refactored a bit to be clearer | 13:16.19 |
| init a new object with parent number set to 0 | 13:16.49 |
| and then (recursively) inherit any time an object is linked into a dict or array | 13:17.19 |
| when parsing that shouldn't be too expensive (since parsing always adds to the end) | 13:18.07 |
| or if you have a better idea, use that :) | 13:18.30 |
paulgardiner | Trouble is, although I like this idea a lot, looking into the detail, I'm struggling to see a way to handle all the cases that is going to be less error prone than the case-by-case handling I did before. | 13:20.08 |
| I guess it should be the case that there are only a few well defined places in the code where we parse objects and just calling the recursive setting of the parent there should work. | 13:21.24 |
| And maybe we can distinguish between pdf_dict_put for the sake of editing or loading on the basis of whether the parent is set. | 13:22.28 |
| Yeah, would that work: when calling pdf_dict_put, dict's parent == 0 means this dict is being altered for the sake of being parsed from the file? | 13:24.25 |
| And once we've loaded an object, we recursively set parent. | 13:24.40 |
Robin_Watts | We discussed on the phone the idea of having a doc->incremental_mode flag. | 13:25.26 |
| If doc->incremental_mode =0, then we just change the dict, no messing with parents etc. | 13:25.53 |
| If doc->incremental_mode = 1 then we check the parent and move if required. | 13:26.11 |
| so we do normal loading with doc->incremental_mode set to 0. | 13:26.23 |
| Oh, but doing parsing later may be a problem, is that what you're saying? | 13:26.50 |
paulgardiner | Yeah. I realised that after we talked | 13:28.42 |
tor8 | paulgardiner: there are only two places objects are loaded from file -- pdf_parse_ind_obj and pdf_parse_stm_obj ... and maybe the repair code has something as well | 13:30.08 |
Robin_Watts | so, we could stash the incremental mode flag somewhere, do the parse, then reinstate the flag ? | 13:30.17 |
paulgardiner | Above I suggested that the dict's parent number being 0 could be used to determine whether we are loading for editing. | 13:31.42 |
tor8 | parent==0 when creating objects, if parent==0 when mutating, do nothing. then set parent after the object is loaded. that should cover both the incremental and non-incremental cases. | 13:31.48 |
paulgardiner | tor8: so you think that would work? | 13:32.21 |
tor8 | when inserting an object into a dict or array, if the dict or array has a parent != 0, recursively set the object's parent | 13:32.25 |
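The parent-number rules tor8 lays out (parent == 0 while an object is still being built by the parser; the whole subtree inherits a number once it is linked into a container that already has one) can be sketched as follows. The `obj` struct and both functions are illustrative stand-ins, not MuPDF's real pdf_obj or pdf_dict_put:

```c
#include <assert.h>
#include <stddef.h>

#define MAXKIDS 8

typedef struct obj {
    int parent;                /* number of containing obj/endobj, 0 = none yet */
    int nkids;
    struct obj *kids[MAXKIDS]; /* children of a dict or array */
} obj;

/* Recursively stamp an object (and its subtree) with the number of the
   indirect object it now lives in; called e.g. after pdf_parse_ind_obj. */
static void set_parent(obj *o, int num)
{
    int i;
    o->parent = num;
    for (i = 0; i < o->nkids; i++)
        set_parent(o->kids[i], num);
}

/* Dict/array insertion: if the container is already owned by an
   indirect object, the inserted subtree inherits that ownership;
   during parsing (parent == 0) nothing happens, which is what lets
   the same put routine serve both loading and editing. */
static void link_child(obj *container, obj *child)
{
    container->kids[container->nkids++] = child;
    if (container->parent != 0)
        set_parent(child, container->parent);
}
```

With this, a pdf_dict_put-style call only needs to look at `parent`: 0 means "still being parsed, mutate freely", non-zero means "check the incremental section first".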
Robin_Watts | tor8: That solves the parsing problem, yes. | 13:32.30 |
| but I still think we need an incremental_mode or not to say whether we are working in a way where all updates should be incremental, or whether we want to completely rewrite the file (like when we're cleaning) | 13:33.09 |
paulgardiner | tor8: that last case occurring only when editing? | 13:33.23 |
| Robin_Watts: yeah having a mode flag is still good. | 13:33.54 |
tor8 | Robin_Watts: yes. incremental mode should govern whether a mutation on a non-zero parent moves the object to the incremental section or not. | 13:34.06 |
Robin_Watts | This sounds like a nice sane solution. No odd corner cases that I can immediately see. | 13:34.34 |
tor8 | but the incremental flag shouldn't affect maintaining the parent numbers | 13:34.36 |
| so all objects that live in an xref have a parent number, objects that don't (yet) have a zero parent number | 13:35.02 |
paulgardiner | yeah, sounds good | 13:35.17 |
tor8 | any updates automatically move into the incremental xref section if the incremental mode is set, no possible way to forget | 13:35.30 |
| pdf_update_object which is used to insert a newly created object into the xref also needs to assert parent ownership | 13:36.11 |
paulgardiner | It's only arrays and dicts that need to be recursed through, right? | 13:36.20 |
tor8 | and of course the normal indirect and object stream parser cases | 13:36.29 |
Robin_Watts | Yes. | 13:36.34 |
tor8 | paulgardiner: correct. | 13:36.35 |
| the repair and cleanup code does magic trickery, but I think at the end of this incremental xref updating they should be mostly using the 'regular' apis for building and rebuilding xrefs | 13:37.20 |
| s/updating/project/ | 13:37.32 |
mvrhel_laptop | Robin_Watts: git workflow question for you (or for tor8) | 13:50.20 |
| I was going to get going on the lcms mupdf stuff and I know that tor8 has a branch tor/lcms2. Can I just checkout that branch and go on from there? | 13:52.28 |
| I am a little worried about what the origin is in this case | 13:52.56 |
Robin_Watts | mvrhel_laptop: You can. | 13:53.12 |
Gigs- | kens: thanks for looking at the bugs | 13:53.28 |
kens | NP | 13:53.35 |
mvrhel_laptop | And then when I push Robin_Watts does it go to my repos? | 13:53.42 |
Robin_Watts | git fetch from tor8's repo, then you'll see a tor/lcms2 branch. | 13:53.55 |
Gigs- | kens: we get those complex files sometimes... I think what they come from is a forced language level postscript converted to PDF | 13:53.58 |
mvrhel_laptop | Robin_Watts: yes I see the branch now | 13:54.08 |
Gigs- | kens: gradients and transparency decomposed to raster | 13:54.08 |
tor8 | hm, I think I should update the lcms2 branch to the latest reshuffle | 13:54.12 |
Robin_Watts | Create yourself a branch at that same place (lcms2). | 13:54.13 |
kens | Gigs, possibly, it's really hard to tell | 13:54.15 |
Robin_Watts | Then you can rebase it. | 13:54.32 |
| You may be best to let tor8 rebase that first for sanity. | 13:54.41 |
mvrhel_laptop | tor8: oh I will wait if you are going to do that | 13:54.45 |
Gigs- | they do eventually render but I drive gs from the web and people hit stop and reload which just runs another copy of gs | 13:54.52 |
tor8 | I'll do it right now, shouldn't take long | 13:54.56 |
Gigs- | Robin_Watts: tested and they are private things | 13:55.09 |
| thanks not things | 13:55.20 |
kens | Gigs, one of those is definitely a bug, the crash; the other, well I'm not sure, so I asked Ray to look at it, 15Gb seems a lot, but.... | 13:55.53 |
Robin_Watts | Gigs: Eh? Me? What? | 13:56.06 |
| Oh, the attachments. Gotcha. | 13:56.34 |
Gigs- | kens: well maybe there's a way to handle them a little more efficiently or something, we'll see | 13:57.17 |
| kens: I guess with planar all those color spaces are getting rendered into planar layers? | 13:57.46 |
kens | Thats a question for Robin_Watts :-) | 13:58.02 |
Gigs- | I'm not too hung up on the complex file though, the segfault is more pressing IMO | 13:58.13 |
Robin_Watts | In planar mode, we render in planar mode, yes :) | 13:58.18 |
Gigs- | Robin_Watts: if you weren't keeping score this file is said to have 750 color spaces :P | 13:58.39 |
| kens: you are lucky though, back in the days of DCS2 files I had to get artifex attachments that were sometimes 150 megs | 13:59.10 |
kens | Many are duplicates of course | 13:59.10 |
Robin_Watts | 750 separations? | 13:59.24 |
Gigs- | no not separations | 13:59.29 |
| I think it's probably a gradient that is decomposed | 13:59.41 |
Robin_Watts | Well, what matters to planar is the number of separations. | 13:59.42 |
Gigs- | line-by-line gradient fun | 13:59.57 |
| one raster image for each line :P | 14:00.20 |
kens | That might explain it | 14:00.36 |
Gigs- | just guessing but it's usually something like that | 14:00.46 |
paulgardiner | Robin_Watts: potential memory leak fix is up | 14:00.49 |
Robin_Watts | paulgardiner: Yeah, will look in a just a mo. mid debugging now. | 14:01.06 |
paulgardiner | np | 14:01.18 |
tor8 | mvrhel_laptop: okay, tor/lcms2 has been updated | 14:04.39 |
mvrhel_laptop | tor8: ok. let me see if I can do this | 14:06.34 |
tor8 | mvrhel_laptop: git checkout -b lcms2 tor/lcms2 | 14:08.18 |
| or at least that's what I think the command should be | 14:08.25 |
mvrhel_laptop | tor8: that matched the command that tortoise git used | 14:12.46 |
tor8 | mvrhel_laptop: then you want to set it to push to your user repo on casper: | 14:14.31 |
| git push -u mvrhel lcms2 | 14:14.48 |
| the -u sets the 'upstream' flag for pull and push | 14:15.12 |
mvrhel_laptop | ok so for some reason I had to do a hard reset of my lcms2 branch to yours. It had set it to an earlier checkout. | 14:16.22 |
| tor8: so now the git push -u mvrhel lcms2 will ensure that it pushes to my repos on casper? | 14:16.57 |
tor8 | mvrhel_laptop: yes. after the first command with -u you then only need to "git push" and if you're on your local lcms2 branch it will push to mvrhel | 14:17.47 |
| "git push -n" if you're unsure of what it will do | 14:17.56 |
Robin_Watts | tor8: How can I see what the upstream branch for a given local branch is? | 14:23.11 |
mvrhel_laptop | hmm I must have to specify something other than mvrhel in the above command | 14:23.43 |
Robin_Watts | mvrhel_laptop: "origin" | 14:23.59 |
mvrhel_laptop | ok that makes more sense | 14:24.05 |
Robin_Watts | You have "origin" and "golden", right? | 14:24.14 |
mvrhel_laptop | exactly | 14:24.19 |
| Robin_Watts: ok that was it | 14:24.32 |
| thanks | 14:24.35 |
| now lcms2 = origin/lcms2 = tor/lcms2 | 14:25.13 |
| I have to run a rental car back to airport. Had another flight cancelled yesterday. Had to rent a car and drive from chicago to my parents | 14:26.47 |
Robin_Watts | geez. | 14:26.57 |
mvrhel_laptop | I have decided United is the worst airline | 14:27.02 |
| waited 1 hour for them to pull my bags | 14:27.24 |
| they had rebooked me on a flight on wed. | 14:27.35 |
| anyway. I will probably miss part of mupdf meeting but back shortly | 14:27.50 |
Robin_Watts | paulgardiner: Fix looks fine. | 14:31.26 |
| I'm slightly bemused by you using the itr=&head; trick in the second part, but using tail in the top half. | 14:32.08 |
| but it'll work. | 14:32.22 |
henrys | my IRC client showed everyone dropping off IRC and I was the only remaining one in the ghostscript group. I could get a complex from stuff like that. | 14:33.54 |
paulgardiner | Robin_Watts: need ** in the second loop because the error case must remove an item from the list. | 14:34.40 |
Robin_Watts | Right, but the same trick in the first half of the loop (head = NULL; itr = &head; ...... loop { (*itr) = annot; itr = &annot->next }; ) avoids the need for the hairy if in the loop. | 14:35.37 |
paulgardiner | Oh, I see what you are saying. I could have used something similar to avoid the cases in the top | 14:35.44 |
Robin_Watts | personal preference only. I'll push it as is. | 14:35.55 |
paulgardiner | I might change that later - just for consistency | 14:36.45 |
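The `itr = &head` idiom Robin describes can be sketched in isolation. This is a minimal model, assuming a hypothetical `node` type standing in for `pdf_annot`; it is not MuPDF's actual code:

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical node type standing in for pdf_annot. */
typedef struct node { int val; struct node *next; } node;

/* Build a list in order without special-casing the first element:
 * itr always points at the pointer to be filled in next, so the
 * loop body is identical for the head and for every later node. */
static node *build_list(const int *vals, size_t n)
{
    node *head = NULL;
    node **itr = &head;
    for (size_t i = 0; i < n; i++) {
        node *nn = malloc(sizeof(node));
        if (!nn) break;
        nn->val = vals[i];
        nn->next = NULL;
        *itr = nn;
        itr = &nn->next;
    }
    return head;
}

/* The same trick in the error/removal path: unlink the first node
 * matching val without a separate 'prev' pointer or a "hairy if"
 * for the head case. */
static void delete_val(node **headp, int val)
{
    node **itr = headp;
    while (*itr) {
        if ((*itr)->val == val) {
            node *dead = *itr;
            *itr = dead->next;
            free(dead);
            return;
        }
        itr = &(*itr)->next;
    }
}
```

The point of the idiom is uniformity: both loops manipulate a `node **`, so head insertion/removal needs no special case.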
| tor8, Robin_Watts: more on this incremental stuff - pdf_document_s has page_objs and page_refs. An update might cause them to lose sync | 14:39.48 |
| In fact, any structure that holds a non-indirect obj has potential problems. | 14:41.02 |
tor8 | paulgardiner: actually, to reduce document load times we could move to using the page tree structs as is instead of pre-flattening into a page list | 14:42.03 |
| I believe the page_obj is the only place where we actually hold on to objects (that I've written) ... can't remember about the store though, maybe we use some form of pdf_obj as keys there | 14:43.06 |
| navigating the page tree (and name trees, etc) directly without preloading is probably worthwhile doing | 14:43.56 |
| the main reason I moved to flattening them was to reduce the need for error checking back before we had exceptions | 14:44.13 |
paulgardiner | tor8: I think with the store it's the other way round. We use the ref as key in the store where we need to use the actual object | 14:44.13 |
| tor8: why do we have both the objs and refs? (assuming there's a simple explanation that avoids me having to suss it from the code) | 14:45.27 |
tor8 | the page_objs have inherited resources and mediaboxes copied into the dictionary | 14:49.21 |
Robin_Watts_ | oh, that's bad then. | 14:50.16 |
| In incremental mode, we'll always immediately update those into a new section. | 14:50.36 |
tor8 | yeah. I've been meaning to fix it for a couple of years... | 14:50.38 |
Robin_Watts_ | Similarly, I have code that 'marks/unmarks' dicts by adding bools I think. | 14:51.08 |
| I should change those to use the mark/unmark flags. | 14:51.17 |
henrys | 10 of the mupdf meeting | 14:51.18 |
tor8 | Robin_Watts_: the mark/unmark stuff could use the same 'mark' as the garbage collector does | 14:51.39 |
Robin_Watts_ | has tea and is standing by :) | 14:51.46 |
| tor8: indeed. | 14:51.58 |
tor8 | or maybe just another flag (if there's space) called 'seen' | 14:52.18 |
| since that's the most common use | 14:52.22 |
paulgardiner | The work involved in this seems to be expanding | 14:52.31 |
tor8 | I'll make a note of fixing the page tree loading stuff | 14:52.39 |
Robin_Watts_ | paulgardiner: I think this is highlighting some minor areas that need fixing. | 14:53.03 |
tor8 | paulgardiner: that's not necessarily a bad thing, though. | 14:53.27 |
Robin_Watts_ | i.e. they probably ought to be fixed irrespective of this work. this work is just giving us an excuse to do it now. | 14:53.30 |
| tor8: Ah. The code I was half remembering that stashes bools is the "pdf_resources_use_blending" stuff. | 14:55.21 |
| and it's not just 'marked or not'. It's "have I been here before, and if I did I decide true or false". | 14:55.54 |
tor8 | yeah. there may be some places where it's used to prevent infinite recursion on cyclic chains of references | 14:56.28 |
Robin_Watts_ | I think the infinite chains of references can be coped with by the marked/unmarked flags in pdf_obj. | 14:58.13 |
| I *could* cope with the pdf_resources_use_blending by using 2 more bits out of the flags byte. | 14:58.49 |
| 2 are used out of 8 so far. | 14:58.59 |
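The two-extra-bits idea being discussed can be sketched as a flags-byte layout. The flag names here are illustrative assumptions, not the real `pdf_obj` definitions:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative flag layout: two existing bits, plus the two extra
 * bits discussed for caching a "have I been here before, and what
 * answer did I compute" pair (e.g. for resources-use-blending). */
enum {
    OBJ_FLAG_MARKED   = 1 << 0, /* existing: cycle detection    */
    OBJ_FLAG_DIRTY    = 1 << 1, /* existing: placeholder bit    */
    OBJ_FLAG_BM_SEEN  = 1 << 2, /* new: blending answer cached  */
    OBJ_FLAG_BM_VALUE = 1 << 3  /* new: the cached true/false   */
};

typedef struct { uint8_t flags; } obj;

/* Return the cached answer if present; otherwise record the freshly
 * computed one (stubbed here as the 'computed' argument) and return it. */
static bool uses_blending(obj *o, bool computed)
{
    if (o->flags & OBJ_FLAG_BM_SEEN)
        return (o->flags & OBJ_FLAG_BM_VALUE) != 0;
    o->flags |= OBJ_FLAG_BM_SEEN;
    if (computed)
        o->flags |= OBJ_FLAG_BM_VALUE;
    return computed;
}
```

This keeps the cached answer inside the object rather than as a voodoo entry in the dict, which is the trade-off tor8 argues for below.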
kens | wonders who the idiot complaining about pdfopt is, and why he won't just go away.... | 14:59.45 |
Robin_Watts_ | kens: Give him a link in the bug to the final version in git. | 15:00.27 |
tor8 | Robin_Watts_: .useBM is the flag we set there. adding a two-bit (SEEN, and TRUE/FALSE) flag there should be generic enough | 15:00.31 |
chrisl | kens: tell him to report it to the author of the script | 15:00.38 |
Robin_Watts_ | And tell him that he can use it by special dispensation. | 15:00.47 |
| tor8: right, that's what I was thinking. It's a bit bletcherous though. | 15:01.09 |
tor8 | well, so is .useBM :) | 15:01.19 |
Robin_Watts_ | but if you're OK with it... | 15:01.22 |
| tor8: true :) | 15:01.29 |
henrys | we need to learn the starbucks customer service acronym "LATTE" | 15:01.39 |
tor8 | having generic flag bits in the pdf_obj structs that are used for cyclic checking etc seems like a better choice than adding voodoo entries in dicts | 15:02.21 |
henrys | okay meeting - one of the most exciting aspects of my job is I get to report everyone's most important current project to Miles. Now I've incorporated this into the workflowy, please look at the bottom of the agenda and see if you like your high priority job, please. | 15:03.19 |
tor8 | pdf_obj_mark could take an integer argument (and use 8 bits or so) perhaps? | 15:03.29 |
kens | chrisl, Robin_Watts_ I did just point out (again) that he can use the previous version of the program.... And since he is using a script to make files smaller, using pdfopt.ps is insane, since it only ever makes files bigger.... | 15:04.10 |
paulgardiner | henrys: can you post a link? | 15:04.52 |
kens | henrys can I have the URL for the agenda please ? I lost it :-( | 15:04.54 |
kens | ROFL | 15:05.00 |
henrys | dog ate it? | 15:05.10 |
kens | No I just forgot where I put it | 15:05.18 |
Robin_Watts_ | sends link to paul and kens | 15:05.25 |
chrisl | kens: I assumed he was starting with a linearised file, using ps2pdf to make it smaller, and then pdfopt to make it linearised again...... | 15:05.26 |
kens | Thanks Robin_Watts_ | 15:05.31 |
Robin_Watts_ | don't post it here! | 15:05.32 |
| henrys: What do you mean by "Incremental Update"? | 15:05.58 |
kens | chrisl, I think not, I suspect he doesn't understand the script at all, and has no idea what it does. Monkey see, monkey do.... | 15:06.02 |
Robin_Watts_ | Do you mean "progressive loading"? | 15:06.04 |
tor8 | if you post it here, we'll have to take ghostbot out back and put a bullet in its head... | 15:06.05 |
Robin_Watts_ | tor8: Nah, we just use surgery to remove a few neurons. I've had to do that a few times :) | 15:06.31 |
chrisl | kens: well, he's clearly not using pdfopt to make the file smaller, because it will *never* do that! | 15:06.52 |
kens | chrisl, exactly | 15:07.02 |
henrys | one second... | 15:07.12 |
Robin_Watts_ | "incremental update" = something that paul needs for digital signatures, and something that he's pretty much on top of (also it's the subject of the conversations we've been having over the past couple of days on here). | 15:09.24 |
henrys | I sent it to staff on 6/6/2013 10:12 am should I resend? | 15:09.35 |
| Robin_Watts_: I'm sorry yes I was mistakenly under the impression you were doing that. | 15:10.09 |
| I'll fix it. | 15:10.15 |
Robin_Watts_ | I'm happy to help with it (in fact I spent the past couple of days bashing on pdf_objs to make them more amenable to it), but I think Paul is most of the way there without me, and I suspect we'd be treading on each others toes if I was to continue. | 15:11.13 |
henrys | okay changed. | 15:12.22 |
Robin_Watts_ | My main projects should be progressive display or making pdfwrite work better for text (to enable the customer's request for text annotations) | 15:12.45 |
henrys | all I don't think the url will change in the near future so a bookmark should work. | 15:12.51 |
Robin_Watts_ | let me add a cunning link to the dashboard ? | 15:13.20 |
henrys | I guess we could do that but it has to be password protected yes? | 15:14.18 |
Robin_Watts_ | yeah, hence cunning. | 15:14.29 |
henrys | so how are digital signatures? Having it by September is the goal. | 15:15.09 |
| paulgardiner: ^^^ | 15:17.36 |
paulgardiner | I think Sept should be okay. | 15:18.47 |
| This incremental stuff has expanded a bit. | 15:19.13 |
| I had a solution but it relied on picking out just the cases we need for annotations and forms. | 15:19.36 |
Robin_Watts_ | Can people check the dashboard now? The "Work items" link should be password protected by the same password as for bmpcmp. | 15:19.37 |
| If you've opened a bmpcmp window, you may not need to enter a password :) | 15:20.01 |
kens | It is so protected for me | 15:20.02 |
Robin_Watts_ | kens: Thanks. | 15:20.11 |
kens | But opens an empty page.... | 15:20.17 |
Robin_Watts_ | kens: yeah, I haven't put the secret sauce in there yet :) | 15:20.33 |
kens | Ah OK then | 15:20.38 |
henrys | anything to talk about mupdf wise? | 15:21.14 |
paulgardiner | We've been discussing today a more systematic technique. Definitely better in the long run but more work | 15:21.27 |
henrys | tor8? | 15:21.28 |
tor8 | henrys: nothing to add. we've been talking about the incremental update stuff the past few days, but I think we've got a solution figured out now paul just needs to implement it | 15:22.34 |
henrys | paulgardiner: I've been following the discussion a bit, but without detailed understanding. | 15:22.39 |
Robin_Watts_ | kens: Try again now? | 15:23.15 |
henrys | tor8, right I do look forward to your looking at OpenGL, it's something I think we need to start looking at - generally both mupdf and possibly GS | 15:23.27 |
kens | Robin_Watts_ : yes that works for me | 15:23.31 |
Robin_Watts_ | thanks. | 15:23.37 |
paulgardiner | It depends on the priorities. The quickest way to get signatures might possibly be to stick with what I had for incremental update. | 15:23.46 |
tor8 | I've got two items on my urgent quick fix list left, and then I'm back to OpenGL. I have got a new linux box set up for running and debugging opengl stuff. | 15:23.46 |
Robin_Watts_ | paulgardiner: Do you believe that doing it "the nice way" is going to take more than a day or so ? | 15:24.15 |
paulgardiner | No. I think a day or so, but there is the risk that we then see all sorts of things it breaks. | 15:24.46 |
henrys | tor8:oh great. Are you running linux on apple hardware? | 15:24.48 |
tor8 | henrys: no, bought a new i5 machine with the haswell chipset and an nvidia card | 15:25.16 |
Robin_Watts_ | paulgardiner: I think we pursue it for a day or so. If it runs into problems we back hastily away and pretend that those aren't our footprints. | 15:25.19 |
paulgardiner | We've realised a few problems like the page_objs array. I worry there may be more | 15:25.25 |
| Robin_Watts_: that would be my preference. | 15:25.40 |
tor8 | hopefully that covers the bases of all three major gpu vendors (my windows box running amd) | 15:25.40 |
| in my experience with opengl development, you have to be really careful about not tripping over vendor driver bugs :( | 15:26.07 |
Robin_Watts_ | apple runs nvidia, right ? | 15:26.09 |
tor8 | apple runs intel, nvidia and amd chips. but with their own (very slowly updated) drivers | 15:26.37 |
Robin_Watts_ | paulgardiner: Feel free to get me to do donkey work if required. | 15:26.46 |
paulgardiner | I was mostly mentioning the alternative just to be sure henrys was aware of its being a possibility | 15:26.51 |
tor8 | apple pre-lion was only opengl 2.1 | 15:26.57 |
| and now they're all the way up to opengl 3.3 | 15:27.04 |
| (the latest version of opengl is 4.something or other) | 15:27.13 |
Robin_Watts_ | don't tell me, 3.3 is a special apple only version, right? :) | 15:27.26 |
paulgardiner | Robin_Watts_: the "see where we are after I've spent a day on it" plan looks like a good option | 15:27.46 |
tor8 | Robin_Watts_: well ... the apple opengl 3.x versions *don't* have the backwards compatible profile | 15:27.52 |
| so you're either running 2.1 or "core" 3.3 with no backwards compatible stuff at all | 15:28.05 |
Robin_Watts_ | tor8: So, are you going to reimplement the nvidia path rendering from the ground up then? | 15:28.29 |
tor8 | Robin_Watts_: I'll most likely try several approaches | 15:28.55 |
henrys | paulgardiner: we have plenty of time so the "nice way" seems right | 15:28.57 |
paulgardiner | henrys: okay good. | 15:29.06 |
Robin_Watts_ | tor8: I guess start with the nvidia driver on linux and then we can reimplement it if required? And hope that the rest of the world clues up in the meantime. | 15:29.41 |
tor8 | Robin_Watts_: yeah. that was the immediate plan. get something up and running using the nvidia path extensions first. | 15:30.10 |
Robin_Watts_ | tor8: ok, that still leaves the "fun" of implementing blending in shaders. | 15:30.32 |
tor8 | well, first of all get some sort of opengl context setup and a viewer that does page flipping and zooming using GLUT or GLFW or something like that | 15:30.33 |
| Robin_Watts_: the "fun" will be implementing clipping... | 15:30.51 |
Robin_Watts_ | nvidia have code for clipping, I thought ? | 15:31.07 |
henrys | kens:were you laughing about starbucks? | 15:31.18 |
tor8 | they have some crude incomplete examples of clipping using their path rendering | 15:31.30 |
paulgardiner | henrys, Robin_Watts_ : should I be doing anything towards text annotations at this stage? | 15:32.06 |
tor8 | I think that to do general clipping using the stencil mask I'll have to render the clip mask once to a stencil bit plane when pushing the clip, and then clear that bit plane when popping | 15:32.36 |
Robin_Watts_ | paulgardiner: Well, we're waiting to hear back from christophe as to whether they are urgent or not, right? | 15:32.37 |
kens | henrys no, I don't think so, when was that ? | 15:32.50 |
tor8 | the main issue is the limit of 8 bits on the stencil buffers on mobile hardware | 15:32.52 |
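tor8's scheme (render the clip mask into one stencil bit plane on push, clear it on pop) can be modeled without a GL context. This is a toy, single-pixel model of "one bit plane per nested clip" and the resulting 8-level limit on 8-bit stencil hardware; it is not real OpenGL code:

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy model of the stencil clip scheme for one pixel: each nested
 * clip level owns one bit plane of the 8-bit stencil value.
 * Pushing a clip renders the mask into that plane; popping clears
 * it. A pixel passes only if every active plane has its bit set,
 * so an 8-bit stencil buffer caps clip nesting at 8 levels. */
typedef struct {
    uint8_t stencil; /* one pixel's stencil byte */
    int depth;       /* number of clips currently pushed */
} clip_stack;

/* Push a clip; 'inside' says whether this pixel lies inside the new
 * clip path. Returns false once the 8-plane budget is exhausted. */
static bool clip_push(clip_stack *cs, bool inside)
{
    if (cs->depth >= 8)
        return false;
    if (inside)
        cs->stencil |= (uint8_t)(1u << cs->depth);
    cs->depth++;
    return true;
}

static void clip_pop(clip_stack *cs)
{
    if (cs->depth > 0) {
        cs->depth--;
        cs->stencil &= (uint8_t)~(1u << cs->depth); /* clear the plane */
    }
}

/* Visible iff the pixel is inside all currently pushed clips. */
static bool clip_test(const clip_stack *cs)
{
    uint8_t active = (uint8_t)((1u << cs->depth) - 1u);
    return (cs->stencil & active) == active;
}
```

In real GL the push would be a masked stencil write (`glStencilMask`/`glStencilOp`) and the test a `glStencilFunc` comparison; the bit-budget arithmetic is the same.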
paulgardiner | Yeah, I haven't seen a reply. | 15:33.10 |
henrys | you said ROFL I didn't know what you laughing about. | 15:33.22 |
kens | Oh, maybe I typed in the wrong window.... | 15:33.36 |
Robin_Watts_ | Listen to the customer | 15:33.50 |
| Ask what the issue was | 15:33.52 |
| Take the time to... listen? Haha | 15:33.54 |
| Thank the customer | 15:33.55 |
| Encourage them to come back | 15:33.57 |
| ? | 15:33.58 |
henrys | yea I should add it to the agenda ;-) | 15:34.32 |
kens | ah henrys I was ROFL because paulgardiner also asked for the agenda URL | 15:34.33 |
Robin_Watts_ | Loser At The Till Emergency? | 15:34.45 |
henrys | well L could be "lunge" - A "attack" ... | 15:36.10 |
Robin_Watts_ | Look At The Total Eejit. | 15:36.37 |
henrys | 7 past the meeting - see you in 23 minutes. | 15:37.57 |
| paulgardiner: looks like you might have a gs customer problem, ugh | 15:38.25 |
paulgardiner | cries | 15:38.55 |
| What's that then? Is there a communication from Raed that hasn't gotten through to me yet? | 15:40.15 |
chrisl | henrys: maybe we should suggest to marcosw that, at this stage, I might as well do a "pre-release" of the commercial code for Raed | 15:40.36 |
Robin_Watts_ | I don't think so. I think the problem is probably that Raed is incapable of applying patches. | 15:40.57 |
henrys | well let's see why paulgardiner isn't getting this he must not be on support | 15:41.37 |
Robin_Watts_ | henrys: Has there been a mail since this morning? | 15:41.51 |
paulgardiner | I have one where Marcos is sending a patched gp_win32.c | 15:42.13 |
Robin_Watts_ | last mail I saw was 07:19 our time. | 15:42.19 |
| "I used it but now I’m getting the following linking error" | 15:42.30 |
chrisl | Robin_Watts_: what's more annoying is that he's apparently not willing to *try* the GPL code to see if these changes satisfy his needs :-( | 15:42.33 |
Robin_Watts_ | chrisl: Then tell him to wait til august. | 15:42.56 |
| the email was sent to Marcos, Paul and Support. | 15:43.23 |
chrisl | Robin_Watts_: that would be my preference, but marcosw will probably feel different..... | 15:43.31 |
Robin_Watts_ | So it would be odd for Paul not to see it at all. | 15:43.34 |
henrys | well let's let marcosw sort through it first. | 15:43.34 |
| let me double check he's on support anyway. | 15:43.53 |
chrisl | At the moment this isn't a customer problem for paulgardiner | 15:44.04 |
Robin_Watts_ | The short answer to the email is "yes" | 15:44.39 |
| he should change the condition as he suggests, cos that's what we have in our repo. | 15:45.07 |
paulgardiner | So far today, I have an email from Joann timed 1:02 and a couple from Orikasa around 9:00 | 15:45.09 |
henrys | paulgardiner does seem to be on the support mailing list | 15:46.03 |
chrisl | I reckon paulgardiner has already blacklisted Raed....... ;-) | 15:47.16 |
henrys | but my last email from Raed was 9 hours ago. | 15:47.36 |
paulgardiner | :-) | 15:47.43 |
henrys | paulgardiner: you are explicitly in Raed's recipient list along with support. | 15:49.53 |
kens | spam blacklist like Chris said :-) | 15:50.23 |
mvrhel_laptop | ok I am back | 15:51.27 |
| henrys: sorry that I missed the mupdf meeting | 15:51.34 |
henrys | paulgardiner: I assume you have seen some mail addressed to the artifex domain? | 15:51.53 |
mvrhel_laptop | I am going to get started on the ICC stuff w.r.t. mupdf and the windows phone app | 15:52.06 |
paulgardiner | yeah, I have some later messages | 15:52.14 |
henrys | mvrhel_laptop: hi np about the meeting, I'm barely here myself. | 15:52.33 |
| ;-) | 15:52.46 |
chrisl | paulgardiner: IIRC, we did have some problems with Raed's mail being spam-binned a while back, it might be worth checking the gmail spam folder | 15:53.06 |
henrys | it's fine if it is just spam but if paul's artifex address is not getting used and he's getting some stuff because it is going to the old domain that's not so good. | 15:53.51 |
paulgardiner | I looked at my logs. Couldn't see anything suspicious around 7:19 | 15:54.23 |
henrys | old domain == laser-point | 15:54.35 |
chrisl | kens: the linux/unix file enumeration code *is* supposed to recurse into folders, but it looks like a couple of typos, and an outright mistake prevent it working | 15:55.11 |
Robin_Watts_ | I just forwarded paul a copy of the email to his artifex address and it arrived. | 15:55.13 |
| so his artifex address *is* working. | 15:55.24 |
kens | chrisl, well at least I wasn't completely mad :-) | 15:55.32 |
| chrisl hopefully it won't be too much effort to fix it. | 15:55.50 |
Robin_Watts_ | It's always possible that the mail daemon somewhere along the line had a problem, and it'll arrive in 24 hours time :) | 15:56.02 |
henrys | Robin_Watts_:I was just doing the same … so I'll stop | 15:56.14 |
chrisl | kens: no, but whoever wrote this was bordering on madness - *so* close, but clearly not tested :-( | 15:56.24 |
Robin_Watts_ | henrys: The answers to Raeds questions are "no" and "yes". | 15:56.44 |
| He clearly doesn't have the latest version of iapi.c | 15:56.52 |
kens | chrisl yeah that's what really puzzled me, on first inspection it looked like it 'ought' to work. but a very simple test showed it didn't | 15:56.54 |
Robin_Watts_ | whether that's because it's badly patched or whether the patch was bad, I can't say. | 15:57.09 |
chrisl | kens: the descend into directory section was gated on whether the path was too long, but it was comparing it to the length of the pattern string, not the scratch string | 15:57.52 |
kens | Oh, oops that was never going to work | 15:58.10 |
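The class of bug chrisl describes (gating the descend-into-directory step on the wrong string's length) can be sketched as follows. The function names and buffer handling are illustrative, not the actual Ghostscript enumeration code:

```c
#include <stddef.h>
#include <string.h>
#include <stdbool.h>

/* Before descending into a subdirectory, check that dir + "/" + name
 * fits the scratch buffer the full path will be built in. The bug
 * described above measured against the length of the *pattern*
 * string instead of the capacity of the *scratch* buffer, so the
 * gate passed or failed for the wrong reason. */
static bool can_descend(const char *dir, const char *name,
                        size_t scratch_size)
{
    /* +1 for the '/' separator, +1 for the terminating NUL */
    return strlen(dir) + 1 + strlen(name) + 1 <= scratch_size;
}

/* Build the combined path only when it fits; true on success. */
static bool build_path(char *scratch, size_t scratch_size,
                       const char *dir, const char *name)
{
    if (!can_descend(dir, name, scratch_size))
        return false;
    strcpy(scratch, dir);
    strcat(scratch, "/");
    strcat(scratch, name);
    return true;
}
```

The copy is safe only because the length check precedes it, which is exactly why comparing against the wrong string is an outright mistake rather than a cosmetic one.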
henrys | Raed must be sorted and filtered by marcosw before being considered by the rest of engineering ;-) | 15:58.15 |
chrisl | paulgardiner: I think gmail spam filtering happens even if you have it forwarding to another address, so I'd check that, just in case | 15:58.44 |
kens | wonders if Henrys is trying to get marcos to resign.... | 15:58.50 |
henrys | marcosw loves that stuff. | 15:59.05 |
chrisl | He must do, otherwise he'd have said "wait until August"! | 15:59.29 |
paulgardiner | chrisl: Oh yes. It might have gotten trapped in gmail's spam filtering before forwarding | 16:00.10 |
henrys | we do waste a lot of time if multiple folks get embroiled in this stuff before marcosw has a discussion with the user and separates signal and noise. | 16:00.27 |
| gs meeting | 16:01.02 |
| all please check the workflow and see that your current high priority project is correct. Read back in the logs for what motivates this. | 16:02.01 |
paulgardiner | Yeah. There it is: stuck in gmail's spam filter | 16:02.19 |
henrys | s/workflow/workflowy/ the link was sent 6/6 - tech agenda stuff. | 16:02.35 |
| don't post the link here. | 16:02.45 |
Robin_Watts_ | (or it's linked from the dashboard now) | 16:03.22 |
ray_laptop | henrys: you mean we have to look at the tech agenda between meetings ? I liked it the old way where we could ignore things until just before the next meeting ;-) | 16:04.02 |
mvrhel_laptop | :) | 16:04.08 |
chrisl | ray_laptop: no, this is better - it reminds us *what* we're ignoring ;-) | 16:04.42 |
henrys | right quarterly idiocy was too infrequent | 16:05.00 |
kens | Now we can have weekly idiocy instead :-) | 16:05.24 |
chrisl | I, for one, incorporate idiocy into my daily routine, so........ | 16:06.01 |
kens | If nobody else has anything, there's a couple of bugs I'd like to talk about | 16:06.16 |
henrys | actually you can still ignore things. I'll update the priority project (Miles' thing) at the meeting. | 16:06.26 |
mvrhel_laptop | henrys: I think we decided the phone viewer had higher priority than the windows desktop viewer, or is that item the same? | 16:06.38 |
ray_laptop | kens: you can go before me. | 16:06.40 |
kens | :-) | 16:06.44 |
| First is #430175 | 16:06.49 |
henrys | a golden oldie | 16:06.58 |
ray_laptop | but I do have something after. | 16:06.58 |
kens | yeah, I think we should close it as 'wontfix', but I'd like to hear other opinions, especially Ray | 16:07.19 |
| Mostly because he commented on it 8 years ago... | 16:07.47 |
mvrhel_laptop | I would also add icc to mupdf as a priority and of course the color bugs are important. I am making some progress on the overprint simulation | 16:07.48 |
ray_laptop | kens: if you type "bug 430175" my irc tool will let me link to it (it thinks things that start with # are channel addresses) | 16:07.53 |
mvrhel_laptop | there are some strange issues going on in the ghent tests though | 16:08.07 |
kens | Hmnm, OK ray | 16:08.11 |
| will try that on the next one | 16:08.17 |
| or try 430175 | 16:08.28 |
| bug 430175 | 16:08.36 |
henrys | mvrhel_laptop: will do. | 16:08.41 |
mvrhel_laptop | ok. thanks. sorry to interject in your discussion kens | 16:09.00 |
kens | NP mvrhel_laptop | 16:09.06 |
Robin_Watts_ | kens: Chatzilla keys on the "bug <number>' or 'bug #<number>' | 16:09.09 |
kens | Robin_Watts_ : neither works on Miranda | 16:09.22 |
| I could probably do something about that... | 16:09.42 |
henrys | copy paste the url will work with all clients | 16:09.45 |
ray_laptop | Robin_Watts_: chatzilla doesn't need the # character | 16:09.46 |
henrys | I would think | 16:09.49 |
Robin_Watts_ | ray_laptop: Right, but it tolerates it, I believe. | 16:09.59 |
kens | Anyway, the point is this has been idle for 8 years, so my feeling is it's not important enough to adopt (or we would have done it) | 16:10.15 |
ray_laptop | kens: does it look OK to stick in the toolbin ? | 16:10.46 |
Robin_Watts_ | kens: IF we do fix it, it should be at the device level, not at the input ps level. | 16:10.46 |
kens | It's also debatable whether the author can still be contacted (I'm having this problem with bug 226943 as well) | 16:10.47 |
| Robin_Watts_ : There's no way we're going to do imposition at the device level | 16:11.07 |
mvrhel_laptop | I agree it should be at the device level | 16:11.07 |
ray_laptop | kens: true. We can't include it without a CLA | 16:11.11 |
Robin_Watts_ | i.e. do stuff like "fineprint" does on windows. | 16:11.12 |
mvrhel_laptop | really? | 16:11.15 |
ray_laptop | Robin_Watts_: what's fineprint ? | 16:11.28 |
Robin_Watts_ | http://fineprint.com/ | 16:11.46 |
kens | Given there are commercial solutions (Quite Imposing, for instance) I'm reluctant to do anything with this PostScript solution. | 16:12.44 |
Robin_Watts_ | Installs a windows printer. You print to it, and you get a series of pages that you can view in memory and then reprint as pamphlets, odd pages, even pages etc. | 16:12.44 |
| Once we get savepage working, then it becomes possible to think of doing it at the device level nicely. | 16:13.17 |
ray_laptop | does it use Ghostscript ? | 16:13.20 |
Robin_Watts_ | ray_laptop: No. | 16:13.26 |
ray_laptop | savepage is what I want to talk about | 16:13.33 |
kens | Can we come back to that ? | 16:14.00 |
ray_laptop | kens: yes (you first) | 16:14.09 |
Robin_Watts_ | (Well, we're limited in that the clist is resolution specific so we'd need to pick scales etc beforehand, but...) | 16:14.11 |
kens | It doesn't sound like anyone is massively in favour of a PostScript solution | 16:14.39 |
| So I'd like to 'wontfix' it, any objections? | 16:14.52 |
Robin_Watts_ | I agree to close it as is. | 16:14.56 |
henrys | I'd like it closed as wontfix | 16:14.58 |
kens | OK my other one is a bug today | 16:15.10 |
| bug 694374 | 16:15.16 |
Robin_Watts_ | Maybe open an enhancement for imposition support that we can all ignore together? | 16:15.22 |
kens | It seems to process slowly (does on Acrobat too) but I cannot see why. Profiling it shows no specific hot spots (10% in decompression, 10% memory stuff are the highest) | 16:15.52 |
| But MuPDF can process the file in 1 second (2 minutes for gs) | 16:16.09 |
Robin_Watts_ | kens: Does this have lots of nested image tiles in it? | 16:16.38 |
kens | Obviously I can recommend MuPDF to the reporter, but can someone else take a poke at it and see if they can figure out why it's so slow please ? | 16:16.40 |
| Robin_Watts_ : Err, maybe | 16:16.51 |
Robin_Watts_ | I had a file that was a google maps type thing. | 16:17.01 |
kens | Its got lots of shadings | 16:17.02 |
henrys | I'll just put this out: let's just wontfix anything that is ridiculously old. Thoughts? | 16:17.10 |
kens | henrys I'm mostly in favour. I would like to do the ramfs one though | 16:17.26 |
| If I can contact the patch authour | 16:17.41 |
Robin_Watts_ | and they plotted the lowest level tile, then higher res tiles on top of it etc. | 16:17.42 |
kens | Robin_Watts_ : no I don't think its like that | 16:17.51 |
Robin_Watts_ | ok. | 16:17.58 |
kens | it *is* a stupidly constructed file, forms nested up to 7 levels deep | 16:18.09 |
| Which 'might' be the problem, but I can't see it on a profiler | 16:18.23 |
ray_laptop | I think we had a gs enhancement to port the (much faster) mupdf shading over to gs and replace the mess that gs has | 16:18.24 |
Robin_Watts_ | Is there transparency in the file ? | 16:18.28 |
kens | Robin_Watts_ : there is a page group, but removing it made no difference, the other elements are all opaque as far as I can tell | 16:18.49 |
Robin_Watts_ | ray_laptop: I am currently in charge of ignoring that enhancement I believe. | 16:18.50 |
henrys | I'm thinking everyone should look at their bugs and anything over 5 years without comment and of questionable value should be closed maybe as "LATER" | 16:18.56 |
kens | henrys I'm kind of doing that already, though I am putting a request for comment in some threads just in case | 16:19.29 |
henrys | agreed? | 16:19.34 |
ray_laptop | kens: -Z: will tell you if the clist found any actual transparency in any bands | 16:19.51 |
kens | I'm in favour, these ancient bugs are terrible | 16:19.52 |
ray_laptop | henrys: agreed | 16:20.01 |
kens | ray OK I'll give that a try, thanks | 16:20.08 |
chrisl | henrys: fine by me | 16:20.16 |
henrys | I'm sure you can create a query | 16:20.51 |
kens | OK I'm done. Ray's turn | 16:20.51 |
mvrhel_laptop | I only have a couple that are that old. I need to check this one though and see if this is for real Bug 689792 | 16:21.40 |
| from 8.62 ..... | 16:21.40 |
ray_laptop | for cust 801's issue #16 they want to re-order pages. | 16:22.22 |
| and the save page is the cleanest approach IMHO | 16:22.41 |
| I was thinking about adding it to the gx_device_printer class so all page devices that use the clist can use it | 16:23.17 |
kens | This is saving the clist after it's generated ? Not the page buffer | 16:23.17 |
ray_laptop | kens: right, the clist | 16:23.27 |
mvrhel_laptop | that would be useful | 16:23.46 |
ray_laptop | that way for them the job starts faster since clist writing is faster than rendering | 16:23.58 |
kens | Making it available to all devices sounds good. Note that the high level devices don't use the clist though.... | 16:23.59 |
| ray_laptop : using -Z: on that file the clist didn't mention transparency at all | 16:24.40 |
ray_laptop | what do you all think of having it invoked using printer device params: i.e., save_pages={memory file off flush} | 16:25.24 |
mvrhel_laptop | I like that idea | 16:25.58 |
ray_laptop | kens: page16 doesn't have transparency (according to pdf_info.ps) | 16:26.00 |
henrys | ray_laptop:what about resolution dependency? Will you know the final resolution always when the clist is created? | 16:26.09 |
kens | ray_laptop : yes I had already tried that too | 16:26.12 |
Gigs- | I don't know if it's really in scope for gs, but I can tell you from the prepress side of things, imposition software sucks and you probably could tap a market opportunity. | 16:26.16 |
henrys | Gigs-:you should become an OEM, make millions, and give us a cut. | 16:27.00 |
Gigs- | I've thought about it | 16:27.10 |
ray_laptop | then to print the saved pages use: -sSavedPagesPrint=string | 16:27.16 |
| there is a guy who sells PDF imposition s/w that was in a booth next to us at a show a few years back. | 16:28.02 |
kens | Probably Aandi Inston of Quite Software | 16:28.16 |
| Quite Imposing is supposed to be good (I have never used it) | 16:28.37 |
ray_laptop | I was thinking the string could be either a list of pages, page ranges, or some common keywords like 'normal' 'reverse' 'even' 'odd' | 16:29.06 |
kens | Hmm, complex parsing, but it sounds good | 16:29.23 |
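The "complex parsing" ray_laptop proposes (comma-separated pages and ranges, or keywords like 'normal', 'reverse', 'even', 'odd') can be sketched compactly. This is a hypothetical illustration of the idea, not the parser Ghostscript actually uses:

```c
#include <stdio.h>
#include <string.h>

/* Parse a page-selection string such as "1-3,7", "reverse", "even"
 * or "odd" against a document of num_pages pages. 'out' receives
 * 1-based page numbers in print order. Returns the count written,
 * or -1 on a malformed or out-of-range string. Illustrative only. */
static int parse_pages(const char *s, int num_pages, int *out, int max_out)
{
    int n = 0, i;
    if (strcmp(s, "normal") == 0) {
        for (i = 1; i <= num_pages && n < max_out; i++) out[n++] = i;
        return n;
    }
    if (strcmp(s, "reverse") == 0) {
        for (i = num_pages; i >= 1 && n < max_out; i--) out[n++] = i;
        return n;
    }
    if (strcmp(s, "even") == 0) {
        for (i = 2; i <= num_pages && n < max_out; i += 2) out[n++] = i;
        return n;
    }
    if (strcmp(s, "odd") == 0) {
        for (i = 1; i <= num_pages && n < max_out; i += 2) out[n++] = i;
        return n;
    }
    /* otherwise: comma-separated numbers and lo-hi ranges */
    while (*s) {
        int lo, hi, used;
        if (sscanf(s, "%d-%d%n", &lo, &hi, &used) == 2)
            ; /* got a range */
        else if (sscanf(s, "%d%n", &lo, &used) == 1)
            hi = lo; /* single page */
        else
            return -1;
        if (lo < 1 || hi > num_pages || lo > hi)
            return -1;
        for (i = lo; i <= hi && n < max_out; i++) out[n++] = i;
        s += used;
        if (*s == ',') s++;
        else if (*s) return -1; /* junk after a term */
    }
    return n;
}
```

A driver would then replay the saved clist pages in the order this returns, e.g. for `-sSavedPagesPrint=reverse` on a 10-page job, pages 10 down to 1.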
henrys | ray_laptop:the resolution dependency in the clist makes it ineffective as a viewer format I believe. | 16:29.25 |
ray_laptop | kens: that's who it was | 16:29.31 |
| henrys: this is for printing apps, not the viewer | 16:29.48 |
kens | ray_laptop : I used to meet him a lot, last I heard he was living in what used to be a Victorian hotel on the shores of a loch in Scotland.... | 16:30.01 |
henrys | ray_laptop:so printer is low on memory and wants to fall back to a lower resolution - then what? | 16:30.22 |
ray_laptop | viewer devices aren't gx_device_printer types | 16:30.31 |
kens | henrys what other alternatives are there which are not resolution dependent ? | 16:31.00 |
ray_laptop | henrys: saved_pages are not for small memory machines (primarily intended for disk based) | 16:31.11 |
| but among other things we could print collated copies | 16:31.37 |
Robin_Watts_ | The clist being a resolution dependent device is a killer for some things. | 16:32.09 |
ray_laptop | kens: if the input is PDF, we can print any order directly from the PDF and not need the clist | 16:32.13 |
henrys | kens:I think most display list formats in use are resolution independent - tor8 or Robin_Watts_ correct me if I'm wrong | 16:32.24 |
Robin_Watts_ | We *could* write a "serialiser" device. | 16:32.24 |
kens | ray_laptop : yes certainly, I was thinking more of PostScript and PCL. For XPS and PDF the file format effectively is the display list | 16:32.41 |
Robin_Watts_ | Something that just serialises gs device calls to a file. | 16:32.42 |
ray_laptop | we _could_ silently convert the non-pdf input to pdf and then do it | 16:32.51 |
Robin_Watts_ | Then we have a new input format for gs. | 16:33.02 |
| Or we could use pdf as that serialised format. | 16:33.29 |
| I've wanted to write a 'pdfmerge' using mupdf for ages. | 16:33.56 |
kens | Robin_Watts_ : gs device calls are not resolution independent I think | 16:33.58 |
henrys | right - that's what I believe Apple has done with Quartz more or less. | 16:34.00 |
ray_laptop | Robin_Watts_: sort of a variation on the 'txtwrite' device ?? | 16:34.01 |
Robin_Watts_ | Something that produces a pdf out by taking 1 or more pages from various pdfs in. Where each page out can optionally come from several pages in. | 16:34.53 |
chrisl | Robin_Watts_: device call serialisation falls apart a little bit with the way text is done in gs - i.e. not as a device method. | 16:35.26 |
ray_laptop | most device calls use 'fixed' coordinates, which have an 8-bit fraction | 16:35.28 |
henrys | but I'm not sure if we are getting too ambitious here and beyond the customer schedule horizon. | 16:35.29 |
Robin_Watts_ | chrisl: Ugh, ok. | 16:35.41 |
chrisl | Robin_Watts_: unless we fix that ;-) | 16:35.53 |
Robin_Watts_ | Using pdf as the intermediate format would be a sensible step, I suspect. | 16:35.57 |
kens | don't forget images, they have an enumerator too | 16:36.06 |
Robin_Watts_ | And then we can use a mupdf based tool to do the page merging. | 16:36.13 |
| which is a MUCH less insane route. | 16:36.22 |
ray_laptop | henrys: true. The -sSavePages=memory -sSavedPagesPrint= would be pretty fast | 16:36.23 |
| since much of it is already there. | 16:36.38 |
henrys | but I think you would get a lot more participation, ray_laptop if you didn't use the dreaded clist. | 16:37.24 |
ray_laptop | the problem with pdf as the page format is the speed of pdfwrite I think, then needing to re-parse it, but let me do some comparison timings | 16:37.53 |
| henrys: I agree, and I am drawn to the pdf intermediate approach. | 16:38.17 |
kens | pdfwrite will not be hugely fast, and will require a decent amount of memory and disk space | 16:38.23 |
| But it's possible it's not much worse than the clist. | 16:38.45 |
ray_laptop | kens: for this customer, memory and disk space aren't an issue, but SPEED is | 16:38.53 |
kens | Again, please don't forget that some elements may be rendered | 16:38.59 |
henrys | I think it should be a new middle level device like pxlcolor level - images, vectors, but not fonts. | 16:39.07 |
chrisl | kens: I meant to say, I've tried sending a mail through sourceforge to the ramfs guy earlier today - didn't bounce (yet), but haven't heard back yet either. | 16:39.15 |
kens | chrisl I've heard nothing yet either I'm afraid | 16:39.31 |
| I may end up having to reimplement it | 16:39.38 |
ray_laptop | kens: the clist writing is pretty quick, but rendering from the clist is MUCH more efficient, particularly since the pages will have to be written to clist first, then rendered | 16:39.57 |
henrys | chrisl:have you overcome the obstacle with the new directory structure. Is there something to discuss? | 16:40.02 |
chrisl | henrys: nothing to discuss, I know vaguely where I'm going now | 16:40.24 |
ray_laptop | henrys: the problem with any device is that it won't be resolution independent as PDF is | 16:40.42 |
henrys | sort of out of it today. Had to put my dog down yesterday after 13 years, been pretty mopey... | 16:41.04 |
kens | is sorry to hear that :-( | 16:41.17 |
henrys | amazing how attached you get. | 16:41.36 |
ray_laptop | henrys: sorry to hear that. My dog finished at 15 years last year. | 16:41.45 |
Robin_Watts_ | henrys: Crumbs. Sorry. It's a horrible thing to have to do, but (if it's anything like mine was) you know it was the right time, and for the best. | 16:41.57 |
chrisl | henrys: sorry hear that - went through the same thing with my cat back in February | 16:42.13 |
ray_laptop | recalls the conference calls with henrys' dog yapping in the background :-) | 16:42.25 |
henrys | yes it was the right time, thanks everybody. | 16:42.27 |
| yes he was vocal. | 16:43.10 |
| ray_laptop:so should we recruit help to your project or do you want to scope it out first? | 16:44.08 |
ray_laptop | henrys: I was thinking I could finish my approach with the clist saved pages this week. | 16:44.39 |
| given the initial look I've had at what is already there | 16:45.01 |
kens | It should be easy to test the conversion speed to PDF as a comparison from that | 16:45.04 |
henrys | I really do hope we can think about a pdf intermediate format - I think that would have some value. | 16:45.18 |
kens | I'm willing to bet the pdfwrite then render approach will be slower | 16:45.19 |
ray_laptop | also -sSavePages=file will let us move to a separate process to render the pages | 16:45.30 |
kens | I have to go off and cook, goodnight all | 16:45.44 |
henrys | kens:I think we have to think about something much simpler than pdfwrite | 16:45.50 |
chrisl | Robin_Watts_: do you know if there is a way to contact a user through github? | 16:45.51 |
ray_laptop | and if the place we write the saved page files to is a pipe, we have a queue on disk | 16:46.07 |
Robin_Watts_ | chrisl: Not offhand. | 16:46.15 |
kens | henrys a PDF output that's simpler than pdfwrite ? What were you thinking about as the content ? | 16:46.17 |
ray_laptop | nite, kens | 16:46.19 |
| henrys: for the customer, I will look at the speed of pdfwrite and compare to clist writing. If pdfwrite is close, then I'll discuss it with everybody. | 16:48.30 |
| henrys: but doing anything 'from the ground up' might take too long (if it is a 'high level' type device) | 16:49.17 |
| henrys: and probably won't be much faster than the pdfwrite approach. | 16:50.31 |
henrys | ray_laptop:I believe the memory and temporary file space might be quite significantly different with lower level pdf. | 16:51.20 |
| I'm really just thinking you don't need fonts | 16:51.30 |
ray_laptop | although some of the pdfwrite stuff to decide on image compression method should be disabled if we are interested in it as an intermediate format | 16:51.38 |
henrys | but I'm speculating without any hard data. | 16:52.21 |
Robin_Watts_ | ray_laptop: It would be really nice if we could find some way of passing images through without unpacking/repacking them. | 16:52.28 |
ray_laptop | henrys: so something like the old pswrite that did fonts as bitmaps (obviously NOT ps, but some format easier to parse). | 16:52.40 |
mvrhel_laptop | bbiaw | 16:52.53 |
ray_laptop | Robin_Watts_: sorry, but that's a new device interface | 16:52.58 |
| but for an intermediate format we could skip compression (the clist doesn't compress images) | 16:53.31 |
Robin_Watts_ | Off the top of my head, could we do it by having a new image enumerator type offered as the first option? It would check to see if the device underneath it (pdfwrite) could accept images without repacking. If it could, then it would claim the call. If not, it would pass on, and the existing enumerator things would handle it. | 16:54.25 |
ray_laptop | Robin_Watts_: the image enumerator gets decompressed data | 16:55.04 |
Robin_Watts_ | At the moment we have things like the interpolator enumeratee (is that the right term) that can either say "I'll do it" or "pass". | 16:55.12 |
| ray_laptop: hmm, ok. | 16:56.01 |
ray_laptop | all of the decompression is done by the parser. The graphics library doesn't get to see the original data | 16:56.13 |
| Robin_Watts_: but from the performance standpoint, as long as we only decompress once, we're OK | 16:56.52 |
henrys | yes like pswrite I always think of pxlcolor because that's my world but they are at a similar level. | 16:56.57 |
Robin_Watts_ | We could add a new device interface that the parser could try calling first ? | 16:57.05 |
ray_laptop | but moving less data into the intermediate format _would_ be nice | 16:57.14 |
Robin_Watts_ | and if the device function is there, it could avoid decompressing? otherwise it would fallback and go the normal route? | 16:57.31 |
henrys | chrisl:the urw fonts have arrived where do you want them? | 16:57.32 |
| sorry forgot to say that at the meeting. | 16:57.49 |
Robin_Watts_ | It's pretty shocking that in all this time, gs can't pass through images without recompression. | 16:57.53 |
chrisl | henrys: you can either mail them, or put them up on casper - either is fine | 16:58.10 |
ray_laptop | Robin_Watts_: it passes images through uncompressed. It's device specific whether or not the image is recompressed, but obviously pdfwrite wants to compress | 16:58.52 |
| Robin_Watts_: but rectifying that in the graphics library is a different enhancement bug | 16:59.20 |
Robin_Watts_ | ray_laptop: Right, but we *should* be able to allow images to go through compressed. | 16:59.28 |
henrys | chrisl:they're on casper. | 16:59.28 |
Robin_Watts_ | I added that to mupdf recently. | 16:59.35 |
chrisl | henrys: thanks, I'll deal with them when I have time | 16:59.53 |
ray_laptop | Robin_Watts_: good. But in order to render the pages, mupdf is still "dumb" in that it decompresses the entire pixmap, right ? | 17:00.15 |
Robin_Watts_ | so for pdfwrite or svgwrite etc, we can write out the source image data without needing to recompress (and hence with no loss of quality) | 17:00.18 |
henrys | chrisl:yes the directory stuff is probably more important. | 17:00.30 |
Robin_Watts_ | ray_laptop: Yes. That's not a trivial thing to lift. | 17:00.50 |
| We can cope with decoding at subsample levels. | 17:01.03 |
ray_laptop | Robin_Watts_: right, it's obviously better for lossy compression to avoid the round-trip | 17:01.04 |
henrys | bbiab | 17:01.13 |
Robin_Watts_ | but we don't do banded decompression. | 17:01.13 |
chrisl | henrys: it shouldn't take long to sort out the fonts, I still have all the stuff from when I collated the report | 17:01.15 |
Robin_Watts_ | (partly because banded operation is something we don't really do in mupdf - I mean, we support it, but no one uses it like that) | 17:01.34 |
henrys | chrisl:really the fonts should go in soon - not just before the release. | 17:01.44 |
| like now or after august would be my vote. | 17:01.57 |
ray_laptop | Robin_Watts_: if you are zoomed into an image so the area needed is a subset of the image, does mupdf still require the full pixmap ? | 17:02.06 |
Robin_Watts_ | Also, the image decompressors we use don't generally have a mechanism for decoding a subsection. | 17:02.31 |
chrisl | ray_laptop: I wondered if there would be a way to add an interpreter callback to allow the graphics library to access the original data stream for the image, and access whatever point in the filter chain it needed. | 17:02.43 |
ray_laptop | votes for doing it now, it's far enough ahead of the release | 17:02.43 |
Robin_Watts_ | In some cases we could decode the whole thing and throw away the data not in the area we need, but that's still a lot of wasted work. | 17:03.12 |
ray_laptop | Robin_Watts_: JPEG is pretty common and is block oriented, right ? | 17:03.24 |
Robin_Watts_ | ray_laptop: Yes, mupdf still decodes the entire pixmap. | 17:03.24 |
chrisl | I'll do the fonts before the end of week - just not looking at it when I have to finish in <10 minutes | 17:03.30 |
ray_laptop | but I agree that Flate compressed streams aren't | 17:03.42 |
Robin_Watts_ | But very few JPEGs use the restart interval thing. | 17:03.43 |
| i.e. you need to decompress the whole bloody thing at least up to the point at which you've got enough. | 17:04.01 |
henrys | chrisl: sounds good | 17:04.10 |
| bbiab | 17:04.16 |
Robin_Watts_ | With progressive decoding it's even worse - you need to decode the whole image anyway, you can't (without some crufty extensions to jpeglib that I wrote for my previous job and are hence not available to us now) decode multiple strips from a progressive jpeg without restarting each time. | 17:05.23 |
| openJPEG requires the whole image in memory anyway. | 17:05.44 |
| faxes *could* be done line by line. | 17:05.59 |
| but then you get one that's flipped the wrong way up, and all your hard work is out the window. You need to go back to decoding on a per band basis. | 17:06.45 |
| When we get a customer complaint about image decoding memory size, we'll worry about it then :) | 17:07.21 |
ray_laptop | Robin_Watts_: sounds like a good approach | 17:08.28 |
Robin_Watts_ | ray_laptop: But surely it must be possible to add a mechanism to gs for passing compressed images across? | 17:09.31 |
| If the parser hits a jpeg compressed image, call .jpegimage or something. That would try to pass the image data compressed - and if the device doesn't support it, we'd fall back to current code. | 17:10.50 |
| PDF could be made to work like that, right? | 17:12.47 |
| It's PS that would be harder. | 17:12.52 |
| In PS we'd need to magically spot that the stream being fed to an image operation was a DCTDecode one and use the jpegimage call then instead. | 17:13.45 |
| and maybe we can't cope in all cases, but we should be able to cope in the common ones? | 17:14.07 |
ray_laptop | Robin_Watts_: Adding a special treatment for JPEG (and maybe JPX) may be useful for h.l. devices | 17:15.06 |
chrisl | Robin_Watts_: a.n.other RIP that I know of used a "fork" filter, which was pushed *first* in any image filter chain, then any interested party could find the fork filter, and pull the un-molested data from there - needed careful handling for the buffering, though. | 17:16.00 |
ray_laptop | Robin_Watts_: but even doing it for Flate compressed would make pdfwrite faster since we could just collect the data | 17:16.19 |
Robin_Watts_ | ray_laptop: Right. | 17:16.25 |
| chrisl: First is not always right though. | 17:16.57 |
| Imagine I have: ASCII85Decode then DCTDecode | 17:17.18 |
| You'd want the fork filter to be pushed just before DCTDecode. | 17:17.36 |
| In the PDF agent for my previous job, I 'shortstopped' the filters just before the last one if they were an image format and took the compressed data from there. | 17:18.10 |
ray_laptop | Robin_Watts_: still handling that in C isn't too bad. I'd just pass the raw data through (even if it's multiple filters) | 17:18.13 |
Robin_Watts_ | ray_laptop: Right. In MUPDF we offer both a pointer to the data, and details of the compression used. | 17:18.42 |
chrisl | Robin_Watts_: the fork filter meta-data included the entire requested filter chain, so decoding to the desired level was easy enough | 17:19.28 |
ray_laptop | Robin_Watts_: Oh. Well, we'd have to come up with a much less sensible approach for ghostscript then. ;-) | 17:19.28 |
Robin_Watts_ | If a device can cope with the compressed format it takes it. (so pdf can take jpeg straight through), but if it's in a format it doesn't understand (like PNG say), pdf will reject it. | 17:19.37 |
| so svgwrite can take PNGs and TIFFs unchanged, but PDF has to decode and flate them. | 17:20.22 |
ray_laptop | none of our parsers take in PNG AFAIK | 17:20.31 |
Robin_Watts_ | mupdf reads PNGs for XPS | 17:20.44 |
| And GhostXPS reads PNGS too therefore | 17:20.52 |
| (no, we don't use pnglib, tor wrote his own) | 17:21.16 |
ray_laptop | OK. so our xps parser could pass through PNGs to an xpswrite (if we ever get one) | 17:21.33 |
Robin_Watts_ | ray_laptop: We have an xpswrite, right henrys? | 17:21.51 |
ray_laptop | Robin_Watts_: I think all it can do so far is 'tiger' | 17:22.20 |
chrisl | Robin_Watts_: the other problem is making sure that nowhere do we try to up/down sample, change color space etc. of the image "samples" being passed around. | 17:22.21 |
ray_laptop | if pdfwrite is changing the colorspace, it will have to decompress, and it'll probably just 'punt' on the call in that case | 17:23.04 |
Robin_Watts_ | chrisl: The point of passing the compressed data is so we'd avoid all that code. | 17:23.16 |
chrisl | I'm just saying it may not be totally obvious everywhere that the samples can be "touched" | 17:24.06 |
ray_laptop | and I guess if pdfwrite determines (from the image parameters) that it needs to resample, it would also just use the current methods | 17:24.45 |
Robin_Watts_ | ray_laptop: Right. | 17:24.54 |
| chrisl: The compressed and uncompressed paths would have to be more or less completely separate (after the initial detection) | 17:25.25 |
ray_laptop | but many times we DON'T need to resample, particularly if the image is already pretty low res, and then avoiding a JPEG round trip is particularly important | 17:25.59 |
Robin_Watts_ | yes. | 17:26.11 |
ray_laptop | sounds like a good project for kens since he now has BOTH the pdf interpreter and the pdf writer :-) | 17:26.46 |
Robin_Watts_ | I think this would be a massive step forward for pdfwrite, but I accept that I could be horribly underestimating the complexity involved. | 17:26.54 |
ray_laptop | Robin_Watts_: yes, you probably are. | 17:27.10 |
chrisl | When I originally talked this over with Ken it was purely from a quality POV and not performance, and we'd discussed giving pdfwrite access to the original data, and leaving the current image code doing as it currently does | 17:27.34 |
ray_laptop | we usually leave massively underestimating the scope to management | 17:27.34 |
Robin_Watts_ | Don't get me wrong, I think it'll be hard. But I do think it's possible :) | 17:27.35 |
| chrisl: How can you give pdfwrite access to the original data without doing what I suggest? | 17:28.33 |
chrisl | Robin_Watts_: using the generic "custom" device method you added to the device interface, we could pass buffers of data. | 17:30.09 |
ray_laptop | the info that came in from cust 801 is interesting! | 17:30.49 |
Robin_Watts_ | chrisl: but the trick is in spotting what those buffers of data is, right? | 17:31.02 |
| s/is/are/ | 17:31.09 |
chrisl | Robin_Watts_: so we have an "image data pending" call with any relevant data, then the buffers of image data, then a "done" call. | 17:32.04 |
Robin_Watts_ | So that would be a PDF only solution? | 17:32.27 |
chrisl | No, PS, too. Just a hook in the image operator | 17:32.49 |
Robin_Watts_ | So the image operator would spot that it was being called with a DCTDecode stream and do something special? | 17:33.34 |
chrisl | Anyway, it's moot, as it doesn't address the problem you guys are looking at | 17:33.35 |
| Robin_Watts_: no, any image would cause the calls to happen, and the target device would handle the buffers (or ignore it) as required | 17:34.34 |
| Eek, I have to go - Robin_Watts_ like I say, this stuff wouldn't help performance, so it's not really relevant | 17:35.48 |
Robin_Watts_ | chrisl: Ah, I see. | 17:36.44 |
| I think passing in the compression format would be sensible (as the zimage code can spot it from the stream), and (AIUI) only one thing can 'suck' the data out of a stream. So getting the data out from the stream in order to pass it across means you can't easily get it out again to pass to the regular image calls. | 17:38.23 |
| tor8: 2 commits on robin/master for you. | 17:58.48 |
ray_laptop | working from one of our multipage sample files, 0.pdf (80 pages), it takes 8.3 seconds to write the clist or to write a pdf (also at 600 dpi). | 18:31.46 |
| the kicker is that rendering from the clist only takes 4 seconds, but rendering from the saved pdf takes 8.1 seconds even with BGPrint=true (with BGPrint=false it takes 9.8 seconds) | 18:34.42 |
| The other "gotcha" is that the customer's files are likely to have complex Japanese fonts which will make the double pass processing even worse. | 18:35.48 |
| I'm looking for a multi-page Japanese file to check that | 18:36.05 |
| henrys: Robin_Watts: (and anybody else that chimed in): I think that the saved page in clist format is going to have to be the approach for the customer. It's the fastest to implement (other than pdfwrite) and is going to give the best throughput. | 18:37.54 |
Robin_Watts_ | did that file have images in? | 18:54.58 |
ray_laptop | Robin_Watts_: 0.pdf doesn't have much image data | 18:59.03 |
| and ibm.pdf is a file with Japanese that I tested. Takes 4.6 seconds to write the clist OR pdfwrite, but renders in 1.6 seconds (has 68 pages) | 19:01.51 |
Robin_Watts_ | I'm surprised that pdfwrite can write as fast as the clist. | 19:03.12 |
| and for situations where 1:1 output is possible, the clist is probably a better bet. | 19:03.50 |
| but for resolution independence pdfwrite seems a reasonable idea. | 19:04.14 |
ray_laptop | Robin_Watts_: I'm not, and for some files, pdfwrite _may_ win (with shadings) | 19:04.16 |
| Robin_Watts_: agreed | 19:04.50 |
| and if size of output is critical, then clist will be bigger than PDF, particularly with images | 19:05.32 |
| well, for ibm.pdf that isn't quite true. On some pages clist is smaller and on some it's bigger :-/ | 19:15.52 |
| but performance-wise, when the resolution is known, clist is a clear winner | 19:16.18 |
| Robin_Watts_: are you willing to help me come up with UI names? | 19:16.46 |
Robin_Watts_ | UI names ? sure | 19:17.13 |
ray_laptop | Robin_Watts_: initially I was thinking: -sSavePages=___ where ___ is "memory" or "file" or "off" | 19:17.59 |
| and -sSavedPagePrint=____ where the string is either a bunch of page ranges or keyword: "normal" "reverse" "even" "odd" | 19:18.56 |
| and having a parameter an application could interrogate: SavePageCount | 19:19.30 |
Robin_Watts_ | I might be tempted to go with -sSavedPages="1-10" etc | 19:20.13 |
| because that's the one param that people usefully have to use. | 19:20.29 |
ray_laptop | I like the verb in there, but PrintSavedPages=____ would be OK | 19:20.57 |
Robin_Watts_ | Are you intending that this is something that will work for all devices? | 19:21.18 |
| it'll be off unless people turn it on ? | 19:21.35 |
ray_laptop | Robin_Watts_: all 'printer' class devices (clist capable) | 19:21.38 |
Robin_Watts_ | Right. so I was wrong. | 19:21.46 |
ray_laptop | Robin_Watts_: yes, default is "off" | 19:21.51 |
Robin_Watts_ | -sSavePages="memory/file/off" is the one that people HAVE to use. | 19:22.02 |
ray_laptop | Robin_Watts_: and if it's on, then it forces clist mode, even if it's a single band | 19:22.16 |
Robin_Watts_ | You could have -sSavePagesTo="memory/file" | 19:23.04 |
| and then have -sSavePages="page selection" | 19:23.20 |
| and by sending a selection that implicitly turns it on. | 19:23.31 |
ray_laptop | -sSavePages=flush will be the one that will clean up the clist files that were accumulated | 19:23.32 |
Robin_Watts_ | ray_laptop: Thats for clearing up after a crash? | 19:23.54 |
| shouldn't need it in normal use? | 19:24.01 |
ray_laptop | Robin_Watts_: so for multiple copies, the saved page print mode would implicitly collate the copies ? | 19:24.57 |
| but if someone wanted "even" then "odd" we'd only delete the ones that were printed ? | 19:25.38 |
Robin_Watts_ | Sorry, I'm not following. | 19:26.01 |
ray_laptop | I was thinking an explicit "flush" so they could do multiple -sSavedPagePrint=____ actions before flushing | 19:26.21 |
Robin_Watts_ | Why would we want to allow multiple SavedPagePrint actions? | 19:26.46 |
| and how would that even work? | 19:26.59 |
| In gs, -sSavedPagePrint sets a string in a dictionary, right? | 19:27.13 |
ray_laptop | Robin_Watts_: I used this at my previous company (with ghostscript saved pages) to do one copy of a job, as a "proof" then if it's OK, print N copies | 19:27.14 |
Robin_Watts_ | how can you have multiple ones? | 19:27.19 |
ray_laptop | these would be printer device parameters | 19:27.34 |
Robin_Watts_ | ray_laptop: Give me a command line ? | 19:27.47 |
| gs -sDEVICE=ljet4 -o lpr: -sSavedPagePrint="odds,pause,even" in.pdf | 19:28.53 |
ray_laptop | gs -sDEVICE=ppmraw -o x-%d.ppm -sSavePages=memory 0.pdf -sSavedPagesPrint=reverse -sSavePages=flush | 19:28.59 |
| Robin_Watts_: what would "pause" do ? | 19:29.24 |
Robin_Watts_ | pause and wait for user input. akin to showpage. | 19:29.39 |
| Give people a chance to take the paper out, and reinsert it for the other side to be printed. | 19:30.00 |
| manual duplexing. | 19:30.02 |
| or to check for a proof. | 19:30.13 |
| I don't see how your command line would work. | 19:30.27 |
ray_laptop | Robin_Watts_: well, the printer device doesn't have access to gs_stdin, but we _could_ parse for pause in the command line processing and send the "even" then pause for input, then the "odd" | 19:30.44 |
Robin_Watts_ | I see that 0.pdf would run and the pages would be saved. | 19:31.00 |
| but then when we process -sSavedPagesPrint=reverse, all that would do is set a dictionary param. | 19:31.22 |
ray_laptop | Robin_Watts_: right, then the next argument is processed (-sSavedPagesPrint=reverse) | 19:31.32 |
Robin_Watts_ | how can that actually trigger an action? | 19:31.32 |
| Is there some special class of param that I'm not familiar with ? | 19:31.45 |
ray_laptop | Robin_Watts_: it gets sent to the printer device as a parameter (using putdeviceprops) | 19:31.55 |
Robin_Watts_ | Right, that's what I don't understand. | 19:32.19 |
ray_laptop | so the parameter is detected in gdev_prn_put_params | 19:32.53 |
Robin_Watts_ | My understanding (probably flawed) was that it would go into a dictionary. And that dictionary would only get sent to the device when we do the next round of get_params/put_params. and that only happens when we have a new file to process. | 19:33.07 |
ray_laptop | Robin_Watts_: we can do whatever we want with the parameters on the command line | 19:33.48 |
Robin_Watts_ | We should not introduce special cases for this work. | 19:34.06 |
| Before we go any further, let's make sure I'm not talking rubbish. | 19:35.04 |
ray_laptop | well, the way that doesn't require a special case is sort of UGLY: -c "<< /SavedPagesPrint (even) >> setpagedevice" | 19:35.30 |
Robin_Watts_ | OK, so what I was saying was right? As it stands at the moment, without adding special cases, just using trailing -sBLAH won't work ? | 19:36.20 |
ray_laptop | but an application (or a printer that uses the gsapi_run_string) would do that | 19:36.21 |
ray_laptop | looks at the code | 19:36.32 |
Robin_Watts_ | I don't see why we need to reach for the special cases. | 19:36.54 |
| Surely we can achieve exactly what we want without changing anything. | 19:37.15 |
| gs -sDEVICE=ppmraw -o x-%d.ppm -sSavePages=memory -sSavedPagesPrint=reverse 0.pdf | 19:37.46 |
| And if you want to allow for proofing: | 19:38.33 |
| gs -sDEVICE=ppmraw -o x-%d.ppm -sSavePages=memory -sSavedPagesPrint=reverse,pause,reverse 0.pdf | 19:38.56 |
ray_laptop | Robin_Watts_: that works for the keyword modes, but what if you don't know the page ranges ? | 19:39.21 |
Robin_Watts_ | ray_laptop: Well, you run the bbox device first. | 19:39.44 |
henrys | I should have brought this up at the meeting but I'm wondering if we shouldn't start using Skype a bit. The new salesperson is going to come on board soon and IRC has deficiencies in 2 areas: sort of geeky and can't talk about customers. Any thoughts? | 19:39.52 |
ray_laptop | YUCK! | 19:39.54 |
Robin_Watts_ | or you use things like: even,odd,1-10,12-13 | 19:40.01 |
| I don't see how your solution helps. | 19:40.16 |
ray_laptop | henrys: not about Skype, about Robin's idea | 19:40.24 |
Robin_Watts_ | henrys: I do use skype. | 19:40.30 |
ray_laptop | of using the bbox device | 19:40.32 |
Robin_Watts_ | The MuPDF customer talks to me on skype. | 19:40.48 |
| I talk to Scott on skype. | 19:40.54 |
ray_laptop | henrys: we can talk about customers on private chats | 19:40.55 |
henrys | Robin_Watts:yes I know but this would involve a commercial app I believe because the free stuff is just one on one right? | 19:41.22 |
Robin_Watts_ | but I try not to use it to talk to (say) tor or paul, because discussions here are logged and public. | 19:41.24 |
| henrys: free skype can do voice calls between n people. | 19:41.59 |
| for video calls, at least one of you needs a premium subscription. | 19:42.13 |
ray_laptop | henrys: it was you who wanted mvrhel_laptop and I to discuss things here so others could listen (and learn?) -- or just chime in with irritating questions ;-) | 19:42.29 |
Robin_Watts_ | ray_laptop: sorry :) | 19:42.40 |
ray_laptop | Robin_Watts_: I didn't mean you. I meant henrys ;-) | 19:42.58 |
Robin_Watts_ | ray_laptop: Most of the time for page ranges, you want evens, or odds, or 1-10, or 25- or something. | 19:43.42 |
| I'd imagine it'd be unusual to want to print the saved pages, THEN decide on the page ranges. | 19:44.20 |
henrys | I am not saying get rid of IRC as the primary interchange, but if the new salesperson is going to get involved it would be convenient to say okay this is not okay for IRC let's switch to Skype for this discussion. | 19:45.31 |
ray_laptop | Robin_Watts_: so we'd always "flush" after printing ? | 19:45.51 |
henrys | anyway just something to think about. | 19:46.05 |
Robin_Watts_ | yes. | 19:46.12 |
| henrys: I don't think it's at all unreasonable to suggest we should all install skype. | 19:46.40 |
| and be logged into it. and share contact details with each other. | 19:46.57 |
ray_laptop | I have Skype (and a skype account) but I usually have it as "offline" so people don't bother me | 19:47.26 |
henrys | you would be alerted on IRC that there is a Skype meeting, enable it, then turn it off. | 19:48.12 |
Robin_Watts_ | I have ray in skype. I don't have henry. | 19:48.14 |
ray_laptop | somehow my skype ID has been picked up by advertisers and if I'm online I usually get 6 or more chat requests from people I don't know | 19:48.31 |
henrys | Yes I haven't done anything with it yet, or I did a long time ago and forgot everything. | 19:48.48 |
Robin_Watts_ | ray_laptop: Did you allow your id to go into the directory? | 19:48.56 |
henrys | there's that guy who missed the meeting ;-) | 19:49.36 |
ray_laptop | Robin_Watts_: but the other thing is the customer wants to run in server mode (and we want him to) to avoid first page startup times | 19:50.04 |
Robin_Watts_ | ray_laptop: And how is what I am suggesting a problem with that? | 19:50.43 |
| I think we'd need a strong reason to make it so that saved pages persist beyond the lifetime of the file. | 19:51.50 |
ray_laptop | so that's why I was thinking of doing it after the file was processed. Also what if there are multiple files on the command line: in1.ps in2.ps -- when do we "apply" the print action | 19:51.52 |
| Robin_Watts_: the printer device has NO idea what file the pages come from | 19:52.14 |
marcosw_ | sorry about missing this morning's meeting. I was working on my car; a 30 minute job ended up taking me over an hour. | 19:52.42 |
henrys | a phd working on a car, impossible. | 19:53.03 |
Robin_Watts_ | ray_laptop: Ah. Now you're making it sound like we want something like: | 19:53.16 |
| henrys: 1) yes, he was trying to put fuel in it. | 19:53.39 |
ray_laptop | so another variation is to have some pseudo operators that can be used: -c "(1-10, 25) SavePagesPrint" | 19:53.53 |
Robin_Watts_ | henrys: 2) yes, that's why the job took so long. | 19:53.59 |
| henrys: 3) etc. | 19:54.05 |
ray_laptop | these would send the parameter to the device with the string | 19:54.17 |
henrys | marcosw_:look forward to the plotter results but I'm afraid they aren't going to help. This is a tough one. | 19:54.29 |
Robin_Watts_ | gs -sSavePages="memory" in1.ps in2.ps -sSavedPagePrint="1,2,3....-c ".savedPagePrint" (or something) | 19:55.18 |
ray_laptop | Robin_Watts_: as long as we have to use the -c syntax, then I prefer -c "(1-10, 25) SavedPagesPrint" | 19:56.52 |
| the -sSavedPagePrint="1-10,25" is redundant and no less confusing | 19:57.19 |
Robin_Watts_ | Is SavedPagesPrint a postscript command then? | 19:57.21 |
ray_laptop | IMHO | 19:57.22 |
| Robin_Watts_: yes, set up as a pseudo operator in gs_init.ps (or one of the init files) | 19:58.02 |
Robin_Watts_ | Actually, I dislike relying on PS. Think about pcl etc. | 19:58.26 |
| Can we define a new command line thing, like: --savedPagesPrint=..... | 19:58.49 |
| that would work on pcl/ps etc much nicer. | 19:59.06 |
| And it can be implemented the same on all languages. (It could call a dso?) | 19:59.45 |
ray_laptop | it would perform the somewhat less handy: mark /SavedPagesPrint (1-10,25) currentdevice .putdeviceprops | 19:59.55 |
Robin_Watts_ | by all means have it exposed within ps, but we should not *require* ps. | 20:00.18 |
ray_laptop | Robin_Watts_: -c is specific to PS | 20:00.46 |
Robin_Watts_ | Right. hence me saying we shouldn't use -c. | 20:01.03 |
| We should use a new thing. | 20:01.09 |
| Like --savedPagesPrint | 20:01.19 |
| we already support things like --debug | 20:01.27 |
ray_laptop | but defining a new command line thingy would be doable in plmain arg processor OR ps arg processor | 20:01.52 |
Robin_Watts_ | and by adding a new thing, we aren't complicating and overloading existing functionality like -s. | 20:02.02 |
| right. where possible we should avoid ps. | 20:02.23 |
ray_laptop | Robin_Watts_: OK. That's acceptable to me. So would you think --SavePages=___ and --SavedPagesPrint=___ ... | 20:03.08 |
| it requires implementing in plmain.c and imainarg.c, but not a big deal | 20:03.41 |
Robin_Watts_ | I've only implemented --debug in imainarg.c | 20:04.20 |
ray_laptop | Robin_Watts_: so it doesn't work with PCL ? | 20:04.38 |
Robin_Watts_ | does too. | 20:04.44 |
ray_laptop | PCL doesn't use imainarg | 20:05.12 |
Robin_Watts_ | Then how is --debug working!? | 20:05.27 |
| "main/debugobj/pcl6.exe --debug" gives me a list of debug flags. | 20:06.35 |
ray_laptop | Robin_Watts_: well, somebody did it (henrys?). Look at pl/plmain.c line 943 | 20:08.02 |
Robin_Watts_ | oh, it's in plmain.c too! | 20:08.08 |
| Phew. grepping for "--debug" wasn't showing it up :) | 20:08.17 |
| sorry about that, so yes, needed in 2 places. | 20:08.35 |
| but the actual implementation can be common. | 20:08.44 |
ray_laptop | but in any case, adding some slightly special code in plmain as well as imainarg doesn't bother me | 20:08.48 |
Robin_Watts_ | That seems a far nicer solution to me, personally. | 20:09.02 |
ray_laptop | Robin_Watts_: and I agree that the implementation can (probably) be shared | 20:09.11 |
| I'm still not convinced about flushing implicitly | 20:09.48 |
Robin_Watts_ | no, flushing implicitly doesn't work this way. | 20:10.14 |
ray_laptop | Robin_Watts_: it could | 20:10.25 |
Robin_Watts_ | could it? | 20:10.34 |
| Then maybe we should have a way to disable automatic flushing ? | 20:10.49 |
| i.e. have it automatically flush by default? | 20:10.57 |
| I'm gonna have to go in a minute, so if I disappear, that's why, sorry. | 20:11.14 |
ray_laptop | gs -sDEVICE=ppmraw -o x-%d.ppm --SavePages=memory in.pdf --SavePagesPrint=even,odd | 20:11.33 |
| it could keep the clist files and the list in memory for a second --SavePagesPrint=reverse or we could require --SavedPagesFlush | 20:12.24 |
| or maybe just use --SavePages=flush | 20:12.59 |
| and the question is whether or not we keep saved pages around after gs exits. I would think for --SavePages=file then, yes | 20:14.03 |
Robin_Watts_ | I like the idea of a single --savepages flag. | 20:14.11 |
ray_laptop | unless someone explicitly does SavedPagesFlush | 20:14.22 |
Robin_Watts_ | --savepages={memory,flush} | 20:14.50 |
ray_laptop | Well, it's easier to remember | 20:14.52 |
Robin_Watts_ | or a page range. | 20:14.54 |
| even/odd/1-10,12- | 20:15.08 |
| Could we offer: 1:10 ? | 20:15.20 |
| meaning 10 copies of page 1 ? | 20:15.27 |
ray_laptop | --SavePages="print even" | 20:15.37 |
Robin_Watts_ | or evens:10 | 20:15.40 |
| why print ? | 20:15.45 |
ray_laptop | just because I like it :-) | 20:16.03 |
Robin_Watts_ | personally I dislike HavingToRememberToShiftCaps | 20:16.20 |
| gotta go, sorry. | 20:16.49 |
ray_laptop | I also prefer "copies=10 print normal" to "normal:10" | 20:17.11 |
| I'll work on the saving and printing given printer device params and we'll discuss the UI more later | 20:18.17 |
tor8 | Robin_Watts_: two commits look fine | 21:20.04 |
Robin_Watts_ | tor8: (FOr the logs) Thanks. | 23:24.48 |
| Forward 1 day (to 2013/06/26)>>> | |