| <<<Back 1 day (to 2015/02/22) | 20150223 |
Guest39373 | I could see a huge gain in throughput when I use the NumRenderingThreads option. With NumRenderingThreads=4, CPU utilization was around 50%, and increasing NumRenderingThreads further was not having much impact on throughput. | 10:43.15 |
| is there any other way to maxout cpu? | 10:43.26 |
| *maxout CPU utilization? | 10:44.02 |
kens | Hmm no IRC logs this morning ? Ghostsbot is not happy ? | 10:44.43 |
| Ah no it's OK, I see something now | 10:45.20 |
| Guest39373 : There's a limit to how many rendering threads will be useful. | 10:45.47 |
Guest39373 | do you mean that due to dependencies in the code, there is limit on max threads | 10:47.21 |
kens | Since the clist requires reading to/from disk (unless you have a memory-based clist), no matter how many threads you have there is always going to be some time spent by those threads reading from disk, and there is also contention for other resources which may be locked (eg writing to memory etc) | 10:47.34 |
| So although you may have N threads spawned, at any given moment, several of them will be waiting for access to some system resource. | 10:48.01 |
| Which limits the maximum CPU utilisation | 10:48.11 |
Guest39373 | OK, So is there a way to identify the proper value for NumRenderingThreads? | 10:49.08 |
chrisl | Also, we'll only ever spawn a thread for each band - so even if you specify 16 threads, if you only have 4 bands, we'll only use 4 threads | 10:49.40 |
kens | Guesswork or experimentation. Also it depends (obviously) on the CPU you are running; a dual-core CPU with no hyper-threading isn't going to benefit much | 10:49.43 |
| Oh yeah that's true too | 10:49.50 |
Guest39373 | OK. Thank You. | 10:51.10 |
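A crude way to find the sweet spot kens and chrisl describe (threads capped by the band count and by resource contention) is to sweep thread counts and time each run. A minimal sketch; the flags are as I understand them from the Ghostscript docs (-dNumRenderingThreads, and -dMaxBitmap=0 to force banded clist rendering so there are bands for the threads to work on) and should be double-checked against your gs version:

```python
import shlex

def gs_render_cmd(input_pdf, threads):
    """Build a Ghostscript command that renders to nowhere with a given
    number of rendering threads. Banding must be in use for the threads
    to matter, so force the clist path."""
    return [
        "gs", "-dBATCH", "-dNOPAUSE", "-dQUIET",
        "-sDEVICE=ppmraw", "-o", "/dev/null",
        "-dMaxBitmap=0",                       # force banded (clist) rendering
        "-dNumRenderingThreads=%d" % threads,  # threads used <= number of bands
        input_pdf,
    ]

# One would time each variant (subprocess.run wrapped in time.perf_counter)
# and stop raising the count when throughput stops improving.
for n in (1, 2, 4, 8):
    print(shlex.join(gs_render_cmd("test.pdf", n)))
```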
kens | Hmm, reading colour specifications from a Default Appearance for an annotation looks to be totally screwed up, I wonder if it was me that wrote that :-( | 10:53.47 |
| Anyone know where there's a definition of the PDF operator 'r' which takes a 3 element array as an argument and appears to define a colour in rgb ? | 11:20.39 |
Robin_Watts | chrisl, paulgardiner: (In particular, though others may be interested)... Weather in Colorado looks... interesting. | 11:45.16 |
kens | Looked cold last time I looked | 11:45.27 |
| and snowy | 11:45.30 |
Robin_Watts | Indeed. A high of -4 today. A low of -12. | 11:45.59 |
kens | Yeah that's what I saw too | 11:46.16 |
| but the -12 is at night I think | 11:46.24 |
Robin_Watts | yeah. | 11:46.34 |
| but nonetheless... | 11:46.41 |
kens | Oh sure, its going to be cold | 11:46.50 |
kens | has thermal gear ready to pack | 11:46.58 |
Robin_Watts | and longmont/rocky mountain park/vail will be colder. | 11:47.18 |
kens | Still 2 weeks, could change | 11:47.34 |
chrisl | I think the only thing I'm missing is a warm hat..... Hmmm, I hate warm hats :-( | 12:01.09 |
kens | Robin_Watts : the 10 day forecast for Copper has next Tuesday/Wednesday as -15 to -7. Going to be very chilly skiing weather | 12:05.40 |
Robin_Watts | tor8: Updated commits on robin/master for the mupdf clipping stuff. | 13:07.33 |
| I'm down to differences in just 2 files now, both of which seem fine to me. | 13:08.00 |
tor8 | Robin_Watts: how does the cluster handle merge commits? | 13:36.02 |
| Robin_Watts: looking now | 13:36.06 |
Robin_Watts | tor8: it'll test down the first parent. | 13:37.04 |
| so if you merge a branch into master, only the merged thing will be tested. | 13:37.22 |
tor8 | Robin_Watts: good. | 13:38.01 |
| that's what I want to happen when I merge the 'html' branch | 13:38.11 |
| then the question that remains is -- do the fixups that need to be done to build after the merge go in a separate commit or squashed into the actual merge? | 13:38.43 |
| as a separate commit, they read properly when viewing diffs | 13:39.22 |
| Robin_Watts: fz_irect_from_rect_nonzero instead of _clip maybe? | 13:40.37 |
| Robin_Watts: how does acrobat handle a 0, 0.1 to 100, 0.1 clip region? | 13:42.04 |
Robin_Watts | tor8: Again, that's a 1 pixel high beast. | 13:42.27 |
| Which fixups? | 13:42.37 |
tor8 | I suspect that might be the fill rule playing in | 13:42.39 |
| Robin_Watts: for my html branch | 13:42.45 |
Robin_Watts | tor8: So, let me see if I am following this... you have a html development branch. | 13:44.01 |
| it's ready to be merged back to master, except master has moved on, and so there are fixups required. | 13:44.20 |
| The correct thing to do in a situation where merges are allowed would be to merge master into your branch. | 13:44.42 |
| Then do the fixups there. | 13:44.45 |
| Then merge your branch back to master. | 13:44.51 |
| That way, all the commits on master test out perfectly. | 13:45.06 |
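Robin's three steps can be walked through mechanically in a scratch repository. This is only a sketch of the flow in a throwaway repo (branch and file names are made up, not the actual mupdf history):

```python
import os
import subprocess
import tempfile

repo = tempfile.mkdtemp()

def git(*args):
    subprocess.run(["git", *args], cwd=repo, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

def commit(name, msg):
    with open(os.path.join(repo, name), "w") as f:
        f.write(name)
    git("add", name)
    git("commit", "-qm", msg)

git("init", "-q", "-b", "master")
git("config", "user.email", "dev@example.com")
git("config", "user.name", "dev")

commit("base.c", "initial")
git("checkout", "-qb", "html")        # long-lived feature branch
commit("html.c", "html work")
git("checkout", "-q", "master")
commit("core.c", "master moves on")   # master diverges meanwhile

# Step 1: merge master into the branch; step 2: fix up the build there.
git("checkout", "-q", "html")
git("merge", "-q", "-m", "merge master into html", "master")
commit("fixup.c", "fixups after merging master")

# Step 3: merge back; this fast-forwards, so every commit reachable
# from master has already been made buildable.
git("checkout", "-q", "master")
git("merge", "-q", "html")
```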
| Now, we tend to disapprove of merges. | 13:45.18 |
tor8 | so ping-pong the parents and have a separate fixup commit | 13:45.22 |
Robin_Watts | so the alternatives would be: | 13:45.30 |
tor8 | yeah, but rebasing across this change won't be easy | 13:45.34 |
Robin_Watts | 1) swallow our pride and accept the merge. | 13:45.40 |
| 2) rebase the whole of the html branch. | 13:45.49 |
| 3) commit the html branch as 'html-development-branch' to golden. | 13:46.20 |
| and commit a squashed version of 3 to master. | 13:46.46 |
tor8 | the html branch has upwards of 60 commits, that I want to preserve historically | 13:46.48 |
| I'm not fond of the keep-a-branch-for-posterity and commit a squashed version idea | 13:47.14 |
Robin_Watts | where the squashed version of 3 has all the fixups in, and says "for full history see..." | 13:47.22 |
| It's the only way to keep bisectability. | 13:47.33 |
tor8 | this is the case where I'd say merges are useful | 13:47.33 |
Robin_Watts | I agree that long term development of stuff like this deserves to have the history kept. | 13:48.20 |
| And if that means having merges, then so be it. | 13:48.37 |
| One way to do this that would work... | 13:49.00 |
| merge to master. do the fixups. | 13:49.11 |
tor8 | my one-commit-merge that has both the merge and fixups in it is on tor/master | 13:49.11 |
Robin_Watts | squash the fixups into the merge. | 13:49.18 |
tor8 | as well as two other unrelated commits ready for review | 13:49.21 |
| Robin_Watts: yeah, that's what I've done now (though redoing any other way is easy enough) | 13:49.55 |
Robin_Watts | Oh... are you saying that not every commit on your branch tests out nicely? | 13:50.08 |
tor8 | Robin_Watts: they should | 13:50.18 |
Robin_Watts | Then great. | 13:50.25 |
| Let me look at your commits | 13:50.30 |
tor8 | ah, and the merge commit is on tor/html not tor/master | 13:51.05 |
Robin_Watts | The 2 commits on tor/master look good. | 13:51.32 |
tor8 | I'm not happy about how gitweb and gitk display the squashed-merge commit though | 13:52.06 |
| so maybe your tango with merge one way, fixup, merge the other way is best | 13:52.21 |
Robin_Watts | tor8: Let me grab lunch, while you experiment with your favourite way of working :) | 13:54.03 |
tor8 | however, the two-step merge will mean that the first of the merge commits won't actually build for bisect | 13:55.08 |
Robin_Watts | I'd *REALLY* like some comments about the fz_css_ structures. | 13:55.31 |
tor8 | and 'git merge html' in the second step just does a fast-forward | 13:55.44 |
| so maybe that's not the way to do it... | 13:55.52 |
Robin_Watts | tor8: maybe. | 13:56.05 |
jogux_ | tor8: I'd go with how you've already done it. If it was an actual merge conflict (i.e. one where git couldn't automatically reconcile conflicting changes) we'd have to do it the way you've done it, I think... | 13:56.08 |
| (you could force 'git merge html' to add a merge commit instead of doing a fast-forward, but not sure that particularly helps) | 13:57.19 |
tor8 | jogux_: yeah. I suspect that is the way things should work, despite the weird way merge commits are displayed in various tools | 13:58.03 |
| actually, gitweb's display is starting to make sense now | 14:02.46 |
| Robin_Watts: the fz_css_ structures map to the CSS grammar productions | 14:04.17 |
jogux_ | yeah, the gitweb one looks kind of sane to me - you can see both parents of the merge, and the additional changes that were made show in the diff. I think | 14:04.27 |
tor8 | yeah, looking at diff1 | diff2 lets you see both ways of viewing the merge | 14:04.49 |
| but you have to pick each individual diff file to see those | 14:05.47 |
| just looking at the 'parent' commit diff doesn't show the fixups as nicely | 14:06.24 |
| because the fixup is hidden in all the diffs from html to master | 14:07.44 |
Robin_Watts | include/mupdf/html.h has various enums in. | 14:30.49 |
| Those might be nicer with FZ_ (or FZ_SOMETHING_) prefixes ? | 14:31.14 |
| Things like fz_css_number_s have an 'int unit'. | 14:32.07 |
| Could that be an fz_number_unit ? where fz_number_unit is a named enum ? | 14:32.37 |
| anonymous enums are nicer than defines, but not as nice as named enums. | 14:33.34 |
tor8 | yeah, named enums should be easy enough to add | 14:34.19 |
| if there should be a prefix, it should be FZ_CSS_ and FZ_HTML_ which is a lot of prefix to add | 14:35.11 |
Robin_Watts | I'd be in favour of that, personally. | 14:35.27 |
tor8 | it might be better to move this to an internal header file since the enums and structs should be opaque to most clients | 14:35.51 |
Robin_Watts | I am right in thinking that these are external currently? | 14:35.59 |
| Yes. That would be a better solution. | 14:36.05 |
| (I am not averse to that being done post commit) | 14:36.31 |
tor8 | this code is still in-progress but I want to get it into master before the meeting | 14:37.16 |
| and we can certainly do plenty of cleanup tasks | 14:37.30 |
| I've got the external naming and APIs sorted, just not the actual header file organisation | 14:37.48 |
| some c files could be merged, and then many of the structs and enums could be file specific | 14:38.07 |
| and I got it to not leak like a sieve :) | 14:38.24 |
pipitas | Short question about the PDF spec and how Ghostscript generates "stream...endstream": | 14:49.48 |
| The official spec says: "There should be an end-of-line marker after the data and before endstream;" | 14:50.04 |
| However, I just noticed GS 9.16 (selfcompiled from today's Git sources) does not put an EOL before 'endstream'. | 14:50.48 |
kens | Yeah that's one of the PDF spec's weaknesses 'should be' not 'must be' | 14:50.52 |
| Try setting PDFA or PDFX | 14:51.11 |
Robin_Watts | kens: Presumably there are things that would be broken if we did always put an EOL marker in there? | 14:52.19 |
pipitas | kens: So it is a deliberate decision by you/GS developers to not put an EOL before 'endstream'? (Just asking, no criticism intended) | 14:52.23 |
kens | Robin_Watts : not broken, but streams would be larger (by one byte). | 14:52.44 |
Robin_Watts | oh. hmm. | 14:52.58 |
kens | Robin_Watts : and pipitas I fixed the code for PDF/A and PDF/X because conformance checkers for those sub-types specifically check. I didn't want to change existing behaviour, so I left alone all the places where it was simply 'endstream' | 14:53.22 |
pipitas | I noticed it when trying to apply another CLI tool, "pdf-parser.py" to a GS-generated PDF, and it couldn't extract the stream because of that. I'm just asking because I can easily submit a request to the author of pdf-parser.py, telling him that GS behaves like that (unless it was a bug in current Git). | 14:54.19 |
Robin_Watts | tor8: I've just read through the merge commit (which looks to be everything) | 14:54.42 |
| It seems plausible. | 14:54.54 |
| So, good stuff! Go for it. | 14:55.01 |
kens | Note from that point on, new code always adds white space as required, EOL or whatever. Eg line 2160 of gdevpdf.c, the linearisation code. | 14:55.15 |
| pipitas; "Acrobat can open it" :-D | 14:55.40 |
| I'm almost certain that a search would turn up files produced by other PDF tools which don't always put a \n before an endstream. | 14:56.40 |
pipitas | kens: Sure, I didn't say all other tools put EOL before 'endstream' and Ghostscript is the only one behaving so, nor that Acrobat refuses to open it. | 14:58.37 |
kens | The PDF 1.7 spec says "It is recommended that there be an end-of-line marker after the data and before endstream; this marker is not included in the stream length." and we don't even produce files requiring a version that high, I don't think. | 14:58.46 |
| The PDF 1.4 spec doesn't even have that recommendation | 15:00.04 |
| You should probably tell the author of pdf-parser.py that he can't really rely on that. | 15:00.28 |
pipitas | kens: I haven't checked previous specs (there is an almost complete list on that relatively recent web resource: http://acroeng.adobe.com/wp/?page_id=321 - other good stuff there too...) - but I doubt this was a new recommendation for 1.7 | 15:00.42 |
| Ah, you already checked the 1.4 spec... | 15:00.58 |
kens | I have all the versions, I just can't be arsed to check them all | 15:01.04 |
pipitas | No-one (in my room at least) wants to arse you, kens :-) | 15:01.37 |
kens | It is quite specific that a 'stream' must be followed *only* by a CR/LF or just a plain LF. CR is unacceptable | 15:02.01 |
| "Can't be arsed" is UK slang for 'can't be bothered' only slightly more vehement | 15:03.38 |
pipitas | Ok, I'll write to Didier Stevens then about pdf-parser.py (I just needed confirmation that this GS behaviour is not exceptional for current Git). | 15:03.59 |
kens | No, it's been like that for ages, and I'm in no rush to change it | 15:04.18 |
| It's fine according to older versions of the spec, and a 'should' is insufficiently strong at this point for me to bother changing it. | 15:05.10 |
| Also, it wasn't specified in earlier versions, so I'm pretty sure there will be other files like this; it would be better for pdf-parser.py to be liberal in this regard, it will increase the number of files it can process without problems | 15:05.52 |
| If you really need to have a PDF file like that, setting PDFA or PDFX for pdfwrite will generate files like that. | 15:06.28 |
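The liberal reading kens suggests is easy for a pdf-parser.py-style tool: accept `endstream` with or without a preceding EOL. A minimal sketch of such an extractor (not how Ghostscript or pdf-parser.py actually implement it):

```python
import re

def stream_payload(pdf_bytes: bytes) -> bytes:
    """Extract the first stream body, tolerating a missing EOL before
    'endstream'. PDF 1.7 only *recommends* the EOL (and excludes it from
    /Length); older specs are silent, so both forms occur in the wild."""
    m = re.search(rb"stream\r?\n(.*?)(?:\r\n|\r|\n)?endstream",
                  pdf_bytes, re.S)
    if not m:
        raise ValueError("no stream found")
    return m.group(1)

strict  = b"<< /Length 5 >>\nstream\nHELLO\nendstream"   # EOL before keyword
gs_like = b"<< /Length 5 >>\nstream\nHELLOendstream"     # no EOL, as gs emits
assert stream_payload(strict) == b"HELLO"
assert stream_payload(gs_like) == b"HELLO"
```

Note the spec point kens makes later also shows up here: the keyword `stream` itself must be followed by CRLF or LF (never a lone CR), hence the `\r?\n` after it.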
tor8 | Robin_Watts: thanks. | 15:24.29 |
| did you have any thoughts about naming the irect from rect for clipping function? | 15:25.02 |
Robin_Watts | tor8: Urm... | 15:26.03 |
| I kinda liked _clip as it emphasised the reason for it being a special case. | 15:26.22 |
| but I'm open to persuasion. | 15:26.33 |
tor8 | yeah, but it seems very specific :) | 15:26.41 |
Robin_Watts | It's a specific workaround :( | 15:26.52 |
tor8 | but given the comments around it, I'm okay with it | 15:26.55 |
Robin_Watts | If we omit the special case, we get progressions in a few files. | 15:27.12 |
| and just 1 file that's a regression against acrobat. | 15:27.27 |
| a file goes from having 3 vertical 1 pixel lines in to being completely blank. | 15:27.51 |
| That's 3 vertical lines that are 1 pixel wide. | 15:28.09 |
| tor8: let me run a bmpcmp with the progressions in and you can see. | 15:36.23 |
tor8 | if it were up to me I'd ignore the special case for acrobat | 15:43.53 |
Robin_Watts | ok. let's double check the bmpcmp and if we're both happy we'll do that. | 15:46.40 |
| I'd like to ignore acrobat's in(s)anity too. | 15:46.52 |
| tor8: weird. | 15:56.52 |
| gah. | 15:57.37 |
tor8 | Robin_Watts: huh? | 16:04.19 |
henrys | good morning ron_ | 16:04.43 |
ron_ | Good morning.... | 16:05.11 |
rayjj | hi, ron | 16:06.32 |
henrys | ron_: our first mission will be to get you over to skype. That's where all the SOT stuff happens. Have you used the skype channel Ghostdocs? | 16:06.35 |
rayjj | henrys: is the 8am meeting for all of us, or just the SOT crowd? | 16:06.54 |
henrys | folks ron_ is going to be helping out with SOT support. | 16:07.02 |
Robin_Watts | ron: Hi. | 16:07.10 |
kens | What 8 am meeting ? | 16:07.15 |
ron_ | I have a skype account but I don't usually leave it "on". I'll fire it up and see if I can find the channel. | 16:07.32 |
Robin_Watts | Ron: See my private tab... | 16:07.34 |
henrys | no meeting. Just ron_ is starting today | 16:07.35 |
| well starting SO support | 16:08.10 |
| ron_: I've added you to the SO support email list. | 16:09.10 |
Robin_Watts | chrisl: hey | 16:36.21 |
| I see the ghostscript makefile now has a TARGET_ARCH_FILE thing. | 16:36.48 |
| How does that work then? Everything still seems to look for 'arch.h' | 16:38.48 |
mvrhel_laptop | good morning | 16:40.39 |
chrisl | Robin_Watts: you ought to know, you added it...... | 16:42.05 |
Robin_Watts | I did? Crap. | 16:42.22 |
chrisl | Basically, if you set TARGET_ARCH_FILE then we'll copy the contents to obj/arch.h instead of generating a new arch.h | 16:42.57 |
Robin_Watts | yeah, I see that now. | 16:43.07 |
| gcc include paths are bonkers. | 16:43.18 |
chrisl | Unfortunately, not just gcc..... | 16:43.44 |
Robin_Watts | malc_: You were looking for me yesterday... | 16:58.11 |
malc_ | Robin_Watts: yeah, again the dimension collection woes | 16:58.57 |
| Robin_Watts: when i use fz_document style approach for this (load_page/bound_page) it works fine for some pdfs, but then again fails miserably i.e. very slow on others | 16:59.34 |
Robin_Watts | tor8: OK, bmpcmp is up. | 17:00.51 |
malc_ | on this machine it takes ~10 seconds to "measure" ECMA-376... | 17:00.51 |
Robin_Watts | malc_: It's not a broken file that's being repaired as part of bound_page is it ? | 17:01.56 |
malc_ | Robin_Watts: nope | 17:02.08 |
Robin_Watts | pdf_bound_page does nothing time consuming at all. | 17:02.21 |
| pdf_load_page might.... | 17:02.30 |
| is it the load or the bound that's slow ? | 17:02.33 |
malc_ | Robin_Watts: i believe it just doesn't have too many Pages with more than 16 entries | 17:02.34 |
| the profile is dominated by strcmp | 17:02.48 |
Robin_Watts | malc_: Can you time the same thing without the fz_bound_page please? | 17:06.18 |
malc_ | Robin_Watts: bound page plays no role in timing (at all) | 17:09.30 |
| iow i reran the test | 17:09.45 |
Robin_Watts | malc_: Right. | 17:09.51 |
| Ah. pdf_load_page does the transparency detection... | 17:10.45 |
| Can you try #if 0ing out the last bit of pdf_load_page ? | 17:11.39 |
| #if 0\nfz_try(ctx)....fz_catch(ctx){....}\n#endif | 17:12.18 |
malc_ | do_blending part? | 17:12.35 |
| 9.2 sec | 17:13.41 |
Robin_Watts | I don't see a do_blending anywhere. I see a "pdf_resources_use_blending"... | 17:14.00 |
| Ok, I was hoping that would have saved more time. | 17:14.09 |
malc_ | the problem is pdf_lookup_page_obj | 17:14.21 |
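malc_'s strcmp-dominated profile is what repeated top-down page lookup looks like. A toy model (nothing here is mupdf's actual API) of why calling a tree walk like pdf_lookup_page_obj once per page goes quadratic over the whole document, and why a cached flat page list fixes it:

```python
def make_flat_tree(n):
    # A single /Pages node whose /Kids are all leaf /Page nodes, as some
    # producers (and repaired files) emit.
    return {"Type": "Pages",
            "Kids": [{"Type": "Page", "Num": i} for i in range(n)]}

def lookup_page(tree, want, stats):
    """Find the want-th page by scanning /Kids from the start; each node
    visit stands in for the dictionary key comparisons (the strcmp in
    the profile) a real walk performs."""
    seen = 0
    for kid in tree["Kids"]:
        stats[0] += 1
        if kid["Type"] == "Page":
            if seen == want:
                return kid
            seen += 1
    raise IndexError(want)

n = 500
tree, stats = make_flat_tree(n), [0]
for i in range(n):                     # "load every page": one lookup each
    lookup_page(tree, i, stats)
assert stats[0] == n * (n + 1) // 2    # quadratic in the page count

page_cache = [k for k in tree["Kids"] if k["Type"] == "Page"]  # one O(n) pass
assert page_cache[123]["Num"] == 123   # later lookups become O(1)
```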
tor8 | Robin_Watts: remind me (yet again) which side of the bmpcmp is the reference and which is the candidate? | 17:37.31 |
| could we *please* add that to the page header | 17:37.37 |
| I always forget | 17:37.39 |
| the middle column seems to look better overall, and where it doesn't the page is so full of anti-aliased image alignment leaks I don't care about a pixel this way or that | 17:40.46 |
mvrhel_laptop | ah! finally figured out why the color names for the ICC NCLR output did not work | 17:43.43 |
rayjj | tor8: it is CRD (check, ref, diff) | 17:43.48 |
mvrhel_laptop | kens had reordered things in put params which caused the profile to load before the names list of the colorants | 17:44.24 |
| so we were getting the default names and the colorant names | 17:44.42 |
rayjj | tor8: so if the middle looks better, that's the "ref" -- CRD is also "Changed", "Reference", "Difference" | 17:44.46 |
mvrhel_laptop | I really need to spend a couple weeks and make some regression tests for all these options | 17:46.19 |
| this was broken in May :( | 17:46.48 |
| oh and there was actually another bug lurking in all of this | 18:05.55 |
hyper_ch | hi there, I have a strange thing... my document scanner seems to have created an invalid pdf file... well, I couldn't OCR it with abbyocr11... so I thought I run it through ghostscript and it produced those results: **** Warning: File has insufficient data for an image. and warning: ignoring zlib error: incorrect data check ..... now I wonder, does my document scanner have issues? | 18:39.17 |
| looking at the pdf I see half of a page is missing and there's some color screw-up on another... I assume those are the two warnings as they coincide with one another | 18:43.02 |
| it's the first time I notice such a problem. Can I use gs somehow to actually verify the pdf created by the scanner is ok? | 18:43.38 |
Robin_Watts | tor8: I disagree. | 18:53.23 |
hyper_ch | hmmmm, are there exit codes for ghostscript somewhere? | 18:53.32 |
Robin_Watts | I think the left column looks better in every case where I have a preference. | 18:53.35 |
| hyper_ch: gs is not a pdf validator. | 18:53.51 |
| is your scanner a xerox? | 18:54.13 |
hyper_ch | Robin_Watts: no, a brother document scanner | 18:54.25 |
| ads-2600w | 18:54.34 |
Robin_Watts | dunno then. | 18:54.38 |
hyper_ch | other files scanned with it are fine | 18:54.42 |
| http://paste.debian.net/155873/ | 18:55.34 |
| first file produces errors, second file is fine | 18:55.41 |
| Robin_Watts: you know a pdf validator then? | 18:56.02 |
Robin_Watts | nope. | 18:56.09 |
hyper_ch | --> **** This file had errors that were repaired or ignored. --> I need to parse that then | 18:56.25 |
Robin_Watts | tor8: The only duff one is tests_private/comparefiles/Bug693509.pdf | 18:56.39 |
| and that's the acrobat zero sized clip thing. | 18:56.51 |
| oh, and the W's. | 18:57.52 |
hyper_ch | Robin_Watts: gs tells me to inform itextpdf.sf.net-lowagie.com about it.... is that: http://itextpdf.com/ ? | 18:57.52 |
Robin_Watts | presumably. That's the software that was used to generate your pdf | 18:58.16 |
hyper_ch | thx | 19:01.12 |
| Robin_Watts: got my little work around ;) | 19:03.57 |
kens | hyper_ch : You can use -dPDFDEBUG and -dPDFSTOPONERROR to find out which object is causing an error, and why. However, the first error seems quite explicit: the image stream didn't have enough data to satisfy the declared size of the image. The second one indicates a Flate decompression problem; that *could* be our bug, I seem to recall there was a fix that might be related. | 19:12.53 |
hyper_ch | kens: my solution: | 19:13.41 |
| checkGS=$(gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile="${destPath}/raw.pdf" -f ${partFile} 2>&1) | 19:13.42 |
| if [[ "${checkGS}" == *"file had errors"* ]] | 19:13.44 |
| then | 19:13.45 |
| errorGS="ERROR" | 19:13.47 |
| fi | 19:13.48 |
kens | However, the current released version of GS is 9.15 so I'd suggest you at least try that. If you are comfortable compiling GS and using Git then you can try the master code. | 19:13.53 |
| Or send us the PDF file and we can look at it. | 19:14.24 |
hyper_ch | kens: yeah, the pdf has half a page missing on page 4 | 19:14.37 |
mvrhel_laptop | hi kens | 19:14.40 |
hyper_ch | and the other error is weird | 19:14.46 |
| however I can't send you the pdf ;) | 19:14.52 |
kens | Hi mvrhel_laptop I'm not really here, just passing through | 19:14.59 |
mvrhel_laptop | do you happen to have handy a deviceN ps or pdf file that has a DeviceN image that is not in an indexed color space | 19:15.12 |
| kens: i understand | 19:15.18 |
kens | Err, I'm not sure, let me see. | 19:15.22 |
mvrhel_laptop | don't worry. I will find one | 19:15.26 |
hyper_ch | kens: I have already my workaround.... after all the processing I'll make sure that the file will be labelled ERROR.... pdf | 19:15.28 |
kens | mvrhel_laptop : I can always *make* one :-) | 19:15.36 |
mvrhel_laptop | that is what I think I am going to have to do | 19:15.47 |
kens | hyper_ch : well, if that's sufficient..... | 19:15.47 |
| mvrhel_laptop : give me a minute, I may have one, I put some tests together for the colour work in pdfwrite, I have quite a few lying around | 19:16.14 |
mvrhel_laptop | kens: that would be great | 19:16.24 |
kens | I have to remember *where* I filed them though ;-) | 19:16.45 |
mvrhel_laptop | kens: we need to think about getting lift tickets too | 19:17.02 |
| we can chat about that tomorrow | 19:17.12 |
kens | mvrhel_laptop : Yes I was looking online, they claim to be cheaper | 19:17.17 |
| 3 days for $240 | 19:17.26 |
mvrhel_laptop | right. rayjj mentioned something about multi-resort tickets also | 19:17.43 |
| if we wanted to go to different locations | 19:17.53 |
kens | Yeah I'm not sure about that. | 19:18.00 |
mvrhel_laptop | I am fine staying where we are | 19:18.09 |
kens | From Vail the 3 day pass allows you to go elsewhere, it doesn't seem that way from Copper | 19:18.13 |
hyper_ch | kens: yeah, basically I scan the files in... then I send them through OCR - which failed because of those errors -.... then I attach a blank page and apply a qualified timestamp and a digital signature... then I count total pages and rename file to YYYY-MM-DD HH:MM - #pages.pdf.... and it will then end up in the user's folder. So the only thing I have to do know is track if gs has an error and if so, alter the final name by prepending it | 19:18.41 |
mvrhel_laptop | but if its easy I am fine with either way | 19:18.41 |
hyper_ch | with ERROR | 19:18.42 |
kens | TBH I think I'm fine staying in Copper, I doubt I'll do all the trails in 3 days, especially if its going to be freezing | 19:18.42 |
mvrhel_laptop | right | 19:18.53 |
| how cold is it? | 19:19.00 |
kens | OK I have a hand-built PS file that does 'something' with images and DeviceN | 19:19.02 |
| the weather forecast for next wed/Thurs is -15 to -7 Centigrade | 19:19.19 |
| low numbers Fahrenheit | 19:19.28 |
mvrhel_laptop | ok. that is on the chilly side | 19:20.02 |
kens | mvrhel_laptop : I'll mail you this file, its small and you can see if its any good | 19:20.03 |
tor8 | Robin_Watts: the chartab.pdf corners of the W's are clipped in the old vs new | 19:20.05 |
mvrhel_laptop | thanks kens | 19:20.10 |
Robin_Watts | tor8: They are. | 19:20.15 |
| That's because the d1 specifies a size that says they should be clipped, I believe. | 19:20.29 |
| previously when we bounded the paths, we got the mitres in too. | 19:20.47 |
| but strictly speaking we were wrong. | 19:20.54 |
tor8 | odd that it would clip that way when rotated though | 19:20.58 |
mvrhel_laptop | henrys: I wish there was a way I could get stefan to pull from the repository. Handing him "changed" files is going to get ugly I fear | 19:21.47 |
tor8 | are they type3 fonts in that file? | 19:21.55 |
Robin_Watts | tor8: I was assuming that the glyph was not rotated, and that the contents for the type3 glyph said: rotate, and stroke this 'W'. | 19:22.03 |
tor8 | Robin_Watts: in that case, it would make more sense | 19:22.27 |
| guess I should take a closer look | 19:22.34 |
mvrhel_laptop | got it kens. thanks | 19:22.38 |
kens | NP | 19:22.42 |
| I should have some better ones here somewhere | 19:22.51 |
| I had an exhaustive set, every possible colour space with every possible alternate | 19:23.17 |
Robin_Watts | tor8: acrobat wants a font pack before it will display that file. | 19:24.02 |
mvrhel_laptop | damn network . | 19:24.09 |
| network issues here | 19:24.18 |
tor8 | Robin_Watts: <stroke_text font="www.pdflib.com" wmode="0" colorspace="DeviceGray" color="0" matrix="0.5771 -0.8167 -0.8167 -0.5771 0 842" trm="112.833 0 0 112.833"> | 19:24.21 |
| looks like a rotation matrix to me | 19:24.26 |
mvrhel_laptop | kens. oh this is index colors | 19:24.32 |
kens | Is it ? Drat | 19:24.40 |
mvrhel_laptop | I am looking for an image that is not index | 19:24.42 |
Robin_Watts | tor8: right, so we're good then ? | 19:24.42 |
tor8 | so we probably bound the glyph badly when stroking? | 19:24.53 |
mvrhel_laptop | oh no | 19:25.08 |
| never mind | 19:25.10 |
| it is a mix | 19:25.12 |
Robin_Watts | tor8: No, previously we used to bound the glyph and that was why it worked. | 19:25.20 |
kens | AH yes possibly, there are several spaces in there | 19:25.23 |
Robin_Watts | With the new code we obey the rectangle set by d1. | 19:25.33 |
kens | You can delete the ones you don't want of course | 19:25.34 |
Robin_Watts | I did a test file where I set a small rectangle with d1, and drew a huge rectangle. | 19:25.51 |
tor8 | Robin_Watts: yeah, but we should still expand that if using stroked text render mode | 19:25.54 |
mvrhel_laptop | ok this network is really pissing me off | 19:26.02 |
Robin_Watts | tor8: no. | 19:26.07 |
| Our old code drew the huge rectangle. | 19:26.12 |
mvrhel_laptop | one of the images appears to be separation | 19:26.13 |
Robin_Watts | Acrobat, and our new code, both draw the small rectangle. | 19:26.21 |
| Thus the d1 rectangle is taken as gospel. | 19:26.36 |
kens | mvrhel_laptop : I don't see any occurrence of 'Separation' in the PostScript file | 19:26.45 |
tor8 | Robin_Watts: but then all stroked text rendering is going to look bad...? | 19:26.52 |
mvrhel_laptop | oh I was looking at the pdf | 19:26.57 |
| let me look at the PS | 19:27.01 |
Robin_Watts | tor8: No. Cos the d1 rectangle should already be expanded to allow for the stroked width. | 19:27.18 |
kens | I was starting from the PostScript, I *thought* they were the same file, but I could easily be mistaken | 19:27.21 |
tor8 | is that true for type1 and truetype fonts as well? | 19:27.30 |
kens | The PostScript I can read easily though :-) | 19:27.31 |
Robin_Watts | The d1 is only type3s. | 19:27.42 |
tor8 | I mean, stroked text rendering is a completely separate step | 19:27.46 |
mvrhel_laptop | kens: so what I am looking for is a deviceN color space and an image | 19:28.16 |
kens | OK that was the wrong test file | 19:28.29 |
| I have a probably better one, and one that I'm confident has 2 BPP images in just about every conceivable colour space, but its probably too complicated | 19:28.59 |
Robin_Watts | Any type3 that happens to use stroked paths (or glyphs) in the glyph definition must have had the d1 rectangle expanded to allow for it. | 19:29.02 |
tor8 | but if you mean that d1 is actually supposed to set a clip rectangle, rather than just list the metrics to be scaled and fudged as needed by the font caching | 19:29.06 |
Robin_Watts | d1 corresponds to a PostScript operator like setcachedevice (or something like that) | 19:29.29 |
| It specifically sets a bbox. | 19:29.36 |
| so it's exactly a clip rectangle. | 19:29.46 |
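As an illustration (a hypothetical CharProc, not taken from chartab.pdf), a Type3 glyph description that strokes right up to its box edge will lose the outer half of the stroke unless the d1 box is padded:

```
% hypothetical Type3 CharProc content stream
1000 0 0 0 750 750 d1   % wx wy llx lly urx ury: glyph box (0,0)-(750,750)
20 w                    % stroke width 20
0 0 m 750 750 l S       % stroke to the box corner: half the stroke width
                        % (and any mitre) falls outside the d1 box and may
                        % be clipped, as with the W corners discussed above
```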
kens | MacWinner : Much bigger test file and a different simpler one on the way. | 19:30.19 |
| I'd strongly suggest you try the simpler one first..... | 19:30.33 |
tor8 | the pdfref17 spec says "The declared bounding box must be correct. [snip] If any marks fall outside this bounding box, the result is unpredictable." | 19:30.47 |
Robin_Watts | yeah. | 19:30.57 |
hyper_ch | kens: my error check work now: FEHLER 2015-02-23 20-29-21 - 25s - sig.pdf | 19:31.27 |
kens | OK | 19:31.33 |
| Robin_Watts : As far as I can see from the customer's PNG files, they are complaining about the anti-aliasing ? | 19:31.53 |
hyper_ch | so OCR still works and I will know that there was an issue with the pdf :) | 19:31.54 |
Robin_Watts | tor8: From the same page: PDF consumer application prints to a PostScript output device. This applies particularly to the operands of the d0 and d1 operators, which in PostScript are named setcharwidth and setcachedevice. For further explanation, see Section 5.7 of the PostScript Language Reference, Third Edition. | 19:31.58 |
| kens: eh? | 19:32.15 |
kens | The MuPDF fonts question | 19:32.21 |
tor8 | okay, given that reading I'm convinced we're not doing the wrong thing | 19:32.24 |
Robin_Watts | Oh, I haven't restarted my mail client since rebooting earlier. | 19:32.26 |
kens | Email just arrived about the 'shadow' round text | 19:32.33 |
Robin_Watts | kens: yeah, I wondered about that. | 19:32.33 |
tor8 | so this bmpcmp, is that with or without the acrobat specific workaround? | 19:32.46 |
Robin_Watts | tor8: This is without the acrobat specific workaround. | 19:33.08 |
| With the workaround, we're left with the 'W'. | 19:33.44 |
tor8 | kens: as suspected, customer cluelessness regarding what anti-aliasing actually is | 19:33.46 |
kens | Yes indeed! | 19:33.55 |
Robin_Watts | kens: Right. I will write a "you are a fool" message back. | 19:34.38 |
kens | :-D | 19:34.46 |
| I could be wrong, but it *looks* like anti-aliasing to me | 19:35.18 |
tor8 | kens: I can't think of anything else they could possibly mean. | 19:35.47 |
kens | Grr, wizard killed by a sergeant with a wand of cold :-( | 19:35.58 |
tor8 | the word "anti-aliasing" is like "graviton particle beams" to laymen... | 19:36.10 |
| kens: nethack? | 19:36.24 |
kens | Yeah :-( | 19:36.30 |
| Was going well, got past Gnomish mines, had the luckstone, +3 studded leather and +3 boots of speed, identify spellbook, blessed lamp, and a bunch of other good stuff. But no cold resistance and no reflection | 19:37.28 |
hyper_ch | thx for the help :) | 19:37.37 |
kens | You're welcome :-) | 19:37.50 |
hyper_ch | DnD? | 19:37.56 |
kens | Nethack | 19:38.01 |
kens | is an old hacker | 19:38.14 |
hyper_ch | me and my roomies from university still play dnd :) | 19:38.52 |
kens | I don't play D&D any more, mostly I play board games these days, but occasionally some GURPS or Call of Cthulhu | 19:39.24 |
| Or The Laundry Files, hoping to have a bash at Primeval sometime | 19:39.55 |
mvrhel_laptop | kens: thanks for the files! | 19:40.13 |
tor8 | kens: that the stross-based CoC-spinoff? | 19:40.18 |
hyper_ch | well, we play D&D through Roll20 website and Teamspeak :) we live too far away... maybe 1-2 per month for an evening... | 19:40.23 |
kens | tor8 yes, it's the game based on the Laundry. I can't recall what the rules system is called | 19:40.45 |
| hyper_ch : I used to play D&D by email :-) | 19:41.02 |
hyper_ch | kens: ok, that's weird ;) | 19:41.20 |
kens | Well, it's slow :-) | 19:41.28 |
| But back then there was no teamspeak | 19:41.42 |
hyper_ch | roll20 is rather nice :) I'm positively surprised | 19:41.50 |
kens | At least we weren't using snail mail..... | 19:41.56 |
mvrhel_laptop | kens: this has what I needed | 19:42.13 |
kens | mvrhel_laptop : the smaller file I hope! | 19:42.27 |
mvrhel_laptop | yes ;) | 19:42.35 |
kens | Thank goodness, the other is a torture test | 19:42.44 |
| The small one was the one I meant to send, I somehow dragged the wrong file, the names are sort of similar | 19:43.27 |
mvrhel_laptop | I am glad to have both though | 19:43.39 |
kens | Oh well, may be useful to you one day | 19:43.52 |
| OK off to watch University Challenge and Only Connect, night all. | 19:45.07 |
Robin_Watts | "Let me know if it still isn't clear and I can have another go at explaining it. Perhaps using bright colors, small words and glove puppets." | 19:56.26 |
mvrhel_laptop | ok. the named color stuff appears to all be working now. | 20:14.29 |
| one final commit to test | 20:14.35 |
| henrys: you there? | 20:20.18 |
| marcosw: are you around? | 20:25.40 |
| I had a question with http://bugs.ghostscript.com/show_bug.cgi?id=695790 | 20:25.46 |
| heading out for lunch | 20:32.41 |
Robin_Watts | mvrhel_laptop: For the logs: haha. I know exactly what the problem with that file is. | 21:09.25 |
| it's exactly the acrobat clipping weirdness that I've been talking to tor8 about. | 19:09.47 |
| The path is a series of zero sized rectangles. | 21:10.25 |
| Accordingly, when the path is filled, nothing should be drawn. | 21:10.43 |
| but acrobat treats zero sized rectangles as 1 pixel wide when they are used for clipping paths. | 21:11.32 |
| The reason that file actually draws anything is that just occasionally the rectangles aren't zero sized. | 21:12.57 |
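The degenerate-rectangle pattern being described would look something like this in a content stream (hypothetical coordinates, sketched only to show the two code paths): filling a zero-area rectangle paints nothing per the spec, but Acrobat reportedly treats the same rectangle as about one pixel wide when it is used as a clipping path:

```postscript
% x y w h re -- here w = h = 0, so the path has zero area
100 100 0 0 re
200 200 0 0 re
f              % fill: zero-area subpaths paint nothing

100 100 0 0 re
W n            % clip: Acrobat renders subsequent marks as if
               % the rectangle were ~1 pixel wide
```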
henrys | last regression run brought down henrysx6, I suspect its days are numbered | 21:17.32 |
| mvrhel_laptop: for the logs I wouldn't fix an xpswrite bug unless we have a user report from the beta release. | 21:50.02 |
| bbiab | 21:50.57 |
mvrhel_laptop | I understand about xpswrite. I will not spend any time on it. marcosw has that bug as a blocker though. but from what Robin_Watts pointed out it is really an odd one | 23:50.32 |
| ok fixed a couple warnings, doing regression test and hopefully that will be it for stefan. I imagine something else might come up but I think that I should be just about there | 23:58.43 |
| Forward 1 day (to 2015/02/24)>>> | |