| <<<Back 1 day (to 2012/04/26) | 2012/04/27 |
tarepanda | I'm trying to figure out how to use the ghostscript .so in C++ in Linux -- are there any examples anywhere? I haven't been able to find any at all and have no experience using .so. :( | 00:51.52 |
ray_work | tarepanda: You should read (at the minimum) the Use.htm API.htm documents | 01:41.53 |
tarepanda | I did. | 01:42.23 |
| The examples in API.htm are for Windows DLLs. | 01:43.11 |
ray_work | for the latest docs, see: http://git.ghostscript.com/?p=web/ghostscript.com.git;a=tree;f=doc/current;h=c62e6d9601d292da60f6228b46403d577459d779;hb=HEAD | 01:43.23 |
| tarepanda: the gsapi entry points are ubiquitous | 01:43.50 |
tarepanda | I understand that, but I don't understand how to use it as a shared object. | 01:44.02 |
ray_work | tarepanda: as gsapi_new_instance | 01:44.09 |
tarepanda | I have no experience using shared objects and don't understand what I need to do to be able to use gsapi_new_instance, that is. | 01:44.48 |
ray_work | tarepanda: well, I don't think we can help if that's where you are at | 01:45.15 |
tarepanda | All right. | 01:45.30 |
ray_work | I suggest a basic tutorial on C/C++ -- maybe a stackoverflow topic will have stuff to help you | 01:46.06 |
tarepanda | I've been searching for the last two days and haven't really found much to help. | 01:48.12 |
ray_work | once you understand how to link your applet with the gs .so, then you can make the calls shown in the API.htm. Usually the simplest invocation uses the equivalent of the command line interface: gsapi_init_with_args | 01:48.27 |
tarepanda | I feel like I just don't know the keywords to search for since I've never used Linux -- all of my experience up to now has been Windows. | 01:48.38 |
| Yeah, I have everything set up except a way to invoke the gsapi methods. | 01:48.56 |
ray_work | tarepanda: five seconds with google turns up: http://www.network-theory.co.uk/docs/gccintro/gccintro_25.html and http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html and http://www.yolinux.com/TUTORIALS/LibraryArchives-StaticAndDynamic.html | 01:50.39 |
tarepanda | All purple links for me. | 01:51.06 |
ray_work | tarepanda: did you look at: http://ehuss.net/shared/ | 01:53.59 |
| The simple way to use a shared library is to just link it like a normal library archive. For example: | 01:55.48 |
| gcc -o my_application my_application.o -lmylib | 01:55.50 |
| The -lmylib option tells the linker, "go look for libmylib.so" for the library. It will also try other variants on the filename such as libmylib.a and mylib.a, etc. | 01:55.51 |
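For the record, a minimal sketch of what tarepanda was after: calling the gsapi entry points from C on Linux, linked against the Ghostscript shared object with `gcc -o gs_demo gs_demo.c -lgs`. The input/output file names and the output device are placeholders; the gsapi calls themselves are the ones documented in API.htm.

```c
/* Minimal sketch: drive libgs via the gsapi entry points.
 * Build: gcc -o gs_demo gs_demo.c -lgs
 * Requires the Ghostscript development headers (ghostscript/iapi.h). */
#include <stdio.h>
#include <ghostscript/iapi.h>

int main(void)
{
    void *inst = NULL;
    /* argv[0] is a dummy program name; the rest is the usual gs command line.
     * in.pdf / out.png are placeholder file names. */
    char *argv[] = { "gs_demo", "-dNOPAUSE", "-dBATCH", "-dSAFER",
                     "-sDEVICE=pngalpha", "-sOutputFile=out.png", "in.pdf" };
    int code = gsapi_new_instance(&inst, NULL);
    if (code < 0) {
        fprintf(stderr, "gsapi_new_instance failed: %d\n", code);
        return 1;
    }
    code = gsapi_init_with_args(inst, 7, argv);
    gsapi_exit(inst);
    gsapi_delete_instance(inst);
    return code < 0 ? 1 : 0;
}
```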
| hopefully tarepanda didn't just give up and will know to check the irclogs :-/ (though I worry that may be too complex as well) | 01:56.52 |
Robin_Watts | paulgardiner: ping | 11:36.54 |
paulgardiner | pong | 11:37.08 |
Robin_Watts | So, do you want to be talked through using the cluster? | 11:37.26 |
tor8 | *sigh* I hate iOS | 11:37.41 |
Robin_Watts | tor8: mmm | 11:37.56 |
| paulgardiner: Or we can wait until you have a need to use it. | 11:38.19 |
tor8 | trying to get mupdf to register itself as a handler for pdf and xps files, with no luck and zero useful error messages | 11:38.33 |
paulgardiner | Robin_Watts: I was just thinking exactly that. It would be less abstract if I actually had something I wanted to commit. The bits of the forms work need a little tidying first. | 11:39.18 |
Robin_Watts | paulgardiner: OK, we'll wait. But know that it's there and ready to be used. | 11:39.47 |
paulgardiner | Ok thanks | 11:40.00 |
| Actually, I have a few quick questions that I could ask now. | 11:40.33 |
Robin_Watts | go for it. | 11:40.40 |
paulgardiner | You've mentioned a "cluster push", I think (maybe I misheard). Is that like a git push with checks? | 11:41.53 |
Robin_Watts | No. | 11:42.03 |
| http://ghostscript.com/regression/ | 11:42.19 |
| Bookmark that link. | 11:42.24 |
| That's the dashboard - it shows what the cluster is doing. | 11:42.40 |
| Let me run a job... | 11:42.57 |
| The top left set of things are the list of machines in the cluster, and what they are doing. | 11:43.38 |
| The list underneath is the job queue. | 11:43.51 |
| There are 2 types of jobs; user jobs, and git jobs. | 11:44.20 |
| Whenever we do a commit (to the gs tree or to the mupdf tree), a job is scheduled. | 11:44.48 |
paulgardiner | I see your gx job | 11:44.48 |
Robin_Watts | Right. It's a simple traffic light system; the jobs that have run (and are now stopped) are shown in red. | 11:45.21 |
| the job that is running is shown in green. | 11:45.33 |
| any queued jobs waiting to start are shown in yellow. | 11:45.45 |
| New jobs at the top, old jobs at the bottom | 11:45.51 |
| User jobs never go into the 'red' section at the bottom. | 11:46.12 |
| You can see the last job for each user by following the links on the right hand side. | 11:46.37 |
paulgardiner | All making sense so far. | 11:47.11 |
Robin_Watts | A git job builds the appropriate product (gs or mupdf or pcl or whatever) and runs a large number of test files through it, collecting the md5 sums. | 11:47.48 |
| We compare those to the previous git jobs results, and hence get a list of files that have changed. | 11:48.10 |
| If you click on the "passed" link next to the "ghostpdl dd90a7 Memento tweaks..." thing, you'll see an example of the results page. | 11:49.00 |
| Sadly, this shows up a problem quite nicely; we have some files that give different results when nothing has really changed. We refer to these as 'indeterminisms' | 11:49.52 |
paulgardiner | Does one need to check those by eye to see if they are significant and more than just an indeterminism? | 11:51.12 |
Robin_Watts | After a while you get to know what's probably an indeterminism or not. | 11:51.33 |
| but... click on the 'deltas' link. | 11:51.41 |
| Then you'll see a magically confusing diagram. | 11:52.05 |
paulgardiner | Eek! | 11:52.11 |
Robin_Watts | This is my invention, so I apologise in advance. | 11:52.26 |
| I run through all the files tested finding the ones that changed results between the revisions in question. | 11:52.58 |
| and I display the state for the last 20 revisions or so on each line. | 11:53.24 |
| Red boxes are where we got an error code (SEGV or a timeout etc) | 11:53.42 |
paulgardiner | So that's consecutive revisions across the top | 11:53.48 |
Robin_Watts | It is. | 11:53.52 |
| The first non erroring state is drawn in green, and numbered 0. | 11:54.10 |
| Each subsequent non erroring state is drawn in orange and given a new number. | 11:54.36 |
| So, you can see the top file is flipping backwards and forwards between giving an error, and giving the same result. | 11:55.08 |
| so that's probably a file which sometimes times out, but when it doesn't, it gives a consistent result. | 11:55.32 |
paulgardiner | Ah, so you can see when you cured a problem you caused earlier even though you didn't fix everything because of repeated numbers? | 11:55.36 |
| Oh right. I forgot about indeterminisms | 11:56.04 |
Robin_Watts | (sorry, phone went) | 11:56.40 |
| if you look further down, you can see things with lots of orange in them. | 11:56.58 |
| Those ones show the job gives different results almost every time it's run. | 11:57.28 |
| Thus it's truly indeterminate - so it's not something my commit broke. | 11:57.44 |
| If you see a line that's green all the way across and just changes to orange in the last column, then that's probably a job that I've really broken (or maybe really fixed) | 11:58.31 |
| You get the idea? | 11:58.40 |
paulgardiner | So more than one test has been performed on a single revision? | 11:58.40 |
Robin_Watts | No... | 11:58.51 |
| If the job has always been well behaved and never given errors, and given stable results, then it will appear as green all the way across. | 11:59.31 |
paulgardiner | It's likely that something that is flipping every change would flip even if there wasn't a change? | 11:59.37 |
Robin_Watts | and if I've broken it with my commit, it will differ in just the last column. | 11:59.53 |
kens | Lunch, back in a bit | 12:00.10 |
Robin_Watts | Stuff that gives essentially random results (perhaps by relying on uninitialised memory) will change most revisions, yes. | 12:00.50 |
paulgardiner | So has orange been chosen for some by human interaction or automatically? | 12:01.25 |
Robin_Watts | Orange = Different MD5 sum than the green one. | 12:01.56 |
paulgardiner | Red was SEGV | 12:02.17 |
| ? | 12:02.19 |
Robin_Watts | Red is error. (SEGV or timeout etc) | 12:02.35 |
| Basically we run a command to render a test file to a bitmap. | 12:03.06 |
| If that fails to complete, we have a RED square. | 12:03.15 |
| If that completes, then we md5 the bitmap. | 12:03.29 |
| and I give each unique md5 sum a unique number on a line. | 12:04.07 |
paulgardiner | Ah right. Like it. | 12:04.19 |
Robin_Watts | For ease of visuals, I give 0 green, and all others amber. | 12:04.21 |
| hence a solid line of green is "well behaved file, that hasn't changed" | 12:04.40 |
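The md5-numbering scheme Robin describes can be sketched as follows (a hypothetical reconstruction, not the actual cluster code): for one test file, each revision's result is either an error (NULL here, shown red) or an md5 sum; the first distinct sum gets 0 (green) and each later distinct sum gets the next number (orange).

```c
#include <assert.h>
#include <string.h>

#define MAX_REVS 32

/* Map each revision's result for one file to a colour number:
 * -1 = error (red), 0 = first md5 seen (green),
 * 1,2,... = each subsequently-seen distinct md5 (orange). */
void colour_line(const char *md5s[], int n, int out[])
{
    const char *seen[MAX_REVS];
    int nseen = 0;

    for (int i = 0; i < n; i++) {
        if (md5s[i] == NULL) {        /* render failed: SEGV, timeout etc */
            out[i] = -1;
            continue;
        }
        int j;
        for (j = 0; j < nseen; j++)   /* have we seen this sum before? */
            if (strcmp(seen[j], md5s[i]) == 0)
                break;
        if (j == nseen)
            seen[nseen++] = md5s[i];  /* new distinct result: next number */
        out[i] = j;
    }
}
```

A line that is all zeros renders as solid green ("well behaved file, that hasn't changed"); a file flipping between an error and one stable result alternates -1 and 0, matching the pattern described above.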
paulgardiner | And can you redefine green if you fix a problem that has never before been correct? | 12:04.59 |
Robin_Watts | paulgardiner: green is not 'defined' to be anything. | 12:05.16 |
| green is just "the first md5 sum I meet while calculating a line of the deltas page" | 12:05.35 |
| If someone made a genuine change that affected a file 10 revisions ago, I'd get 10 blocks of green, then 10 blocks of orange. | 12:06.16 |
| even though the orange blocks were the new 'correct' ones. | 12:06.31 |
paulgardiner | Ok. Got it. | 12:06.43 |
Robin_Watts | The cluster itself (independent of my deltas page) treats "the last md5 sum I got when running this file through a git commit" as being 'correct'. | 12:07.15 |
| OK, so back to the dashboard. | 12:07.35 |
| click on 'logs' | 12:07.41 |
paulgardiner | This tells us about changes. We then have to decide which might be good or bad by other means | 12:07.43 |
Robin_Watts | right. | 12:07.52 |
| The hope is that by the time you've committed to git, you're already confident that you haven't broken anything. | 12:08.14 |
| logs just opens (and closes) a set of links that let you get to the different logs from each machine. | 12:08.52 |
| Sometimes it's useful to see what else appeared on stdout, or the exact command that was run, or the build output etc. | 12:09.14 |
| warnings opens another set of links to the different warnings given during building on different machines (different versions of gcc etc) | 12:10.01 |
| So, that's all the 'Job queue' explained - happy? | 12:10.25 |
| tor8: http://vark.mine.nu/jogu.45 | 12:10.59 |
paulgardiner | I don't seem to be able to open the warnings. I get a relative url followed by "was not on this server" | 12:11.48 |
Robin_Watts | oh, I'll look into that. | 12:12.19 |
tor8 | Robin_Watts: thanks. I've just managed to get one type recognized. apple's docs for this are spread all over the place... | 12:12.30 |
Robin_Watts | those links are to marcosw's machine, so may be broken. | 12:12.46 |
| OK. In the User link section... click on 'robin' | 12:13.07 |
paulgardiner | How do you start a user job? And can user jobs obtain the same results as the git jobs? | 12:13.30 |
Robin_Watts | You start a job by clusterpushing it. I'll come to that. | 12:13.56 |
tor8 | Robin_Watts: those docs you linked to all miss one important section that has to be in the Info.plist... | 12:14.01 |
Robin_Watts | tor8: I'm just passing on a link from a friend - I haven't read them myself. | 12:14.26 |
tor8 | Robin_Watts: yeah. I've wasted hours on this already :( | 12:14.39 |
Robin_Watts | I tested a broken tree, hence all the errors in the report. | 12:14.50 |
| Normally the user report should look exactly like the normal jobs report (see mvrhel's report for example) | 12:15.19 |
| The '+' sign on the far right opens/closes the same logging links as before. | 12:15.43 |
| The one difference between the user report and the git report is that the user report tells you not only the difference between your current results and the trunk, but also the current results and your previous user results. | 12:16.44 |
| You can't get deltas for a user report, but you can get a 'bmpcmp'. | 12:17.13 |
| A bmpcmp is an html page that contains excerpts from the jobs that failed; bitmaps are grouped in 3's: candidate, reference, diff. | 12:18.18 |
| so you can see what differences your commit made. | 12:18.33 |
| but you can't see that yet because we need to get you a password for that section of the website. | 12:18.50 |
| So... how do you trigger jobs? | 12:19.00 |
| We use casper, our server on the net (aka ghostscript.com) as the "cluster master". | 12:19.38 |
| We 'clusterpush' the code to be tested to that server, and that passes it out around to all the cluster nodes. They all build it. Then the master parcels out jobs and reads back results. | 12:20.17 |
| It's a clever system that leverages existing unix utilities and scripting. Or it's a hellish mess of rsync, perl, cron, tail, join etc depending on the way you want to look at it. | 12:21.44 |
| :) | 12:21.58 |
| It used to be that you had to clusterpush from linux. | 12:22.51 |
paulgardiner | I hope I never need to think of it in either way, but rather as a mysterious command that does useful stuff in a way I don't need to understand. :-) | 12:23.00 |
Robin_Watts | but it was fairly simple to make it so that you can clusterpush from within cygwin. | 12:23.23 |
| but there are lots of people that have trouble with cygwin, so I did a crufty hack (on top of the other crufty hacks) to make it so that you can clusterpush using git. | 12:23.55 |
| So I do: "git cluster mupdf" and it runs my job. Then I do "git cluster bmpcmp" and it generates me a bmpcmp from the results. | 12:24.54 |
| And we'll worry about setting that up for you when you have a need for it. | 12:25.11 |
| At the moment, the only testing we do in the cluster is to farm out commands that run and produce an md5 sum. | 12:25.59 |
| With the form filling/javascript stuff we may want to do more funky things (like the ATH scripts in ATS) | 12:26.38 |
| but that should be doable too without too many changes. | 12:27.34 |
| If we update mudraw to run javascript (or have another wrapper exe) then we can invoke that. | 12:28.47 |
| and feed it some pdf files with test javascript in etc. | 12:29.08 |
paulgardiner | Initially with the forms work, my main concern would be to ensure I wasn't breaking non-forms behaviour. | 12:29.23 |
Robin_Watts | Right, well the cluster will certainly give you that. | 12:29.38 |
| Very hippocratic. | 12:29.57 |
paulgardiner | Do no harm | 12:30.13 |
Robin_Watts | lunches - I think that's basically the cluster covered (apart from the mechanics of getting you set up on it) | 12:30.46 |
paulgardiner | Robin_Watts: Yeah. I have the basic idea now. Thanks. | 12:31.10 |
Robin_Watts | lunch delayed :( | 12:37.58 |
| I need a ray_laptop or an mvrhel. Or someone that speaks clist. | 15:36.00 |
ray_laptop | hi, Robin_Watts | 16:04.29 |
| are you looking at that clist SEGV ? | 16:04.50 |
Robin_Watts | ray_laptop: The one I gave you yesterday? no. | 16:15.08 |
| I'm looking at another one, but give me a couple of minutes, and I'll have a question or two for you if you don't mind. | 16:15.35 |
ray_laptop | kens: I have a really strange problem with ps2write that _doesn't_ happen with pdfwrite. I made a fairly simple change in pdf_draw.ps to skip doing the 'settransfer' if the /TR was already set to that value. With ps2write the SECOND page has missing bullets (ADOBE1-4.pdf) | 16:16.04 |
| Robin_Watts: go ahead with your (new) issue | 16:16.43 |
kens | have you compared the resulting files ? | 16:16.46 |
Robin_Watts | ray_laptop: Michael added some stuff to allow copy_planes to go through the clist. | 16:17.28 |
| I've found various problems with it (I think he was maybe only testing the 1bpp case, and possibly only the 1 plane case too) | 16:17.49 |
| but I've got most of them solved, I think. | 16:18.02 |
| (small tweaks was all that was required) | 16:18.10 |
ray_laptop | kens: yes. There are some /TR /Identity lines missing in a couple of ExtGState dicts on the 'bad' one | 16:18.36 |
Robin_Watts | but I'm hitting a problem now, where I'm writing a load of data into the clist, and when I come to read it, the data isn't all there. | 16:18.48 |
kens | I can't see why that would change text output... | 16:18.58 |
Robin_Watts | I suspect it may be that the data *is* in the clist, but just not enough has been read back into the buffer. | 16:19.17 |
| Can I walk through some code with you please? | 16:19.41 |
ray_laptop | Robin_Watts: most places that read large chunks of data do it in a loop | 16:19.57 |
Robin_Watts | ray_laptop: I suspect we need a chunk of code to 'refill the buffer' (or else the initial calculation of how full the buffer needs to be is wrong) | 16:20.48 |
ray_laptop | Robin_Watts: since there is a limit to how much data will be read or written. | 16:21.01 |
Robin_Watts | In gxclrast.c line 931 (for me at least): | 16:21.17 |
| case cmd_op_copy_mono_planes >> 4: | 16:21.24 |
ray_laptop | Robin_Watts: OK, I'm there | 16:22.03 |
Robin_Watts | We read plane_height from the list; if it's 0 then it's a copy_mono action. If it's non-zero, then it's a copy_planes | 16:22.05 |
ray_laptop | yes. | 16:22.46 |
Robin_Watts | (This is something that is wrong - we shouldn't write 'plane_height' into the buffer, as the plane_height on writing is unrelated to the plane_height on reading. It should just be a bool) | 16:22.50 |
| but let's not worry about that for now. | 16:22.57 |
| so we go down to the copy: label. | 16:23.15 |
| op&8 == 0, so into the second half of the if. | 16:23.29 |
ray_laptop | right, the 'else' | 16:23.53 |
Robin_Watts | And in there there is a for(pln =0; pln < planes; pln++) | 16:23.59 |
| the 'compression' value is sent as the lower 4 bits of the op for the first plane. | 16:24.27 |
| subsequent planes send a byte with the compression value in. | 16:24.38 |
| hence the if (pln) compression = *cbp++; | 16:24.50 |
| And that's where I hit a problem. I get to about pln == 12, and I find that I'm reading off the end of the buffer, so I get an invalid compression value. | 16:25.35 |
| but we probably should follow through on the pln == 0 case to understand what's going on ? | 16:25.53 |
ray_laptop | Robin_Watts: are these fairly small tiles ? | 16:26.20 |
Robin_Watts | 119x1 (8bpp) | 16:26.38 |
| with 14 planes. | 16:26.48 |
| so the first plane takes 128 bytes, subsequent ones take 120 each. | 16:27.14 |
| compression == 0 always for this example. | 16:28.15 |
ray_laptop | OK, fairly small. There is a check for (planes * bytes > cbuf_size) that throws an error in debug mode | 16:28.16 |
Robin_Watts | ray_laptop: Indeed, and that's NOT being triggered (though I have altered that test here, because it needs to be planes * height * raster for planes > 0 rather than planes * bytes). | 16:29.01 |
ray_laptop | but it may be that you should have a call to top_up_cbuf | 16:29.11 |
Robin_Watts | ray_laptop: Right. How much data are we guaranteed to have in the buffer on entry ? | 16:29.38 |
ray_laptop | just because all the planes _could_ fit in the cbuf, doesn't mean that they are all in the cbuf (unless you top_up) | 16:29.52 |
Robin_Watts | Right. | 16:30.01 |
| Presumably there is code somewhere that ensures there is enough data in here for a copy_mono operation? | 16:30.24 |
| (i.e. before michael fiddled with it, the code was correct?) | 16:30.39 |
| Or maybe it's that cmd_read_short_bits and cmd_read do the top up thing themselves, and we are merely guaranteed enough bytes in the clist on entry to read the header? | 16:31.53 |
| No, cmd_read_short_bits doesn't do the top up... | 16:32.39 |
| Unless sgets does the topup? | 16:33.02 |
ray_laptop | Robin_Watts: it looks like 'cmd_read_data' does some fiddling with the cbuf that may include a 'sgets' | 16:33.30 |
Robin_Watts | Yes, maybe that's it. | 16:33.33 |
ray_laptop | Robin_Watts: a top_up moves the data remaining in the cbuf to the front of the buffer area, then backfills with sgets | 16:35.08 |
| when you get to just before the for (pln... can you check what (pcb->end - cbp) is? Is it big enough for all the planes of data? | 16:37.00 |
Robin_Watts | I'm sure it's not. | 16:37.12 |
| But the thing that confuses me is that I am (presumably) calling cmd_read_short_bits and getting a new cbp back. I would have expected that cbp should never be more than 1 byte after the end of the data that was read from the file. | 16:38.48 |
| Time for some more debugging. Thanks, ray. | 16:39.01 |
ray_laptop | Robin_Watts: cmd_read_data may read from the stream (the sgets) directly into the place you tell it to put the data, but in that case it returns with the pointer set to the end of the cbuf area. If it does, you need to top_up_cbuf | 16:39.11 |
Robin_Watts | Ah! | 16:39.24 |
| That's probably it. | 16:39.29 |
ray_laptop | if cmd_read_data reads from the stream, it effectively returns an empty buffer | 16:39.50 |
| Robin_Watts: we probably could put the top_up_cbuf call into cmd_read_data for whenever we consume the entire buffer (every time after the sgets, and conditionally before the other 'return') | 16:44.25 |
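The top_up_cbuf pattern ray describes (move the unread tail to the front of the buffer, then backfill from the stream) can be sketched like this. Everything here is a hypothetical stand-in: the `reader` struct and the in-memory source play the roles of the real clist command buffer and `sgets`.

```c
#include <string.h>
#include <stddef.h>

#define WIN 8   /* toy window size; the real cbuf is much larger */

typedef struct {
    const unsigned char *src;       /* in-memory "stream", standing in for sgets() */
    size_t srclen, srcpos;
    unsigned char buf[WIN];         /* the fixed command buffer window */
    unsigned char *ptr, *end;       /* unread data lives in [ptr, end) */
} reader;

static void reader_init(reader *r, const unsigned char *src, size_t len)
{
    r->src = src; r->srclen = len; r->srcpos = 0;
    r->ptr = r->end = r->buf;       /* start empty */
}

/* Move the remaining data to the front of the buffer, then backfill. */
static void top_up(reader *r)
{
    size_t left  = (size_t)(r->end - r->ptr);
    memmove(r->buf, r->ptr, left);              /* keep the unread tail */
    size_t space = WIN - left;
    size_t avail = r->srclen - r->srcpos;
    size_t n = space < avail ? space : avail;
    memcpy(r->buf + left, r->src + r->srcpos, n);  /* the "sgets" */
    r->srcpos += n;
    r->ptr = r->buf;
    r->end = r->buf + left + n;
}

/* Consume 'need' bytes (need <= WIN assumed), topping up if short. */
static const unsigned char *read_bytes(reader *r, size_t need)
{
    if ((size_t)(r->end - r->ptr) < need)
        top_up(r);
    const unsigned char *p = r->ptr;
    r->ptr += need;
    return p;
}
```

This mirrors the bug pattern in the log: a caller that forgets the top-up step reads whatever stale bytes follow `ptr`, exactly the "invalid compression value" symptom.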
Robin_Watts | ray_laptop: We could do, but I've got past that problem now, thanks. | 16:44.50 |
ray_laptop | Robin_Watts: OK. | 16:44.56 |
Robin_Watts | I'm now hitting the case where the debug code goes off saying the bitmap size is larger than the buffer. | 16:45.11 |
| gxclrast.c line 997ish | 16:46.44 |
| Why does the bitmap need to fit into the cbuf ? | 16:47.06 |
| We copy from the clist into our data buffer 'through' cbuf, but we don't actually need to hold the whole bitmap in cbuf at any point do we ? | 16:48.04 |
ray_laptop | Robin_Watts: no, we shouldn't | 16:50.24 |
Robin_Watts | so is that test bogus? | 16:50.43 |
ray_laptop | Robin_Watts: (sorry phone) Yes, I think so | 16:53.29 |
Robin_Watts | OK. What I DO need is that data_bits_size should be at least: planes * plane_height * raster bytes in size. | 16:54.08 |
| where plane_height = rectangle height. | 16:54.22 |
ray_laptop | Robin_Watts: as long as the prefix up to where we read the data is in the buffer, it should be fine | 16:54.30 |
Robin_Watts | I'm typically seeing planes=14, height = 1, raster = 440 | 16:55.17 |
| which requires 6160 bytes of data_bits_size, but data_bits_size is 4096. | 16:55.35 |
| Presumably in here there must be some mechanism for things to ensure that data_bits_size is large enough? | 16:56.07 |
ray_laptop | Robin_Watts: OK, so the total data_bits_size should match up with the amount for all planes. That makes sense | 16:56.11 |
| Robin_Watts: data_bits_size is the same as cbuf_size | 16:56.40 |
Robin_Watts | data_bits != cbuf though, right ? | 16:56.53 |
kens | is heading off now, will be back on Monday. Look forward to seeing everyone on Tuesday. | 16:57.20 |
Robin_Watts | Night kens. Have a good weekend. | 16:57.30 |
ray_laptop | Robin_Watts: but I don't think you need that (unless it is needed for the decompression case) | 16:57.35 |
| bye, kens.. See you Tue | 16:57.49 |
Robin_Watts | ray_laptop: So... the position I'm in now is that data_bits_size = 4096, but I need it to be larger. Otherwise I haven't got enough space in there to reconstruct the bitmap to call copy_planes with. | 17:00.41 |
ray_laptop | Robin_Watts: it looks like (if I'm reading it correctly) that when we are decompressing, we only do so from the buffer, NOT the underlying 's' (clist data stream) | 17:01.13 |
Robin_Watts | Either there must be a mechanism for making it larger, or the clist writer must be expected to restrict the size of the bitmaps it sends. | 17:01.33 |
| The if (compression) stuff? | 17:01.53 |
| yes, I see what you mean. | 17:02.45 |
| So the compression case requires that all the compressed data fits in the cbuf. | 17:03.01 |
| which (given that compression is only used if the data is really smaller) makes the debug test make sense. | 17:03.47 |
ray_laptop | Robin_Watts: it looks like, yes | 17:03.52 |
Robin_Watts | (if the decompressed image fits in the cbuf size, then the compressed version must too) | 17:04.18 |
| OK. But this still gets me back to the same basic problem. My bitmap doesn't fit in the size available to me. | 17:04.40 |
| Is the data_bits_size fixed? | 17:05.04 |
| yes, it's a #define. | 17:05.23 |
ray_laptop | Robin_Watts: data_bits is just a constant (#define cbuf_size) | 17:05.28 |
Robin_Watts | So presumably at the writer side, we need to ensure that the data is split up so that it fits. | 17:05.49 |
ray_laptop | Robin_Watts: but for the multiple planes case, since you are in a loop, as long as each plane fits, you are OK. | 17:06.31 |
| Robin_Watts: on the writer side (iirc) it breaks up copy_ operations to less height if needed | 17:07.09 |
Robin_Watts | ray_laptop: No. | 17:07.38 |
| I need all the planes to fit. | 17:07.46 |
| because we then do a copy_planes() thing with all the planes at once. | 17:07.58 |
| so I need to modify the split up operation. | 17:08.10 |
| (Apparently Henry and Sabrina have just arrived at the pub. Let's see how long Henry takes to get on line...) | 17:08.39 |
mvrhel | hi Robin_Watts | 17:09.31 |
ray_laptop | Robin_Watts: you would need to allocate a buffer big enough to hold all of the planes' data; data_bits is only data_bits_size | 17:10.14 |
Robin_Watts | I can split up the writing so it writes less than that at a time. | 17:10.52 |
| mvrhel: Hi. | 17:10.55 |
ray_laptop | if you need all the planes, you need to allocate something big enough, and then you don't have to modify the writer | 17:11.07 |
mvrhel | Robin_Watts: so I take it that you are pushing forward? | 17:11.38 |
henrys | Robin_Watts: I've made it. Tried to text Helen but I don't know if it went through. | 17:11.57 |
ray_laptop | sure didn't take henrys long to get online | 17:12.17 |
| henrys: how's the trip so far ? | 17:12.32 |
Robin_Watts | henrys: Yup. Arnie phoned to tell me you'd arrived. | 17:13.46 |
ray_laptop | Robin_Watts: since you need all the planes in memory for the copy_planes, if the total is bigger than data_buf_size, you'll need a new allocation | 17:14.13 |
Robin_Watts | How are you and Sabrina doing? Do you want some time before I come get you? | 17:14.22 |
ray_laptop | thinks Robin_Watts is hoping henry will say "come right now" so he can stop working ;-) | 17:14.53 |
Robin_Watts | ray_laptop: I can send the first 'n' lines of each plane in one hit, then the next 'n' lines etc. | 17:15.02 |
mvrhel | Robin_Watts: based on the IRC logs, I am guessing you don't want me working on the copy_planes stuff | 17:15.14 |
Robin_Watts | mvrhel: I'm almost there... | 17:15.37 |
mvrhel | great! | 17:15.43 |
ray_laptop | Robin_Watts: yes, but I thought the splitting was already done in the writer | 17:15.48 |
Robin_Watts | ray_laptop: It is, but the logic probably hasn't been updated for the multiple plane case. | 17:16.10 |
ray_laptop | it may be that the calculation of the acceptable height is wrong for multiple planes | 17:16.17 |
| Robin_Watts: the invariant expected is line 982: | 17:17.29 |
| /* copy_mono and copy_color/alpha */ | 17:17.31 |
| /* ensure that the bits will fit in a single buffer, */ | 17:17.33 |
henrys | okay found him, might not have good internet here | 17:19.25 |
Robin_Watts | ray_laptop: The splitting logic is done when a cmd_put_bits won't fit into a cbuf. | 17:20.02 |
| and I want to make many cmd_put_bits fit into a cbuf. | 17:20.27 |
| so that is indeed the problem. | 17:20.37 |
| And I think the fix isn't *too* painful. | 17:20.45 |
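The writer-side split being discussed might look roughly like this (a hypothetical sketch, not gxclist's actual code): choose a band height so that every band's data for *all* planes fits in one command buffer, since the reader does a single copy_planes() per band.

```c
#include <assert.h>

/* Split a copy_planes write into horizontal bands such that
 * planes * band_height * raster <= cbuf_size for every band.
 * Returns the number of bands written into band_h[], or -1 if even
 * a single row of all planes won't fit (needs a bigger buffer, or a
 * different fallback). */
static int split_planes(int cbuf_size, int planes, int raster, int height,
                        int band_h[], int max_bands)
{
    int h = cbuf_size / (planes * raster);   /* rows per band that fit */
    if (h <= 0)
        return -1;
    int n = 0;
    for (int y = 0; y < height; y += h) {
        if (n >= max_bands)
            return -1;
        band_h[n++] = (height - y < h) ? height - y : h;
    }
    return n;
}
```

Note that the 14-plane, raster-440, height-1 case from the log (6160 bytes against a 4096-byte buffer) is exactly the case where no band height helps, which is why the split logic needed updating for multiple planes rather than just reusing the single-plane height calculation.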
| henrys: I'll check for reception on my phone when I pick you up - the dongle may be a better bet. | 17:22.38 |
henrys | Ahh I'll be fine - I don't need no stinkin' internet... | 17:24.49 |
Robin_Watts | Has your wife just explained that to you? I get told that at the start of each holiday, then I get bugged to let her on facebook. | 17:25.35 |
henrys | she does say that. | 17:26.56 |
Robin_Watts | mvrhel: I'm going to have to stop soon. | 17:39.26 |
| Do you want me to mail you what I have, or are you happy to wait until... monday? possibly the meeting ? | 17:39.58 |
| Got it! | 17:45.26 |
| mvrhel: http://ghostscript.com/~robin/0001-Michaels-patch-copy-planes-fixes.patch | 17:48.24 |
| That's your patch + the copy_planes stuff. | 17:48.55 |
| gotta run. | 17:49.06 |
mvrhel | Robin_Watts: are you gone? | 18:12.00 |
| henrys`: are you still at the pub? | 18:12.20 |
| or maybe that should be henrys ^^ | 18:13.25 |
| never sure about the accent mark | 18:13.43 |
| Robin_Watts: so is this patch complete then? | 18:53.52 |
Yoshi47 | I am trying to use mupdf to open a pdf directly hosted on the web, but it doesn't seem to work, does any one know if its possible or not? | 19:28.46 |
| mupdf .9 | 19:28.59 |
| maybe 1 has it | 19:29.02 |
sebras | Yoshi47: no, mupdf does not support http. | 20:33.45 |
| Yoshi47: you have to download the pdf-file to your computer first. | 20:33.59 |
Robin_Watts | mvrhel: That patch should hopefully solve the copy_planes and clist issues, yes. | 20:49.33 |
| I may tweak the code a bit, but it should be functionally complete. | 20:49.51 |
| mvrhel: henrys is back at the pub now. | 20:50.27 |
| Yoshi47: We could put an http fetcher into the mupdf app, but as mupdf doesn't currently support opening PDFs that are still downloading, you'd still have to wait for it all to arrive first. | 20:51.27 |
| So there would be no benefit. | 20:51.37 |
| Actually, we should put linearised pdf support on the agenda. | 20:51.51 |
mvrhel | Robin_Watts: ok thanks | 20:51.58 |
| I am frantically trying to get packed and ready as I will have no time tomorrow | 20:52.43 |
Robin_Watts | I haven't clusterpushed it (ran out of time). Have you ? | 20:52.44 |
mvrhel | I have not even applied it | 20:52.52 |
| I have to be careful | 20:52.55 |
Robin_Watts | I'll clusterpush it now then. | 20:53.06 |
mvrhel | since I did a change since I gave you my patch | 20:53.08 |
Robin_Watts | Ah. | 20:53.14 |
| mvrhel: If I was you, I'd make a new branch rooted at 1 commit back from your current head. | 20:53.50 |
| then apply my patch onto that. | 20:53.55 |
| Then you can diff it. | 20:54.00 |
mvrhel | well, I have not commited my change yet | 20:54.37 |
| and I think the files are disjoint from your changes | 20:55.12 |
Robin_Watts | You may be able to stash, reset, apply mine and stash pop. | 20:55.14 |
mvrhel | yes | 20:55.21 |
| I may copy the files too just to be safe... | 20:55.32 |
Robin_Watts | but going via another branch is probably easier to back out of if it all goes wrong. | 20:55.36 |
mvrhel | ok. I will fool with that later tonight | 20:57.08 |
| if you cluster push and let me know that all is well with your change that would be great | 20:57.23 |
Robin_Watts | I've cluster pushed it, and queued a bmpcmp. | 20:57.29 |
| (so Murphy's law says the compile will now fail :) ) | 20:57.46 |
mvrhel | ah great | 20:57.46 |
Robin_Watts | If I don't speak to you before, have a safe flight. | 20:58.03 |
mvrhel | thanks | 20:58.09 |
| trying to make sure I have everything I need on my laptop now | 20:58.45 |
| oh need to get power adapter too | 21:00.18 |
Robin_Watts | If you have a compact US extension cord, bring it, cos that way we can go from UK -> US and lots of people can plug in (albeit at 240V) | 21:02.04 |
mvrhel | oh good point | 21:02.25 |
Robin_Watts | I think alexcher said he'd bring one, but we may need 2. | 21:02.25 |
mvrhel | trying to remember which adapter is for the uk | 21:02.41 |
Robin_Watts | 3 rectangular pins in a triangle formation. | 21:02.58 |
| (Well done robin, what other formation could they be in?) | 21:03.09 |
mvrhel | the one I have seems to be heavy duty | 21:03.51 |
Robin_Watts | UK plugs are pretty heavy duty :) | 21:04.07 |
mvrhel | with a 13amp fuse for some reason | 21:04.12 |
Robin_Watts | They don't pull out by accident like the US ones. | 21:04.23 |
| 13amps is the rated limit for mains sockets here. | 21:04.37 |
mvrhel | ok yes this thing looks like it would run a washing machine in the us | 21:04.46 |
Robin_Watts | You have a different sort of plug for washing machines? | 21:05.11 |
mvrhel | well they are just big. actually I have a separate circuit here for the dryer | 21:05.36 |
| some large appliances here in the us are 220V | 21:05.55 |
Robin_Watts | Ah, in the UK everything goes in the same plugs (except for electric cookers which are wired into their own circuit cos they get 30amps) | 21:06.15 |
mvrhel | ok | 21:06.33 |
Robin_Watts | anyhow, the tests are running, and I'll leave you to pack. night. | 21:06.52 |
mvrhel | thanks! | 21:07.30 |
| strange bmpcmp is having to restart | 21:59.12 |
| cool just need to add my patch and rerun | 22:16.07 |
Robin_Watts | mvrhel: Excellent. Down to 2 SEGVs - and those are the ones you've fixed? | 22:51.28 |
| mvrhel: http://www.theregister.co.uk/2012/04/27/james_may_ar_app/ <- If you go to the science museum, download the app before you go, cos of the data charges here in the UK. | 22:58.47 |
mvrhel | Robin_Watts: no those happen in the trunk too I believe | 23:35.24 |
| Robin_Watts: thanks for the tip | 23:35.41 |
Robin_Watts | mvrhel: The ones I've fixed happen on the trunk too. | 23:41.16 |
| Oh, no, sorry. | 23:41.42 |
| Bug692217 is the one I simplified and passed to ray. | 23:42.22 |
sebras | Robin_Watts: wow... http://cherokee.mirror.garr.it/mirrors/AppuntiLinux/a2/a2.pdf | 23:54.32 |
| just wow... | 23:54.35 |
| at least that pdf serves as a good testcase, because some pages take 40+ seconds to render if I render a few thousand other pages first, but less than a second if I render only those specific pages... | 23:58.55 |