Rank: Member Groups: Member
Joined: 12/1/2014 Posts: 13
Hi,
I have tried the following two approaches to convert multiple HTML pages to PDF:
1. For 100 records, convert each HTML page to PDF and merge them one at a time in memory (deep merge) into a single PdfDocument object; it takes almost 7 minutes to export the final PDF.
2. For 100 records, convert each HTML page to a physical file and then merge the files into a single PDF (incremental merge); it takes almost 3 minutes to export the final PDF.
Can you please tell me why there is such a large performance difference between these two approaches, and what exactly EO.Pdf does in each case?
Also, the memory consumed by approach #1 is much higher than approach #2. Can you please tell me the reason behind that?
Thanks! Rahul Pulekar
Rank: Administration Groups: Administration
Joined: 5/27/2007 Posts: 24,229
Hi,
The incremental merge is much faster than the deep merge because the incremental merge does not modify the "base file". For example, if you incrementally merge B into A, then pretty much all it does is append B to the end of A and then append a newer version of the PDF file's "root" object at the end of the file. The PDF format is designed in such a way that if the same object is encountered more than once, the later one is the effective one. This way the result file will have two "root" objects: the one that A already has, and the new one that got appended to the end of the result file. The second one is the one a PDF viewer will use when it renders the file. The same goes for other resources in the file: for example, if both files use the same font and the font data is embedded, then in an incremental merge the result file will contain two duplicate blocks of font data.
Deep merge, on the other hand, scans the whole document and merges everything as much as possible. In this case there will be a single root object in the result. The result file has a completely regenerated layout, and nothing has to match the original files except that the contents should still be the same. A deep merge usually produces a smaller and "cleaner" result file --- the downside is what you have already noticed: it is more CPU and memory intensive than an incremental merge.
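To make the two code paths concrete, here is a rough sketch in C#. The overloads used below (ConvertHtml into a PdfDocument, a PdfDocument-based Merge for the deep merge, and a file-name-based Merge for the incremental merge) are assumptions based on this discussion --- please double check the exact signatures against the PdfDocument.Merge documentation for your EO.Pdf version. The chunk list and file names are just placeholders for your own data.

using System.Collections.Generic;
using System.IO;
using EO.Pdf;

class MergeSketch
{
    // Approach 1 (deep merge): every chunk becomes a PdfDocument and the
    // documents are merged in memory, so the whole object graph is rebuilt.
    static void DeepMergeInMemory(IList<string> htmlChunks, string outputPath)
    {
        PdfDocument result = null;
        foreach (string html in htmlChunks)
        {
            PdfDocument chunk = new PdfDocument();
            HtmlToPdf.ConvertHtml(html, chunk);
            result = (result == null) ? chunk : PdfDocument.Merge(result, chunk);
        }
        result.Save(outputPath);
    }

    // Approach 2 (incremental merge): every chunk becomes a small physical
    // file; merging then mostly appends bytes plus a new "root" object, which
    // is why it is faster and lighter on memory but produces a larger file.
    static void IncrementalMergeFiles(IList<string> htmlChunks, string workDir, string outputPath)
    {
        List<string> files = new List<string>();
        for (int i = 0; i < htmlChunks.Count; i++)
        {
            string file = Path.Combine(workDir, "chunk_" + i + ".pdf");
            HtmlToPdf.ConvertHtml(htmlChunks[i], file);
            files.Add(file);
        }

        File.Copy(files[0], outputPath, true);
        for (int i = 1; i < files.Count; i++)
        {
            // Assumed file-based Merge overload (incremental merge of the
            // second file into the first); verify against your EO.Pdf version.
            PdfDocument.Merge(outputPath, files[i]);
        }
    }
}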
Thanks!
Rank: Member Groups: Member
Joined: 12/1/2014 Posts: 13
Hi,
Does EO.Pdf have a memory management issue? On my dev environment, when I try to export 100 records (merged PDF size approx. 8 MB) continuously (three or four times), it gives a System.OutOfMemoryException.
We are exporting the 100 records in small chunks (multithreaded) and then merging them into a single file as per your suggestion, but we still get this out of memory error.
After some analysis, we found that the ConvertTohtml function uses a large part of the memory.
My dev machine has 4 GB of RAM.
Is there anything we can do to avoid this error?
Rank: Administration Groups: Administration
Joined: 5/27/2007 Posts: 24,229
Hi,
You can try setting HtmlToPdf.Options.RetrieveNodeText to false, which should reduce memory usage. If that still does not work, you can try to:
1. Reduce the size/complexity of your HTML file;
2. Run your conversion inside a separate AppDomain and then restart that AppDomain whenever you get an out of memory exception (see the sketch below).
The reason for #2 is memory fragmentation. After a while, even if every chunk of memory that is no longer needed is freed, you can still get an out of memory exception if your memory space becomes fragmented and there is no single contiguous block big enough to satisfy the allocation request. In that case you will receive an out of memory exception even though you still have enough available memory --- it's just that it is divided into numerous small pieces. The most effective remedy is to restart the AppDomain the converter runs in.
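Here is a minimal sketch of #2, assuming the conversion is wrapped in a MarshalByRefObject worker. The class names, the URL parameter, and the output path are placeholders for your own code; HtmlToPdf.ConvertUrl and HtmlToPdf.Options.RetrieveNodeText are the calls discussed in this thread.

using System;
using EO.Pdf;

// Worker that runs inside the child AppDomain.
public class ConversionWorker : MarshalByRefObject
{
    public void Convert(string url, string outputPath)
    {
        // Statics are per-AppDomain, so this only affects the child domain.
        HtmlToPdf.Options.RetrieveNodeText = false;
        HtmlToPdf.ConvertUrl(url, outputPath);
    }
}

public static class ConversionHost
{
    private static AppDomain _domain;
    private static ConversionWorker _worker;

    // Not thread-safe; add locking if you call this from multiple threads.
    public static void Convert(string url, string outputPath)
    {
        if (_domain == null)
        {
            _domain = AppDomain.CreateDomain("PdfConversion");
            _worker = (ConversionWorker)_domain.CreateInstanceAndUnwrap(
                typeof(ConversionWorker).Assembly.FullName,
                typeof(ConversionWorker).FullName);
        }
        try
        {
            _worker.Convert(url, outputPath);
        }
        catch (OutOfMemoryException)
        {
            // Unloading the domain discards its (possibly fragmented) heap;
            // the next call creates a fresh domain and worker.
            AppDomain.Unload(_domain);
            _domain = null;
            _worker = null;
            throw; // the caller can retry the failed conversion
        }
    }
}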
Thanks!
Rank: Member Groups: Member
Joined: 12/1/2014 Posts: 13
Hi,
We are performing load testing to verify how much load our server can take while exporting PDFs.
We have used parallel programming to save multiple chunks of the PDF using multiple threads.
We are getting the following exception while performing the load test:
Convertion failed. All workers are busy. Please increase HtmlToPdf.MaxConcurrentTaskCount.
Do we need to increase the MaxConcurrentTaskCount value, or do we just need to decrease the load on the server while exporting the PDF?
Please give me your feedback on this.
Rank: Administration Groups: Administration
Joined: 5/27/2007 Posts: 24,229
Hi,
You can try to increase MaxConcurrentTaskCount and see if you run into other problems. If you increase it and get other problems such as OutOfMemoryException or conversion timeouts, or it simply decreases the overall performance rather than increasing it, then you are overloading your system and you should decrease your parallel level (decrease the number of worker threads, add wait periods between tasks, etc.).
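For reference, here is a rough sketch of keeping the application-side parallelism and MaxConcurrentTaskCount in line with each other. I am assuming file-based ConvertHtml calls and a simple chunk array here; the names, paths, and the parallelism value are placeholders you would tune against your own code and hardware.

using System.IO;
using System.Threading.Tasks;
using EO.Pdf;

class LoadThrottling
{
    static void ConvertChunks(string[] htmlChunks, string workDir)
    {
        // Tune this against your server's RAM/CPU; if raising it brings back
        // OutOfMemoryException or timeouts, lower it again.
        const int parallelism = 4;
        HtmlToPdf.MaxConcurrentTaskCount = parallelism;

        // Cap the number of conversions running at the same time so the
        // application never submits more work than the converter can accept.
        Parallel.For(0, htmlChunks.Length,
            new ParallelOptions { MaxDegreeOfParallelism = parallelism },
            i =>
            {
                string file = Path.Combine(workDir, "chunk_" + i + ".pdf");
                HtmlToPdf.ConvertHtml(htmlChunks[i], file);
            });
    }
}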
Thanks!
Rank: Newbie Groups: Member
Joined: 12/14/2016 Posts: 2
I am having a MEMORY_MANAGEMENT problem on my PC.
Rank: Newbie Groups: Member
Joined: 12/14/2016 Posts: 2
Here’s the mail I got recently about my problem: "The MEMORY_MANAGEMENT error caused the blue screen, and in turn the computer shut itself down to prevent further damage to your computer, which in turn caused this error to happen. As the log says, 'This error could be caused if the system stopped responding, crashed, or lost power unexpectedly,' which proves that the sole reason for this error was the MEMORY_MANAGEMENT error, and it is not the MEMORY_MANAGEMENT error log itself. This error is not as severe as the MEMORY_MANAGEMENT error and you shouldn't need to fix it either." How To Fix MEMORY_MANAGEMENT – 0x0000001A?
Rank: Administration Groups: Administration
Joined: 5/27/2007 Posts: 24,229
john012 wrote: Here’s the mail I got recently for my problem... How To Fix MEMORY_MANAGEMENT – 0x0000001A?
This is usually a hardware issue. Our product does not contain kernel-level code (such as drivers) that can corrupt the OS. If the OS runs low on memory due to your application that uses our product, the OS will be able to warn you promptly, and the application, not the OS, will crash.