Rank: Newbie Groups: Member
Joined: 5/29/2020 Posts: 6
Hi, I thought I'd share our experience with deployment on Azure App Services, as we, like others, have had issues with EO.PDF crashing, mostly due to the child process not being ready (killed) (see https://www.essentialobjects.com/doc/common/child_process_error.aspx). I say mostly, because there was not enough consistent information in our logs to provide a precise description. Our solution is an Angular site with a .NET Core backend; the PDF generation is done in a .NET Standard library. The child process issue seemed to persist despite using eowp.exe and enabling it in our StartUp.
We also had (and to some extent still have) warm-up issues, but we have not yet been able to determine conclusively whether they are due to Azure warm-up or EO.PDF warm-up. To alleviate the "first PDF delay", we enabled Application Insights on our Azure App Service and set up a simple ping availability test with a timer set to every 5 minutes. It makes a request to our app service, which calls HtmlToPdf.ConvertHtml("", new PdfDocument()); This had a minor effect. A question regarding the need for warm-up (of EO.PDF): from your own Azure testing, do you have any indication as to how often this might be required?
After taking the latest release (twice, the latest being 20.1.45), making sure to download and install EO as well so that the same version of eowp.exe is deployed, and keeping the "alive" ping test, we seem to have finally reached some stability. As a safety measure, we introduced retries that log a message when activated. It could be an error in the code, but so far all PDF generation requests seem to have completed successfully on the first try. We have not experienced the child process issue since, both during stress tests and during tests "after an idle period" (ranging from 20 minutes to 10 hours).
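For reference, here is a minimal sketch of the "alive" endpoint and the retry wrapper mentioned above. The controller name, route and logger wiring are illustrative rather than our exact production code; only HtmlToPdf.ConvertHtml and PdfDocument are EO.Pdf APIs.

    using System;
    using EO.Pdf;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.Logging;

    [ApiController]
    [Route("api/[controller]")]
    public class HtmlToPdfController : ControllerBase
    {
        private readonly ILogger<HtmlToPdfController> _logger;

        public HtmlToPdfController(ILogger<HtmlToPdfController> logger) => _logger = logger;

        // Target of the Application Insights availability ping (every 5 minutes).
        [HttpGet("alive")]
        public IActionResult Alive()
        {
            // An empty conversion is enough to spin up the conversion engine.
            HtmlToPdf.ConvertHtml("", new PdfDocument());
            return Ok();
        }

        // Retry wrapper used around real conversions; failed attempts are logged
        // so we can see whether the first attempt ever fails.
        private PdfDocument ConvertWithRetry(string html, int maxAttempts = 3)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    var doc = new PdfDocument();
                    HtmlToPdf.ConvertHtml(html, doc);
                    return doc;
                }
                catch (Exception ex) when (attempt < maxAttempts)
                {
                    _logger.LogWarning(ex, "PDF conversion attempt {Attempt} failed, retrying", attempt);
                }
            }
        }
    }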
Rank: Administration Groups: Administration
Joined: 5/27/2007 Posts: 24,217
|
Thank you very much for sharing! This is extremely helpful.
We have been working on the stability issue for a while. Both build .35 and build .45 include fixes/optimizations related to the conversion engine and AppDomain unload, which seems to be the root of these issues. In the current build the idle timeout is 10 minutes. In our next build we will add an HtmlToPdf.EngineIdleTimeout property that would allow you to customize this value. Theoretically, you should be able to set it to TimeSpan.MaxValue to completely disable engine unload so that you do not have to ping your server.
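Once that build is out, the usage would be something like the sketch below. This is illustrative only, since the property is not in the currently released build yet; you would set it once at application startup.

    // Sketch only: HtmlToPdf.EngineIdleTimeout is planned for the next build,
    // so this will not compile against the current release.
    using System;
    using EO.Pdf;

    public static class EoPdfSetup
    {
        public static void Configure()
        {
            // TimeSpan.MaxValue effectively disables engine unload, so the first
            // conversion after an idle period does not pay the engine restart cost.
            HtmlToPdf.EngineIdleTimeout = TimeSpan.MaxValue;
        }
    }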
Glad to hear that it's working OK for you. Please feel free to let us know if you still run into any issue.
Rank: Newbie Groups: Member
Joined: 5/29/2020 Posts: 6
|
Hmm, only partial success, I'm afraid.
Early Sunday afternoon (Europe time) the App Services on the same plan all restarted. Of those, two are identical "PDF services" using EO.PDF (used in conjunction with other services as parts of test environment setups). Both seemingly restarted fine, but only one was reachable; we verified this both by testing through another system and by checking the logs. All subsequent requests on the failing app service end abruptly with a "Child process exited unexpectedly" message and the stack trace below. Could this somehow be because running two app services on the same plan is not possible? Had both of our app service deployments failed, I would never think in this direction, but seeing as one service using EO.PDF IS running as expected, I'm beginning to think this may not be related to EO.Pdf in an app service as such, but rather to the app service plan itself.
I hope you can somehow help point me in the right direction, using the stack trace below, as I'm investigating our Azure logs and metrics.
Kind regards, Kevin
" at EO.Internal.gwuy.mhkk(Exception mcw, Boolean mcx)\r\n at EO.Internal.gwuy.mhkj(gwra mct)\r\n at EO.Internal.gwuy.mhju(Boolean& mbq, gwvb[] mbr, String mbs, String mbt)\r\n at EO.Internal.gwuy.dceg(gwvb[] mbn, String mbo, String mbp)\r\n at EO.Internal.gwuz.dceg(String mgl, String mgm)\r\n at EO.Internal.swda.htoy()\r\n at EO.Internal.swda.gpgu.lwpu()\r\n at EO.Internal.gwqs.jopn(Action klg)\r\n at EO.Internal.swda.laxx(WindowsIdentity fx)\r\n at EO.WebEngine.Engine.Start(WindowsIdentity user)\r\n at EO.WebEngine.Engine.Start()\r\n at EO.Internal.sreh.kfag()\r\n at EO.Internal.srei.kfag(sreh& bsb)\r\n at EO.Internal.srej.kfag(swcv bsf, sreh& bsg)\r\n at EO.Internal.srem.scxk()\r\n at EO.Internal.srem..ctor(swcv bsk, HtmlToPdfOptions bsl)\r\n at EO.Pdf.HtmlToPdfSession.lllt(HtmlToPdfOptions yn)\r\n at EO.Pdf.HtmlToPdfSession..ctor(HtmlToPdfOptions yl, HtmlToPdfSession ym)\r\n at EO.Pdf.HtmlToPdfSession.Create(HtmlToPdfOptions options)\r\n at EO.Pdf.HtmlToPdf.pvwp.xduv()\r\n at EO.Internal.srej.escb[a](gwpu`1 bsj)\r\n at EO.Pdf.HtmlToPdf.ConvertHtml(String html, PdfDocument doc, HtmlToPdfOptions options)\r\n at EO.Pdf.HtmlToPdf.ConvertHtml(String html, PdfDocument doc)\r\n at seges.digital.pdfservice.Logic.Services.PDFGeneratorService.WarmupService() in C:\\B\\work\\75f929cc8f98a5c4\\src\\Logic\\Services\\PDFGeneratorService.cs:line 33\r\n at Web.Controllers.Api.HtmlToPdfController.Alive() in C:\\B\\work\\75f929cc8f98a5c4\\src\\Web\\Controllers\\Api\\HtmlToPdfController.cs:line 48\r\n at lambda_method(Closure , Object )\r\n at Microsoft.Extensions.Internal.ObjectMethodExecutorAwaitable.Awaiter.GetResult()\r\n at Microsoft.AspNetCore.Mvc.Internal.ActionMethodExecutor.AwaitableResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)\r\n at System.Threading.Tasks.ValueTask`1.get_Result()\r\n at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.InvokeActionMethodAsync()\r\n at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.InvokeNextActionFilterAsync()\r\n at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.Rethrow(ActionExecutedContext context)\r\n at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)\r\n at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.InvokeInnerFilterAsync()\r\n at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.InvokeNextResourceFilter()\r\n at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.Rethrow(ResourceExecutedContext context)\r\n at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)\r\n at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.InvokeFilterPipelineAsync()\r\n at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.InvokeAsync()\r\n at Microsoft.AspNetCore.Routing.EndpointMiddleware.Invoke(HttpContext httpContext)\r\n at Microsoft.AspNetCore.Routing.EndpointRoutingMiddleware.Invoke(HttpContext httpContext)\r\n at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)\r\n at Microsoft.AspNetCore.Localization.RequestLocalizationMiddleware.Invoke(HttpContext context)\r\n at Microsoft.AspNetCore.Server.IIS.Core.IISHttpContextOfT`1.ProcessRequestAsync()", "RemoteStackTraceString": "", "RemoteStackIndex": -1, "HResult": -2146233088, "HelpURL": null
Rank: Administration Groups: Administration
Joined: 5/27/2007 Posts: 24,217
|
Thanks for the additional information. We will try to test two app services in the same plan on our end and see if we can find anything.
Rank: Newbie Groups: Member
Joined: 6/4/2020 Posts: 1
|
We are having the exact same issue in our Azure web app. After some time (I guess a day or so) the functionality stops working and we receive the following stack trace:
EO.Internal.gwuy+etpd: Child process not ready.
   at EO.Internal.gwuy.mhkk(Exception mcw, Boolean mcx)
   at EO.Internal.gwuy.mhkj(gwra mct)
   at EO.Internal.gwuy.mhju(Boolean& mbq, gwvb[] mbr, String mbs, String mbt)
   at EO.Internal.gwuy.dceg(gwvb[] mbn, String mbo, String mbp)
   at EO.Internal.gwuz.dceg(String mgl, String mgm)
   at EO.Internal.swda.htoy()
   at EO.Internal.swda.gpgu.lwpu()
   at EO.Internal.gwqs.jopn(Action klg)
   at EO.Internal.swda.laxx(WindowsIdentity fx)
   at EO.WebEngine.Engine.Start(WindowsIdentity user)
   at EO.WebEngine.Engine.Start()
   at EO.Internal.sreh.kfag()
   at EO.Internal.srei.kfag(sreh& bsb)
   at EO.Internal.srej.kfag(swcv bsf, sreh& bsg)
   at EO.Internal.srem.scxk()
   at EO.Internal.srem..ctor(swcv bsk, HtmlToPdfOptions bsl)
   at EO.Pdf.HtmlToPdfSession.lllt(HtmlToPdfOptions yn)
   at EO.Pdf.HtmlToPdfSession..ctor(HtmlToPdfOptions yl, HtmlToPdfSession ym)
   at EO.Pdf.HtmlToPdfSession.Create(HtmlToPdfOptions options)
   at EO.Pdf.HtmlToPdf.pvwp.xduv()
   at EO.Internal.srej.escb[a](gwpu`1 bsj)
   at EO.Pdf.HtmlToPdf.ConvertHtml(String html, PdfDocument doc, HtmlToPdfOptions options)
   at EO.Pdf.HtmlToPdf.ConvertHtml(String html, Stream stream, HtmlToPdfOptions options)
We have multiple web apps running in the same App Service Plan, but only one of the apps does PDF conversion. The EO.Pdf version is the current latest version (20.1.45).
Rank: Newbie Groups: Member
Joined: 5/29/2020 Posts: 6
|
Unfortunately, that doesn't seem to be the only problem. The service we had deployed on a separate app service plan (and, in fact, a separate Azure subscription) started getting "child process not ready" warnings, which eventually result in an HTTP 500 response. We have another service on that same plan, which ran smoothly throughout the period when the PDF service ran into trouble. I had expected to see some restart log entries for both services, but that doesn't seem to be the case. In fact, I'm pretty much without clues as to why the issue suddenly reappeared. I'll let you know if I find the silver bullet.
Rank: Administration Groups: Administration
Joined: 5/27/2007 Posts: 24,217
|
Hi,
This is just to let you know that we are still working on this issue. We will reply here again when we have an update.
Thanks!
Rank: Newbie Groups: Member
Joined: 5/29/2020 Posts: 6
|
I just noticed in the process explorer that multiple eowp.exe processes are active and using quite a bit of RAM. Could this simply be a question of cleaning up after a completed call?
After a while, the service seems to kill the processes. After that, a pure HTML conversion went fine, with no active eowp.exe in the process explorer. Then I tried a ConvertUrl and the service froze again; and again a bunch of eowp.exe processes were active.
In the process viewer I can see two w3wp.exe processes, one of which can be expanded. Beneath it are two eowp.exe processes: one cannot be expanded and has a thread count of 4 and a memory footprint of 1.85 MB / 4.39 MB (working/private respectively). The other eowp.exe can be expanded and has 4 more eowp.exe processes under it, as follows:
Parent proc:  Thread count: 23  Working mem: 15.25 MB  Private mem: 114.71 MB
proc 1:       Thread count: 16  Working mem: 23.01 MB  Private mem: 121.33 MB
proc 2:       Thread count: 11  Working mem: 24.80 MB  Private mem: 118.35 MB
proc 3:       Thread count: 16  Working mem: 54.91 MB  Private mem: 132.33 MB
proc 4:       Thread count: 15  Working mem: 12.36 MB  Private mem: 108.50 MB
Is this to be expected? Do you observe something similar when viewing Process Explorer?
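In case it is useful, this is roughly how the same numbers can be captured from inside the service with plain System.Diagnostics, for example from a diagnostics endpoint. Nothing here is EO-specific, and the class and method names are made up for the sketch.

    using System;
    using System.Diagnostics;

    public static class EowpSnapshot
    {
        public static void Dump()
        {
            // Lists every eowp.exe visible to the sandbox with the same figures
            // shown in the process explorer output above.
            foreach (var p in Process.GetProcessesByName("eowp"))
            {
                Console.WriteLine(
                    $"PID {p.Id}: Threads={p.Threads.Count}, " +
                    $"Working={p.WorkingSet64 / 1024.0 / 1024.0:F2} MB, " +
                    $"Private={p.PrivateMemorySize64 / 1024.0 / 1024.0:F2} MB");
            }
        }
    }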
Rank: Administration Groups: Administration
Joined: 5/27/2007 Posts: 24,217
|
Hi,
This is expected. The freeze appears to be caused by one of our processes not handling new connections correctly after Windows has "deactivated" an idle connection once the whole App Service has been idle for a while (apparently this is only observed on Azure). Our process should discard the deactivated connection and start a new, "clean" connection in this case, but that does not seem to have happened. We are still trying to verify this with logs. Once we have more information we will reply again.
Thanks!
Rank: Newbie Groups: Member
Joined: 11/4/2019 Posts: 3
|
Any updates on this? We are in the process of migrating one of our apps to an Azure AppService and this is the only issue we've identified so far with the migration.
Rank: Administration Groups: Administration
Joined: 5/27/2007 Posts: 24,217
|
Hi,
We already have an internal build that should resolve this issue. Several users who have this problem have been trying the internal build, and so far the feedback has been good. Right now we are in the process of creating an official build with this fix, and it should be out in about a week.
Thanks!
Rank: Administration Groups: Administration
Joined: 5/27/2007 Posts: 24,217
|
Hi,
This is just to let you know that we have posted the official build that should resolve this issue. You can download the new build from our download page. Please take a look and let us know how it goes.
Thanks!
Rank: Newbie Groups: Member
Joined: 11/4/2019 Posts: 3
|
Thanks for the update! We upgraded to this version about a week ago and it seems to be working fine. We haven't had any issues since then.
Rank: Administration Groups: Administration
Joined: 5/27/2007 Posts: 24,217
|
Great. Thanks for confirming!