Rank: Member Groups: Member
Joined: 8/8/2015 Posts: 12
Hi All,
I have an application that works with webpages that have large HTML files. I am loading existing HTML with WebView.LoadHtml and retrieving HTML with WebView.GetHtml.
The issue is that these methods become painfully slow, and sometimes break down entirely, as the HTML string size increases.
I am coming from Awesomium, which had many flaws but handled my html loading/retrieval needs with great speed.
Any advice/tips here? I am really hoping that EO is the solution to my needs. So far, I like how the software is architected. However, it is an unfortunate deal breaker for me if it takes 3 minutes to retrieve a large html file.
Thanks!
Rank: Administration Groups: Administration
Joined: 5/27/2007 Posts: 24,229
|
Hi,
If you just want to load the HTML (without getting it back), you can try using a custom resource handler and see if that helps: http://www.essentialobjects.com/doc/webbrowser/advanced/resource_handler.aspx
If that still does not help, you may need to cut your HTML string down into smaller parts.
The reason that EO.WebBrowser is slower is probably the multi-process architecture. First, the browser engine runs outside of your application process. This isolates the browser engine from your application and gives us a number of benefits. For example, if a buffer overrun occurs inside the browser engine, it will not mysteriously corrupt managed code in your own application. Additionally, the Chrome browser engine itself uses multiple child processes for different roles: the GPU runs in a separate process, plug-ins run in a separate process, the renderer runs in a separate process, and so on. Chrome originally supported a single-process mode, but that was removed many years ago, so any reasonably up-to-date Chrome engine runs only in multi-process mode.
This double process isolation works great for most cases, but unfortunately when you have huge HTML strings it results in a lot of memory copying. So it would be best for you to split the HTML into smaller parts.
Thanks!
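Since the suggestion above is to cut the HTML string down into smaller parts, here is a minimal C# sketch of just that splitting step. This is only an illustration of the idea, not EO API code; the class name `HtmlChunker` and the chunk size are hypothetical choices of mine:

```csharp
using System;
using System.Collections.Generic;

static class HtmlChunker
{
    // Split a large string into fixed-size pieces so each piece can be
    // sent across the process boundary separately. chunkSize is arbitrary;
    // tune it for your own machine and page sizes.
    public static List<string> Split(string html, int chunkSize)
    {
        var parts = new List<string>();
        for (int i = 0; i < html.Length; i += chunkSize)
            parts.Add(html.Substring(i, Math.Min(chunkSize, html.Length - i)));
        return parts;
    }
}
```

Note that splitting raw HTML at arbitrary byte offsets can cut through a tag, so in practice you would want to split at element boundaries that make sense for your pages.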
Rank: Member Groups: Member
Joined: 8/8/2015 Posts: 12
|
Thanks for your reply, support.
Can you elaborate a little on your recommendation regarding the Resource Handler? Are you suggesting that I cancel all requests in order to remove the time associated with finding and loading resources? If I'm catching your drift, this would be an issue as I would like to be able to get the HTML back at some point. If I stop the HTML from loading I have no access to it, correct?
I'm certainly willing to break the HTML up into smaller strings, if need be. Can you make any general recommendations about how small of a string would be required in order to achieve 'fast' (<5 seconds) loading times? This is, of course, subjective as it is relative to the speed of the internet connection. However, there is definitely a bottleneck happening within the EO WebBrowser that I'm seeking to mitigate here.
Also, can you provide some comments on how to achieve the loading of small strips of HTML into a single WebBrowser? This seems like it would require many appending operations to HTML that has previously loaded. I don't recall seeing a WebView method that would perform this task.
Thanks much.
Rank: Administration Groups: Administration
Joined: 5/27/2007 Posts: 24,229
|
Hi,
You usually do not have to handle all resource requests with a custom resource handler. When you use a custom resource handler, you override its Match method to decide which requests you want to handle. If you return true there, then you are responsible for feeding the Response object for that Request object. If you return false, the browser engine continues with its default handler. So you are not canceling any requests at all; you are basically just converting a LoadHtml call into a LoadUrl call. For example, you can call LoadUrl("http://myhugehtml") and then check for that Url in your resource handler. You would then feed your huge HTML to the Response object in your custom resource handler.
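To make that flow concrete, here is a rough C# sketch based on my reading of the resource-handler documentation linked earlier. Treat the member names used here (Match, ProcessRequest, Response.ContentType, Response.OutputStream, RegisterResourceHandler) as assumptions to verify against the actual EO.WebBrowser docs, and the placeholder Url is just the example from the post above:

```csharp
using System.Text;
using EO.WebBrowser; // assumed: ResourceHandler, Request, Response types

class HugeHtmlHandler : ResourceHandler
{
    private readonly string _html;

    public HugeHtmlHandler(string html) { _html = html; }

    // Return true only for our placeholder Url; every other request
    // falls through to the browser engine's default handling.
    public override bool Match(Request request)
    {
        return request.Url.StartsWith("http://myhugehtml");
    }

    // Feed the huge HTML into the Response for the matched Request.
    public override void ProcessRequest(Request request, Response response)
    {
        response.ContentType = "text/html";
        byte[] bytes = Encoding.UTF8.GetBytes(_html);
        response.OutputStream.Write(bytes, 0, bytes.Length);
    }
}

// Hypothetical usage, per the LoadUrl conversion described above:
// webView.RegisterResourceHandler(new HugeHtmlHandler(hugeHtml));
// webView.LoadUrl("http://myhugehtml");
```

This is a sketch only; it will not compile without the EO.WebBrowser assembly, and the override signatures should be confirmed against the linked documentation page.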
We do not have a specific number for how small the string needs to be to achieve fast loading times, since that depends on the computer. However, you should keep in mind that a web browser is designed to display "normal"-size pages. For example, a web page that takes less than 10 pages to print out would probably be considered normal, while a page that takes 100 pages when printed would be on the "not normal" side. While displaying such a page is possible, you should be aware that performance optimization for pages of this size will never be a priority for the browser engine team. So from a practical point of view, you definitely do not want to push the boundary too far.
Thanks
Rank: Member Groups: Member
Joined: 8/8/2015 Posts: 12
|
Great - thanks for all of your help, support.