Rank: Advanced Member Groups: Member
Joined: 6/26/2015 Posts: 98
Hello, we are trying to use a WebView in a ThreadRunner to render a single image of long websites instead of making several individual captures, but we need to preserve the content that a user has expanded dynamically in the WebView used in the UI. To do this, we take the HTML and URL from the UI WebView, pass them to a WebView running in the background, resize that WebView to the page size, and capture. This works for the most part, but we are having issues on sites like Facebook. If you try this on Bill Gates' Facebook page (https://www.facebook.com/BillGates), you'll notice that the left content section and the timeline on the right do not render at all in the background WebView. Below is a code snippet of what we are doing.
Code: C#
tr = new ThreadRunner("background");
var wv = tr.CreateWebView();
tr.Send(() =>
{
    wv.Resize(1900, 1000);
    wv.LoadCompleted += Wv_LoadCompleted;
    // Reuse the HTML and URL from the WebView shown in the UI so that
    // content the user expanded on the page is preserved.
    wv.LoadHtml(ActiveBrowser.WebView.GetHtml(), ActiveBrowser.WebView.Url);
}, 10000);

private void Wv_LoadCompleted(object sender, LoadCompletedEventArgs e)
{
    var wv = (WebView)sender;

    // Resize the WebView to the full page size (plus a margin for the
    // scrollbar) so a single capture covers the whole page.
    var size = wv.GetPageSize();
    size.Width += 20;
    wv.Resize(size);

    // Pump the message loop until any loading triggered by the resize
    // has finished.
    do
    {
        WebView.DoEvents(10000);
    } while (wv.IsLoading);

    var rect = new Rectangle(0, 0, size.Width, size.Height);
    var image = wv.Capture(rect);
    if (image == null)
    {
        MessageBox.Show("Capture failed");
    }
    else
    {
        image.Save(@"\\files\temp\snapshot.png");
        MessageBox.Show("Capture complete");
    }
    wv.Dispose();
}
Rank: Administration Groups: Administration
Joined: 5/27/2007 Posts: 24,229
|
Hi,
You won't be able to reliably capture a screenshot this way. The reason is that screenshot capturing works by reading the contents directly from the GPU frame buffer, and the GPU frame buffer is designed to prepare contents for display on screen, so it has internal size limitations. Capturing multiple smaller images will therefore always have a better chance of producing an accurate result.
We are working on a capturing feature that works through the printing feature. That can give you multiple continuous "page images", and theoretically you can use a large page size to produce a single image. However, the resulting image files will be huge unless you apply some kind of zoom level, so even with that you can still run into issues. We therefore think a more reasonable approach would be to reconsider your requirement that you must capture the whole long page as a single image. Since most people use the browser engine to produce a "windowed" or "paged" rendering of the page, rendering the full page in a single image will never be a priority for the browser engine, so it will always have a higher chance of hitting some kind of internal limit. That makes it both a challenge for the developer to implement and an unreliable feature for the end user.
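As an illustration of the "windowed" idea, a minimal sketch of the slice math: compute viewport-height rectangles that tile the page, pass each one to the `Capture(Rectangle)` call shown in the snippet above, and stitch the results with `Graphics.DrawImage`. `SliceCapture` and `ComputeSlices` are hypothetical names introduced here; only the rectangle arithmetic below is independent of the browser API, and the admin's warning about internal size limits still applies to the stitched bitmap.

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;

static class SliceCapture
{
    // Compute viewport-height capture rectangles that cover a page of the
    // given total height. The last slice is shortened so the slices never
    // overlap or run past the bottom of the page.
    public static List<Rectangle> ComputeSlices(int pageWidth, int pageHeight, int viewportHeight)
    {
        var slices = new List<Rectangle>();
        for (int y = 0; y < pageHeight; y += viewportHeight)
        {
            int h = Math.Min(viewportHeight, pageHeight - y);
            slices.Add(new Rectangle(0, y, pageWidth, h));
        }
        return slices;
    }
}
```

Each rectangle can then be captured individually and drawn at its own `Y` offset into one tall `Bitmap`, keeping every single capture within the frame-buffer limits.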
Thanks
Rank: Advanced Member Groups: Member
Joined: 6/26/2015 Posts: 98
|
Hello, we would be fine with a "windowed" approach, since we know there would be a limitation; we would just like to create captures that are longer than the typical screen height. The problem we are seeing, though, is that even a small screen capture is missing HTML elements. I have uploaded an image of a Facebook page that shows this: the capture itself is okay, but the HTML elements on the left and right sides are completely missing from the render.
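One thing worth trying for the missing side columns (a hedged sketch, not a confirmed fix): Facebook lazy-loads parts of the page as they scroll into view, and a background WebView never scrolls, so that content may simply never be requested. Scrolling the hidden WebView through the page via script before capturing may give it a chance to render. `ForceLazyContent` is a hypothetical helper name; it assumes an `EvalScript` call for running JavaScript and reuses the static `WebView.DoEvents` from the snippet above.

```csharp
// Hypothetical helper: scroll through the page to trigger lazy-loaded
// content, pumping the WebView's message loop after each step so the
// page's scripts have time to fetch and render.
static void ForceLazyContent(WebView wv, int pageHeight, int stepHeight)
{
    for (int y = 0; y < pageHeight; y += stepHeight)
    {
        wv.EvalScript(string.Format("window.scrollTo(0, {0});", y));
        WebView.DoEvents(500);
    }
    // Return to the top before capturing.
    wv.EvalScript("window.scrollTo(0, 0);");
    WebView.DoEvents(500);
}
```

Calling something like this between `GetPageSize()` and `Capture()` in the `LoadCompleted` handler is where it would fit, if the lazy-loading theory is right.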