From: Michael Kelley <mhklinux@outlook.com>
To: Thomas Zimmermann <tzimmermann@suse.de>,
Simona Vetter <simona.vetter@ffwll.ch>
Cc: David Hildenbrand <david@redhat.com>,
"simona@ffwll.ch" <simona@ffwll.ch>,
"deller@gmx.de" <deller@gmx.de>,
"haiyangz@microsoft.com" <haiyangz@microsoft.com>,
"kys@microsoft.com" <kys@microsoft.com>,
"wei.liu@kernel.org" <wei.liu@kernel.org>,
"decui@microsoft.com" <decui@microsoft.com>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"weh@microsoft.com" <weh@microsoft.com>,
"hch@lst.de" <hch@lst.de>,
"dri-devel@lists.freedesktop.org"
<dri-devel@lists.freedesktop.org>,
"linux-fbdev@vger.kernel.org" <linux-fbdev@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>
Subject: RE: [PATCH v3 3/4] fbdev/deferred-io: Support contiguous kernel memory framebuffers
Date: Wed, 11 Jun 2025 23:18:02 +0000 [thread overview]
Message-ID: <SN6PR02MB41579713B557A32674252865D475A@SN6PR02MB4157.namprd02.prod.outlook.com> (raw)
In-Reply-To: <SN6PR02MB4157F630284939E084486AFED46FA@SN6PR02MB4157.namprd02.prod.outlook.com>
From: Michael Kelley Sent: Thursday, June 5, 2025 10:39 AM
>
> From: Thomas Zimmermann <tzimmermann@suse.de> Sent: Thursday, June 5, 2025
> 8:36 AM
> >
> > Hi
> >
> > Am 04.06.25 um 23:43 schrieb Michael Kelley:
> > [...]
> > > Nonetheless, there's an underlying issue. A main cause of the difference
> > > is the number of messages to Hyper-V to update dirty regions. With
> > > hyperv_fb using deferred I/O, the messages are limited to 20/second, so
> > > the total number of messages to Hyper-V is about 480. But hyperv_drm
> > > appears to send 3 messages to Hyper-V for each line of output, or a total of
> > > about 3,000,000 messages (~90K/second). That's a lot of additional load
> > > on the Hyper-V host, and it adds the 10 seconds of additional elapsed
> > > time seen in the guest. There's also this ugly output in dmesg because the
> > > ring buffer for sending messages to the Hyper-V host gets full -- Hyper-V
> > > doesn't always keep up, at least not on my local laptop where I'm
> > > testing:
> > >
> > > [12574.327615] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > > [12574.327684] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > > [12574.327760] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > > [12574.327841] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > > [12597.016128] hyperv_sendpacket: 6211 callbacks suppressed
> > > [12597.016133] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > > [12597.016172] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > > [12597.016220] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > > [12597.016267] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] *ERROR* Unable to send packet via vmbus; error -11
> > >
> > > hyperv_drm could be fixed to not output the ugly messages, but there's
> > > still the underlying issue of overrunning the ring buffer, and excessively
> > > hammering on the host. If we could get hyperv_drm doing deferred I/O, I
> > > would feel much better about going full-on with deprecating hyperv_fb.
> >
> > I try to address the problem with the patches at
> >
> > https://lore.kernel.org/dri-devel/20250605152637.98493-1-tzimmermann@suse.de/
> >
> > Testing and feedback is much appreciated.
> >
>
> Nice!
>
> I ran the same test case with your patches, and everything works well. The
> hyperv_drm numbers are now pretty much the same as the hyperv_fb
> numbers for both elapsed time and system CPU time -- within a few percent.
> For hyperv_drm, there's no longer a gap in the elapsed time and system
> CPU time. No errors due to the guest-to-host ring buffer being full. Total
> messages to Hyper-V for hyperv_drm are now a few hundred instead of 3M.
> The hyperv_drm message count is still a little higher than for hyperv_fb,
> presumably because the simulated vblank rate in hyperv_drm is higher than
> the 20 Hz rate used by hyperv_fb deferred I/O. But the overall numbers are
> small enough that the difference is in the noise. Question: what is the default
> value for the simulated vblank rate? Just curious ...
>
FYI, I'm seeing this message occasionally when running with your simulated
vblank code and hyperv_drm:
[90920.128278] hyperv_drm 5620e0c7-8062-4dce-aeb7-520c7ef76171: [drm] vblank timer overrun
"Occasionally" is about a dozen occurrences over the last day or so. I can't
yet correlate it to any particular activity in the VM. The graphics console
has not been very busy.
Michael
Thread overview: 24+ messages
2025-05-23 16:15 [PATCH v3 0/4] fbdev: Add deferred I/O support for " mhkelley58
2025-05-23 16:15 ` [PATCH v3 1/4] mm: Export vmf_insert_mixed_mkwrite() mhkelley58
2025-05-23 16:15 ` [PATCH v3 2/4] fbdev: Add flag indicating framebuffer is allocated from kernel memory mhkelley58
2025-05-23 16:15 ` [PATCH v3 3/4] fbdev/deferred-io: Support contiguous kernel memory framebuffers mhkelley58
2025-05-24 7:28 ` kernel test robot
2025-05-26 6:54 ` Christoph Hellwig
2025-06-02 9:47 ` David Hildenbrand
2025-06-03 1:49 ` Michael Kelley
2025-06-03 6:25 ` Thomas Zimmermann
2025-06-03 17:50 ` Michael Kelley
2025-06-04 8:12 ` Thomas Zimmermann
2025-06-04 14:45 ` Simona Vetter
2025-06-04 21:43 ` Michael Kelley
2025-06-05 7:55 ` Thomas Zimmermann
2025-06-05 15:35 ` Thomas Zimmermann
2025-06-05 17:38 ` Michael Kelley
2025-06-06 7:05 ` Thomas Zimmermann
2025-06-11 23:18 ` Michael Kelley [this message]
2025-06-12 7:25 ` Thomas Zimmermann
2025-06-03 7:55 ` David Hildenbrand
2025-06-03 17:24 ` Michael Kelley
2025-06-04 21:58 ` Michael Kelley
2025-06-05 8:10 ` David Hildenbrand
2025-05-23 16:15 ` [PATCH v3 4/4] fbdev: hyperv_fb: Fix mmap of framebuffers allocated using alloc_pages() mhkelley58