From: Jason Wang <jasowang@redhat.com>
To: Jerome Glisse <jglisse@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
peterx@redhat.com, linux-mm@kvack.org, aarcange@redhat.com
Subject: Re: [RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
Date: Fri, 8 Mar 2019 16:58:44 +0800
Message-ID: <43408100-84d9-a359-3e78-dc65fb7b0ad1@redhat.com>
In-Reply-To: <20190307191720.GF3835@redhat.com>
On 2019/3/8 3:17 AM, Jerome Glisse wrote:
> On Thu, Mar 07, 2019 at 12:56:45PM -0500, Michael S. Tsirkin wrote:
>> On Thu, Mar 07, 2019 at 10:47:22AM -0500, Michael S. Tsirkin wrote:
>>> On Wed, Mar 06, 2019 at 02:18:12AM -0500, Jason Wang wrote:
>>>> +static const struct mmu_notifier_ops vhost_mmu_notifier_ops = {
>>>> +	.invalidate_range = vhost_invalidate_range,
>>>> +};
>>>> +
>>>>  void vhost_dev_init(struct vhost_dev *dev,
>>>>  		    struct vhost_virtqueue **vqs, int nvqs, int iov_limit)
>>>>  {
>>> I also wonder here: when a page is write-protected, it does not
>>> look like .invalidate_range is invoked.
>>>
>>> E.g. mm/ksm.c calls
>>>
>>> mmu_notifier_invalidate_range_start and
>>> mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range.
>>>
>>> Similarly, rmap in page_mkclean_one will not call
>>> mmu_notifier_invalidate_range.
>>>
>>> If I'm right, vhost won't get notified when a page is write-protected,
>>> since you didn't install start/end notifiers. Note that the end notifier
>>> can be called with the page locked, so it's not as straightforward as just
>>> adding a call. Writing into a write-protected page isn't a good idea.
>>>
>>> Note that the documentation says:
>>>
>>>   it is fine to delay the mmu_notifier_invalidate_range
>>>   call to mmu_notifier_invalidate_range_end() outside the page table lock
>>>
>>> implying it's called just later.
>> OK, I missed the fact that _end actually calls
>> mmu_notifier_invalidate_range internally. So that part is fine, but the
>> fact that you are trying to take the page lock under the VQ mutex, and take
>> the same mutex within the notifier, probably means it's broken for ksm and
>> rmap at least, since these call invalidate with the lock taken.
>>
>> And generally, Andrea told me offline that one cannot take a mutex under
>> the notifier callback. I CC'd Andrea to explain why.
> Correct, you _can not_ take a mutex or any sleeping lock from within the
> invalidate_range callback, as those callbacks happen under the page table
> spinlock. You can however do so in the invalidate_range_start callback,
> but only if it is a blocking-allowed callback (a flag is passed down with
> the invalidate_range_start callback; if you are not allowed to block,
> return -EBUSY and the invalidation will be aborted).
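Just to check my understanding of that rule, is the sketch below roughly
what you have in mind? It is not taken from the patch, only my reading of
the constraint: vhost_invalidate_vmaps() is a made-up helper, the notifier
is assumed to be embedded in struct vhost_dev, and the blockable check is
spelled with the mmu_notifier_range_blockable() helper that newer kernels
provide (older trees expose the same information as a flag on the range).

static int vhost_invalidate_range_start(struct mmu_notifier *mn,
					const struct mmu_notifier_range *range)
{
	struct vhost_dev *dev = container_of(mn, struct vhost_dev,
					     mmu_notifier);

	/* Sleeping locks are only allowed when the caller can block. */
	if (!mmu_notifier_range_blockable(range))
		return -EBUSY;

	/*
	 * Safe to take the vq mutex here; never do it from
	 * .invalidate_range, which runs under the page table spinlock.
	 */
	vhost_invalidate_vmaps(dev, range->start, range->end);

	return 0;
}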
>
>
>> That's a separate issue from set_page_dirty when memory is file backed.
> If you can access file-backed pages, then I suggest calling set_page_dirty()
> from within a special version of vunmap(), so that when you vunmap you
> set the page dirty without taking the page lock. It is always safe to do
> so from within an mmu notifier callback if you had the page mapped with
> write permission, which means that the page had write permission in the
> userspace pte too, and thus a dirty pte is expected; calling
> set_page_dirty() on the page is then allowed without any lock. Locking
> will happen once the userspace ptes are torn down through the page
> table lock.
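In other words, something like the sketch below? The helper name is made
up, and it assumes vhost keeps hold of the struct page array it got from
get_user_pages() when the mapping was set up.

/* Illustrative only: dirty the pages we mapped writable, then unmap. */
static void vhost_vunmap_dirty(void *addr, struct page **pages, int npages)
{
	int i;

	/*
	 * The pages were mapped with write permission, so the userspace
	 * pte is expected to be dirty as well; per the above this makes
	 * set_page_dirty() safe here without taking the page lock.
	 */
	for (i = 0; i < npages; i++)
		set_page_dirty(pages[i]);

	vunmap(addr);
}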
Can I simply call set_page_dirty() before vunmap() in the mmu notifier
callback, or is there any reason that it must be called within vunmap()?
Thanks
>
>> It's because of all these issues that I preferred just accessing
>> userspace memory and handling faults. Unfortunately there does not
>> appear to be an API that whitelists a specific driver, along the lines
>> of "I checked this code for speculative info leaks, please don't add
>> barriers on the data path".
> Maybe it would be better to explore adding such a helper rather than
> remapping pages into the kernel address space?
>
> Cheers,
> Jérôme
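For reference, the path Michael describes (keep the userspace pointers and
let the fault handler do the work) looks roughly like the sketch below. It
is simplified and the function name is mine, not the actual vhost code; the
point is that every access goes through copy_from_user() and therefore also
through the speculation hardening Michael mentions, which is what a
whitelisting helper would let an audited driver skip.

/* Simplified picture of the existing userspace-pointer path. */
static int fetch_avail_idx(struct vhost_virtqueue *vq, __virtio16 *idx)
{
	/*
	 * copy_from_user() handles faults for us, but each call on the
	 * data path also pays for the speculation barriers discussed
	 * above; a pre-audited driver could skip them with a
	 * hypothetical whitelisting helper.
	 */
	if (copy_from_user(idx, &vq->avail->idx, sizeof(*idx)))
		return -EFAULT;

	return 0;
}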