Date: Tue, 22 Jan 2019 16:22:38 +0800
From: Peter Xu
Subject: Re: [PATCH RFC 03/24] mm: allow VM_FAULT_RETRY for multiple times
Message-ID: <20190122082238.GC14907@xz-x1>
References: <20190121075722.7945-1-peterx@redhat.com> <20190121075722.7945-4-peterx@redhat.com> <20190121155536.GB3711@redhat.com>
In-Reply-To: <20190121155536.GB3711@redhat.com>
To: Jerome Glisse
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Hugh Dickins, Maya Gokhale, Johannes Weiner, Martin Cracauer, Denis Plotnikov, Shaohua Li, Andrea Arcangeli, Mike Kravetz, Marty McFadden, Mike Rapoport, Mel Gorman, "Kirill A. Shutemov", "Dr. David Alan Gilbert"

On Mon, Jan 21, 2019 at 10:55:36AM -0500, Jerome Glisse wrote:
> On Mon, Jan 21, 2019 at 03:57:01PM +0800, Peter Xu wrote:
> > The idea comes from a discussion between Linus and Andrea [1].
> >
> > Before this patch we only allowed a page fault to be retried once.
> > We achieved this by clearing the FAULT_FLAG_ALLOW_RETRY flag when
> > calling handle_mm_fault() the second time. This was mainly used to
> > avoid unexpected starvation of the system by looping forever on the
> > page fault for a single page. However, that should hardly happen:
> > after all, each code path that returns VM_FAULT_RETRY first waits
> > for a condition (during which time we should probably yield the CPU)
> > before VM_FAULT_RETRY is actually returned.
> >
> > This patch removes the restriction by keeping the
> > FAULT_FLAG_ALLOW_RETRY flag when we receive VM_FAULT_RETRY. It means
> > that the page fault handler can now retry the page fault multiple
> > times if necessary, without needing to generate another page fault
> > event. Meanwhile we still keep the FAULT_FLAG_TRIED flag, so the
> > page fault handler can still identify whether a page fault is the
> > first attempt or not.
>
> So there is nothing protecting against starvation after this patch,
> AFAICT. Do we have sufficient proof that we never hit a scenario where
> one process might starve another process's fault?
>
> For instance, some page locking could starve one process.

Hi, Jerome,

Do you mean lock_page()? AFAIU lock_page() will only block the process
itself until the lock is released, so IMHO it's not really starving the
process but natural behavior. After all, the process cannot continue
without handling the page fault correctly.

Or when you say "starvation" do you mean that we might return
VM_FAULT_RETRY from handle_mm_fault() continuously, so that we'll loop
over and over inside the page fault handler?
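
To make that loop concrete, the pattern in the arch fault handlers
looks roughly like below (a simplified sketch loosely following the
x86 fault path; per-arch details vary and most checks are elided):

retry:
	down_read(&mm->mmap_sem);
	vma = find_vma(mm, address);
	/* ... VMA, access and signal checks elided ... */
	fault = handle_mm_fault(vma, address, flags);

	if (unlikely(fault & VM_FAULT_RETRY)) {
		/* handle_mm_fault() has already dropped mmap_sem here */
		if (flags & FAULT_FLAG_ALLOW_RETRY) {
			/*
			 * Clearing ALLOW_RETRY here is what used to make
			 * the second attempt the final one; the patch
			 * removes that clearing, while FAULT_FLAG_TRIED
			 * still tells handle_mm_fault() that this is not
			 * the first attempt.
			 */
			flags &= ~FAULT_FLAG_ALLOW_RETRY;
			flags |= FAULT_FLAG_TRIED;
			goto retry;
		}
	}
	up_read(&mm->mmap_sem);

The point in the commit log is that every path which returns
VM_FAULT_RETRY has already slept on whatever condition it was waiting
for (e.g. a page lock), so iterating here should not be a busy loop.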
Thanks,

> >
> > GUP code is not touched yet and will be covered in a follow-up patch.
> >
> > This will be a nice enhancement to the current code, and at the same
> > time supporting material for the future userfaultfd-writeprotect
> > work, since in that work there will always be an explicit userfault
> > writeprotect retry for protected pages, and if that cannot resolve
> > the page fault (e.g., when userfaultfd-writeprotect is used in
> > conjunction with shared memory) then we'll possibly need a 3rd retry
> > of the page fault. It might also benefit other potential users with
> > similar requirements, like userfault write-protection.
> >
> > Please read the thread below for more information.
> >
> > [1] https://lkml.org/lkml/2017/11/2/833
> >
> > Suggested-by: Linus Torvalds
> > Suggested-by: Andrea Arcangeli
> > Signed-off-by: Peter Xu
> > ---

Regards,

-- 
Peter Xu