Date: Fri, 6 May 2022 16:27:06 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: Peter Xu <peterx@redhat.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Hugh Dickins,
	Andrew Morton, David Hildenbrand, Andrea Arcangeli, Alistair Popple
Subject: Re: [PATCH] mm: Avoid unnecessary page fault retires on shared memory types
In-Reply-To: <20220505211748.41127-1-peterx@redhat.com>

On Thu, May 05, 2022 at 05:17:48PM -0400, Peter Xu wrote:
> I observed that for shared file-backed page faults, we're very likely
> to retry one more time for the first write fault when no page exists
> yet. That's because we need to release the mmap lock for dirty rate
> limiting via balance_dirty_pages_ratelimited() (in
> fault_dirty_shared_page()).
>
> After that throttling we return VM_FAULT_RETRY.
>
> We did that probably because VM_FAULT_RETRY is the only way we can
> return to the fault handler at that point to tell it we've released
> the mmap lock.
>
> However that's not ideal, because the fault very likely does not need
> to be retried at all: the page table entry was already installed
> before the throttling, so the follow-up fault (taking the mmap read
> lock, walking the page table, etc.) is in most cases unnecessary.
>
> This not only slows down shared file-backed page faults, it also adds
> mmap lock contention that is in most cases not needed at all.
>
> To observe this, write to some shmem page and look at the "pgfault"
> value in /proc/vmstat: we currently see 2 counts for each shmem write,
> simply because we retried and the "pgfault" vm event captures both
> faults.
>
> To make this more efficient, add a new VM_FAULT_COMPLETED return code
> that signals we've completed the whole fault and released the lock.
> It's also a hint that we most likely won't need another fault on this
> page immediately, because we've just completed it.
>
> This patch provides a ~12% perf boost on my aarch64 test VM with a
> simple program sequentially dirtying a 400MB mmap()ed shmem file
> (lower is better):
>
>   Before: 650980.20 (+-1.94%)
>   After:  569396.40 (+-1.38%)
>
> I believe it could help more than that.
>
> We need some special care in GUP and the s390 page fault handler (for
> the gmap code before returning from the fault); the remaining changes
> in the page fault handlers should be relatively straightforward.
>
> Another thing to mention is that mm_account_fault() accounts this new
> fault as a generic fault, unlike VM_FAULT_RETRY.
>
> I explicitly didn't touch hmm_vma_fault() and break_ksm() because they
> do not handle VM_FAULT_RETRY even in the existing code, so I'm keeping
> them as-is.
>
> Signed-off-by: Peter Xu

The change makes sense to me, but the unlock/retry signaling is tricky...

> @@ -1227,6 +1247,18 @@ int fixup_user_fault(struct mm_struct *mm,
>  		return -EINTR;
>
>  	ret = handle_mm_fault(vma, address, fault_flags, NULL);
> +
> +	if (ret & VM_FAULT_COMPLETED) {
> +		/*
> +		 * NOTE: it's a pity that we need to retake the lock here
> +		 * to pair with the unlock() in the callers. Ideally we
> +		 * could tell the callers so they do not need to unlock.
> +		 */
> +		mmap_read_lock(mm);
> +		*unlocked = true;
> +		return 0;
> +	}

unlocked can be NULL inside the function, yet you assume it's non-NULL
here. This is okay because COMPLETED can only be returned if RETRY is
set, and when RETRY is set unlocked must be non-NULL. It's correct, but
not very obvious.

It might be cleaner to have separate flags for ALLOW_RETRY and
ALLOW_UNLOCK, with corresponding VM_FAULT_RETRY and VM_FAULT_UNLOCKED -
even if not all combinations are used.

> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2942,7 +2942,7 @@ static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf)
>  	balance_dirty_pages_ratelimited(mapping);
>  	if (fpin) {
>  		fput(fpin);
> -		return VM_FAULT_RETRY;
> +		return VM_FAULT_COMPLETED;

There is one oddity in this now: it completes the fault and no longer
triggers a retry, yet it still goes through maybe_unlock_mmap_for_io()
and is therefore subject to retry limiting. This means that if the
fault has already retried once, this code won't drop the mmap_sem to
call balance_dirty_pages() - even though it safely could, and should,
without risking endless retries.

Here too, IMO, the distinction between ALLOW_RETRY|TRIED and
ALLOW_UNLOCK would make things cleaner and more obvious.
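Something like the sketch below is what I have in mind. To be clear,
FAULT_FLAG_ALLOW_UNLOCK and VM_FAULT_UNLOCKED are made-up names for
illustration, not existing kernel symbols, and I'm assuming the flag
bit is free:

	/* Hypothetical flag split, not a real patch */
	#define FAULT_FLAG_ALLOW_UNLOCK	(1 << 10)  /* assumes bit 10 is free */

	/*
	 * maybe_unlock_mmap_for_io() would then gate on ALLOW_UNLOCK
	 * alone, decoupled from the retry budget, so a fault that
	 * already has FAULT_FLAG_TRIED set can still drop the lock for
	 * balance_dirty_pages_ratelimited(). Callers that can't
	 * tolerate blocking simply wouldn't set ALLOW_UNLOCK, which
	 * also covers today's FAULT_FLAG_RETRY_NOWAIT check.
	 */
	static struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
						     struct file *fpin)
	{
		if (fpin)
			return fpin;
		if (vmf->flags & FAULT_FLAG_ALLOW_UNLOCK) {
			fpin = get_file(vmf->vma->vm_file);
			mmap_read_unlock(vmf->vma->vm_mm);
		}
		return fpin;
	}

	/*
	 * ... and fixup_user_fault() could test the unlock bit
	 * directly, without relying on the implication that RETRY
	 * being allowed means unlocked != NULL:
	 */
	if (ret & VM_FAULT_UNLOCKED) {
		mmap_read_lock(mm);
		*unlocked = true;
		return 0;
	}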
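Unrelated nit: for anyone who wants to reproduce the double-count from
the changelog, here is a quick userspace sketch. Caveats: "pgfault" in
/proc/vmstat is a system-wide counter, so this is only meaningful on an
otherwise idle machine, and I'm assuming MAP_ANONYMOUS|MAP_SHARED
(shmem-backed) takes the same write-fault path as an mmap()ed shmem
file:

	/*
	 * Dirty N fresh shmem-backed pages and watch the system-wide
	 * "pgfault" counter. Expect a delta of ~2 per page without the
	 * patch, ~1 with it.
	 */
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	static long read_pgfault(void)
	{
		FILE *f = fopen("/proc/vmstat", "r");
		char line[128];
		long val = -1;

		if (!f)
			return -1;
		while (fgets(line, sizeof(line), f))
			if (sscanf(line, "pgfault %ld", &val) == 1)
				break;
		fclose(f);
		return val;
	}

	int main(void)
	{
		size_t npages = 100;
		size_t pagesz = (size_t)sysconf(_SC_PAGESIZE);
		long before, after;

		/* MAP_ANONYMOUS|MAP_SHARED memory is shmem-backed */
		char *buf = mmap(NULL, npages * pagesz,
				 PROT_READ | PROT_WRITE,
				 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		before = read_pgfault();
		for (size_t i = 0; i < npages; i++)
			buf[i * pagesz] = 1;	/* first write fault per page */
		after = read_pgfault();

		printf("pgfault delta for %zu first-write faults: %ld\n",
		       npages, after - before);
		return 0;
	}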