From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 2 Aug 2023 16:27:02 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, kvm@vger.kernel.org,
	linux-kselftest@vger.kernel.org, Andrew Morton, Linus Torvalds,
	liubo, Peter Xu, Matthew Wilcox, Hugh Dickins, Jason Gunthorpe,
	John Hubbard, Mel Gorman, Shuah Khan, Paolo Bonzini
Subject: Re: [PATCH v2 3/8] kvm: explicitly set FOLL_HONOR_NUMA_FAULT in hva_to_pfn_slow()
Message-ID: <20230802152702.wamtroy3zm7nbtvs@techsingularity.net>
References: <20230801124844.278698-1-david@redhat.com>
 <20230801124844.278698-4-david@redhat.com>
In-Reply-To: <20230801124844.278698-4-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
On Tue, Aug 01, 2023 at 02:48:39PM +0200, David Hildenbrand wrote:
> KVM is *the* case we know that really wants to honor NUMA hinting faults.
> As we want to stop setting FOLL_HONOR_NUMA_FAULT implicitly, set
> FOLL_HONOR_NUMA_FAULT whenever we might obtain pages on behalf of a VCPU
> to map them into a secondary MMU, and add a comment why.
> 
> Do that unconditionally in hva_to_pfn_slow() when calling
> get_user_pages_unlocked().
> 
> kvmppc_book3s_instantiate_page(), hva_to_pfn_fast() and
> gfn_to_page_many_atomic() are similarly used to map pages into a
> secondary MMU.
> However, FOLL_WRITE and get_user_page_fast_only() always
> implicitly honor NUMA hinting faults -- as documented for
> FOLL_HONOR_NUMA_FAULT -- so we can limit this change to a single
> location for now.
> 
> Don't set it in check_user_page_hwpoison(), where we really only want to
> check if the mapped page is HW-poisoned.
> 
> We won't set it for other KVM users of get_user_pages()/pin_user_pages():
> * arch/powerpc/kvm/book3s_64_mmu_hv.c: not used to map pages into a
>   secondary MMU.
> * arch/powerpc/kvm/e500_mmu.c: only used on shared TLB pages with userspace
> * arch/s390/kvm/*: s390x only supports a single NUMA node either way
> * arch/x86/kvm/svm/sev.c: not used to map pages into a secondary MMU.
> 
> This is a preparation for making FOLL_HONOR_NUMA_FAULT no longer
> implicitly be set by get_user_pages() and friends.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

Seems sane, but I don't know KVM well enough to know whether this is the
only relevant case, so I didn't ack.

-- 
Mel Gorman
SUSE Labs