Date: Wed, 14 May 2025 16:42:23 -0700
Subject: [RFC PATCH v2 44/51] KVM: selftests: Test truncation paths of guest_memfd
From: Ackerley Tng <ackerleytng@google.com>
To: kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    x86@kernel.org, linux-fsdevel@vger.kernel.org
Cc: ackerleytng@google.com, aik@amd.com, ajones@ventanamicro.com,
    akpm@linux-foundation.org, amoorthy@google.com, anthony.yznaga@oracle.com,
    anup@brainfault.org, aou@eecs.berkeley.edu, bfoster@redhat.com,
    binbin.wu@linux.intel.com, brauner@kernel.org, catalin.marinas@arm.com,
    chao.p.peng@intel.com, chenhuacai@kernel.org, dave.hansen@intel.com,
    david@redhat.com, dmatlack@google.com, dwmw@amazon.co.uk,
    erdemaktas@google.com, fan.du@intel.com, fvdl@google.com, graf@amazon.com,
    haibo1.xu@intel.com, hch@infradead.org, hughd@google.com,
    ira.weiny@intel.com, isaku.yamahata@intel.com, jack@suse.cz,
    james.morse@arm.com, jarkko@kernel.org, jgg@ziepe.ca, jgowans@amazon.com,
    jhubbard@nvidia.com, jroedel@suse.de, jthoughton@google.com,
    jun.miao@intel.com, kai.huang@intel.com, keirf@google.com,
    kent.overstreet@linux.dev, kirill.shutemov@intel.com,
    liam.merwick@oracle.com, maciej.wieczor-retman@intel.com,
    mail@maciej.szmigiero.name, maz@kernel.org, mic@digikod.net,
    michael.roth@amd.com, mpe@ellerman.id.au, muchun.song@linux.dev,
    nikunj@amd.com, nsaenz@amazon.es, oliver.upton@linux.dev,
    palmer@dabbelt.com, pankaj.gupta@amd.com, paul.walmsley@sifive.com,
    pbonzini@redhat.com, pdurrant@amazon.co.uk, peterx@redhat.com,
    pgonda@google.com, pvorel@suse.cz, qperret@google.com,
    quic_cvanscha@quicinc.com, quic_eberman@quicinc.com,
    quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com,
    quic_tsoni@quicinc.com, richard.weiyang@gmail.com,
    rick.p.edgecombe@intel.com, rientjes@google.com,
    roypat@amazon.co.uk, rppt@kernel.org, seanjc@google.com, shuah@kernel.org,
    steven.price@arm.com, steven.sistare@oracle.com, suzuki.poulose@arm.com,
    tabba@google.com, thomas.lendacky@amd.com, usama.arif@bytedance.com,
    vannapurve@google.com, vbabka@suse.cz, viro@zeniv.linux.org.uk,
    vkuznets@redhat.com, wei.w.wang@intel.com, will@kernel.org,
    willy@infradead.org, xiaoyao.li@intel.com, yan.y.zhao@intel.com,
    yilun.xu@intel.com, yuzenghui@huawei.com, zhiquan1.li@intel.com
Content-Type: text/plain; charset="UTF-8"

When guest_memfd folios are truncated, any folios that were split have to
be merged. For truncation requested by userspace, the truncation fails
with an error if the folios have unexpected refcounts. For truncation
when the guest_memfd is closed, the kernel handles the merging even if
the folios have unexpected refcounts.

Test both of these scenarios.

Change-Id: I0f0c619763f575605fab8b3c453858960e43ed71
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
 .../kvm/guest_memfd_conversions_test.c        | 95 +++++++++++++++++++
 1 file changed, 95 insertions(+)
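Note for readers less familiar with the truncation path being tested: the
snippet below is a minimal, hedged sketch (not part of this patch) of the
userspace side of the first scenario. It only illustrates the fallocate()
punch-hole call that the new tests issue and how a refusal due to elevated
refcounts would surface. The function name try_punch_hole and the
guest_memfd_fd/offset/len parameters are illustrative; the fd is assumed to
be a guest_memfd file descriptor obtained elsewhere (for example via the
KVM_CREATE_GUEST_MEMFD ioctl).

/*
 * Hedged sketch, not part of the patch: attempt to truncate (punch a hole
 * in) a guest_memfd range and report whether the kernel refused because
 * folios still have unexpected refcounts (e.g. they are pinned).
 */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <linux/falloc.h>
#include <stdio.h>

/* Returns 0 if the range was truncated, -1 otherwise. */
static int try_punch_hole(int guest_memfd_fd, off_t offset, off_t len)
{
	int ret = fallocate(guest_memfd_fd,
			    FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			    offset, len);

	if (ret == -1 && errno == EAGAIN) {
		/*
		 * Mirrors what test_truncate_shared_while_pinned() asserts:
		 * with huge page backing, split folios cannot be merged while
		 * extra references are held, so the truncation is refused
		 * rather than forced.
		 */
		fprintf(stderr, "punch hole refused: folios have elevated refcounts\n");
	} else if (ret == -1) {
		perror("fallocate(FALLOC_FL_PUNCH_HOLE)");
	}

	return ret;
}

Whether EAGAIN is seen for a given backing size follows the test's
expectation: PAGE_SIZE-backed memory needs no merge and truncates even
while pinned, while larger backing sizes fail until the pages are unpinned.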
diff --git a/tools/testing/selftests/kvm/guest_memfd_conversions_test.c b/tools/testing/selftests/kvm/guest_memfd_conversions_test.c
index 22126454fd6b..435f91424d5f 100644
--- a/tools/testing/selftests/kvm/guest_memfd_conversions_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_conversions_test.c
@@ -4,6 +4,7 @@
  *
  * Copyright (c) 2024, Google LLC.
  */
+#include
 #include
 #include
 #include
@@ -580,6 +581,97 @@ static void test_fault_type_independent_of_mem_attributes(size_t test_page_size)
 	cleanup_test(test_page_size, vm, guest_memfd, mem);
 }
 
+static void test_truncate_shared_while_pinned(size_t test_page_size)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	int guest_memfd;
+	char *mem;
+	int ret;
+
+	vm = setup_test(test_page_size, /*init_private=*/false, &vcpu,
+			&guest_memfd, &mem);
+
+	ret = fallocate(guest_memfd, FALLOC_FL_KEEP_SIZE, 0, test_page_size);
+	TEST_ASSERT(!ret, "fallocate should have succeeded");
+
+	pin_pages(mem, test_page_size);
+
+	ret = fallocate(guest_memfd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE,
+			0, test_page_size);
+	if (test_page_size == PAGE_SIZE) {
+		TEST_ASSERT(!ret, "truncate should have succeeded since there is no need to merge");
+	} else {
+		TEST_ASSERT(ret, "truncate should have failed since pages are pinned");
+		TEST_ASSERT_EQ(errno, EAGAIN);
+	}
+
+	unpin_pages();
+
+	ret = fallocate(guest_memfd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE,
+			0, test_page_size);
+	TEST_ASSERT(!ret, "truncate should succeed now that pages are unpinned");
+
+	cleanup_test(test_page_size, vm, guest_memfd, mem);
+}
+
+static void test_truncate_private(size_t test_page_size)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	int guest_memfd;
+	char *mem;
+	int ret;
+
+	vm = setup_test(test_page_size, /*init_private=*/true, &vcpu,
+			&guest_memfd, &mem);
+
+	ret = fallocate(guest_memfd, FALLOC_FL_KEEP_SIZE, 0, test_page_size);
+	TEST_ASSERT(!ret, "fallocate should have succeeded");
+
+	ret = fallocate(guest_memfd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE,
+			0, test_page_size);
+	TEST_ASSERT(!ret, "truncate should have succeeded since there is no need to merge");
+
+	cleanup_test(test_page_size, vm, guest_memfd, mem);
+}
+
+static void __test_close_with_pinning(size_t test_page_size, bool init_private)
+{
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	int guest_memfd;
+	char *mem;
+	int ret;
+
+	vm = setup_test(test_page_size, init_private, &vcpu, &guest_memfd, &mem);
+
+	ret = fallocate(guest_memfd, FALLOC_FL_KEEP_SIZE, 0, test_page_size);
+	TEST_ASSERT(!ret, "fallocate should have succeeded");
+
+	if (!init_private)
+		pin_pages(mem, test_page_size);
+
+	cleanup_test(test_page_size, vm, guest_memfd, mem);
+
+	if (!init_private)
+		unpin_pages();
+
+	/*
+	 * Test this with ./guest_memfd_wrap_test_check_hugetlb_reporting.sh to
+	 * check that the HugeTLB page got merged and returned to HugeTLB.
+	 *
+	 * Sleep here to give kernel worker time to do the merge and return.
+	 */
+	sleep(1);
+}
+
+static void test_close_with_pinning(size_t test_page_size)
+{
+	__test_close_with_pinning(test_page_size, true);
+	__test_close_with_pinning(test_page_size, false);
+}
+
 static void test_with_size(size_t test_page_size)
 {
 	test_sharing(test_page_size);
@@ -590,6 +682,9 @@ static void test_with_size(size_t test_page_size)
 	test_truncate_should_not_change_mappability(test_page_size);
 	test_conversions_should_fail_if_memory_has_elevated_refcount(test_page_size);
 	test_fault_type_independent_of_mem_attributes(test_page_size);
+	test_truncate_shared_while_pinned(test_page_size);
+	test_truncate_private(test_page_size);
+	test_close_with_pinning(test_page_size);
 }
 
 int main(int argc, char *argv[])
-- 
2.49.0.1045.g170613ef41-goog