From: Ackerley Tng <ackerleytng@google.com>
Date: Wed, 14 May 2025 16:42:25 -0700
Subject: [RFC PATCH v2 46/51] KVM: selftests: Test that guest_memfd usage is reported via hugetlb
Message-ID: <56149cfab1ab08d73618fd3914addd51dd42193a.1747264138.git.ackerleytng@google.com>
To: kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    x86@kernel.org, linux-fsdevel@vger.kernel.org
Cc: ackerleytng@google.com, aik@amd.com, ajones@ventanamicro.com,
    akpm@linux-foundation.org, amoorthy@google.com, anthony.yznaga@oracle.com,
    anup@brainfault.org, aou@eecs.berkeley.edu, bfoster@redhat.com,
    binbin.wu@linux.intel.com, brauner@kernel.org, catalin.marinas@arm.com,
    chao.p.peng@intel.com, chenhuacai@kernel.org, dave.hansen@intel.com,
    david@redhat.com, dmatlack@google.com, dwmw@amazon.co.uk,
    erdemaktas@google.com, fan.du@intel.com, fvdl@google.com, graf@amazon.com,
    haibo1.xu@intel.com, hch@infradead.org, hughd@google.com,
    ira.weiny@intel.com, isaku.yamahata@intel.com, jack@suse.cz,
    james.morse@arm.com, jarkko@kernel.org, jgg@ziepe.ca, jgowans@amazon.com,
    jhubbard@nvidia.com, jroedel@suse.de, jthoughton@google.com,
    jun.miao@intel.com, kai.huang@intel.com, keirf@google.com,
    kent.overstreet@linux.dev, kirill.shutemov@intel.com,
    liam.merwick@oracle.com, maciej.wieczor-retman@intel.com,
    mail@maciej.szmigiero.name, maz@kernel.org, mic@digikod.net,
    michael.roth@amd.com, mpe@ellerman.id.au, muchun.song@linux.dev,
    nikunj@amd.com, nsaenz@amazon.es, oliver.upton@linux.dev,
    palmer@dabbelt.com, pankaj.gupta@amd.com, paul.walmsley@sifive.com,
    pbonzini@redhat.com, pdurrant@amazon.co.uk, peterx@redhat.com,
    pgonda@google.com, pvorel@suse.cz, qperret@google.com,
    quic_cvanscha@quicinc.com, quic_eberman@quicinc.com,
    quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com,
    quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com,
    quic_tsoni@quicinc.com, richard.weiyang@gmail.com,
    rick.p.edgecombe@intel.com, rientjes@google.com, roypat@amazon.co.uk,
    rppt@kernel.org, seanjc@google.com, shuah@kernel.org, steven.price@arm.com,
    steven.sistare@oracle.com, suzuki.poulose@arm.com, tabba@google.com,
    thomas.lendacky@amd.com, usama.arif@bytedance.com, vannapurve@google.com,
    vbabka@suse.cz, viro@zeniv.linux.org.uk, vkuznets@redhat.com,
    wei.w.wang@intel.com, will@kernel.org, willy@infradead.org,
    xiaoyao.li@intel.com, yan.y.zhao@intel.com, yilun.xu@intel.com,
    yuzenghui@huawei.com, zhiquan1.li@intel.com
Content-Type: text/plain; charset="UTF-8"

Using HugeTLB as the huge page allocator for guest_memfd allows reuse of
HugeTLB's reporting mechanisms, so HugeTLB statistics must be kept up to date
as guest_memfd allocates, faults and frees huge pages.

Add a selftest that checks HugeTLB statistics (and, when a hugetlb cgroup
mount is provided via HUGETLB_CGROUP_PATH, hugetlb cgroup statistics) at
various points in the lifecycle of a guest_memfd backed by 2M and 1G HugeTLB
pages, using plain HugeTLB memfd behavior as the baseline for comparison.
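
For reference, the per-size sysfs counters that the test compares against its
recorded baseline can also be inspected by hand. A minimal sketch (not part of
this patch), assuming 1G HugeTLB pages are configured on the system:

    # Illustrative only: dump the per-size HugeTLB counters the test reads.
    for f in free_hugepages nr_hugepages nr_overcommit_hugepages \
             resv_hugepages surplus_hugepages; do
            echo "$f: $(cat /sys/kernel/mm/hugepages/hugepages-1048576kB/$f)"
    done
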
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Change-Id: Ida3319b1d40c593d8167a03506c7030e67fc746b
---
 tools/testing/selftests/kvm/Makefile.kvm      |   1 +
 .../kvm/guest_memfd_hugetlb_reporting_test.c  | 384 ++++++++++++++++++
 ...uest_memfd_provide_hugetlb_cgroup_mount.sh |  36 ++
 3 files changed, 421 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/guest_memfd_hugetlb_reporting_test.c
 create mode 100755 tools/testing/selftests/kvm/guest_memfd_provide_hugetlb_cgroup_mount.sh

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index bc22a5a23c4c..2ffe6bc95a68 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -132,6 +132,7 @@ TEST_GEN_PROGS_x86 += coalesced_io_test
 TEST_GEN_PROGS_x86 += dirty_log_perf_test
 TEST_GEN_PROGS_x86 += guest_memfd_test
 TEST_GEN_PROGS_x86 += guest_memfd_conversions_test
+TEST_GEN_PROGS_x86 += guest_memfd_hugetlb_reporting_test
 TEST_GEN_PROGS_x86 += hardware_disable_test
 TEST_GEN_PROGS_x86 += memslot_modification_stress_test
 TEST_GEN_PROGS_x86 += memslot_perf_test
diff --git a/tools/testing/selftests/kvm/guest_memfd_hugetlb_reporting_test.c b/tools/testing/selftests/kvm/guest_memfd_hugetlb_reporting_test.c
new file mode 100644
index 000000000000..8ff1dda3e02f
--- /dev/null
+++ b/tools/testing/selftests/kvm/guest_memfd_hugetlb_reporting_test.c
@@ -0,0 +1,384 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Tests that HugeTLB statistics are correct at various points of the lifecycle
+ * of guest_memfd with 1G page support.
+ *
+ * Providing a HUGETLB_CGROUP_PATH will allow cgroup reservations to be
+ * tested.
+ *
+ * Either use
+ *
+ *   ./guest_memfd_provide_hugetlb_cgroup_mount.sh ./guest_memfd_hugetlb_reporting_test
+ *
+ * or provide the mount with
+ *
+ *   export HUGETLB_CGROUP_PATH=/tmp/hugetlb-cgroup
+ *   mount -t cgroup -o hugetlb none $HUGETLB_CGROUP_PATH
+ *   ./guest_memfd_hugetlb_reporting_test
+ *
+ *
+ * Copyright (C) 2025 Google LLC
+ *
+ * Authors:
+ *   Ackerley Tng <ackerleytng@google.com>
+ */
+
+#include <fcntl.h>
+#include <linux/falloc.h>
+#include <linux/limits.h>
+#include <linux/memfd.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <unistd.h>
+
+#include "kvm_util.h"
+#include "test_util.h"
+#include "processor.h"
+
+static unsigned long read_value(const char *file_name)
+{
+        FILE *fp;
+        unsigned long num;
+
+        fp = fopen(file_name, "r");
+        TEST_ASSERT(fp != NULL, "Error opening file %s!\n", file_name);
+
+        TEST_ASSERT_EQ(fscanf(fp, "%lu", &num), 1);
+
+        fclose(fp);
+
+        return num;
+}
+
+enum hugetlb_statistic {
+        FREE_HUGEPAGES,
+        NR_HUGEPAGES,
+        NR_OVERCOMMIT_HUGEPAGES,
+        RESV_HUGEPAGES,
+        SURPLUS_HUGEPAGES,
+        NR_TESTED_HUGETLB_STATISTICS,
+};
+
+enum hugetlb_cgroup_statistic {
+        LIMIT_IN_BYTES,
+        MAX_USAGE_IN_BYTES,
+        USAGE_IN_BYTES,
+        NR_TESTED_HUGETLB_CGROUP_STATISTICS,
+};
+
+enum hugetlb_cgroup_statistic_category {
+        USAGE = 0,
+        RESERVATION,
+        NR_HUGETLB_CGROUP_STATISTIC_CATEGORIES,
+};
+
+static const char *hugetlb_statistics[NR_TESTED_HUGETLB_STATISTICS] = {
+        [FREE_HUGEPAGES] = "free_hugepages",
+        [NR_HUGEPAGES] = "nr_hugepages",
+        [NR_OVERCOMMIT_HUGEPAGES] = "nr_overcommit_hugepages",
+        [RESV_HUGEPAGES] = "resv_hugepages",
+        [SURPLUS_HUGEPAGES] = "surplus_hugepages",
+};
+
+static const char *hugetlb_cgroup_statistics[NR_TESTED_HUGETLB_CGROUP_STATISTICS] = {
+        [LIMIT_IN_BYTES] = "limit_in_bytes",
+        [MAX_USAGE_IN_BYTES] = "max_usage_in_bytes",
+        [USAGE_IN_BYTES] = "usage_in_bytes",
+};
+
+enum test_page_size {
+        TEST_SZ_2M,
+        TEST_SZ_1G,
+        NR_TEST_SIZES,
+};
+
+struct test_param {
+        size_t page_size;
+        int memfd_create_flags;
+        uint64_t guest_memfd_flags;
+        char *hugetlb_size_string;
+        char *hugetlb_cgroup_size_string;
+};
+
+const struct test_param *test_params(enum test_page_size size)
+{
+        static const struct test_param params[] = {
+                [TEST_SZ_2M] = {
+                        .page_size = PG_SIZE_2M,
+                        .memfd_create_flags = MFD_HUGETLB | MFD_HUGE_2MB,
+                        .guest_memfd_flags = GUEST_MEMFD_FLAG_HUGETLB | GUESTMEM_HUGETLB_FLAG_2MB,
+                        .hugetlb_size_string = "2048kB",
+                        .hugetlb_cgroup_size_string = "2MB",
+                },
+                [TEST_SZ_1G] = {
+                        .page_size = PG_SIZE_1G,
+                        .memfd_create_flags = MFD_HUGETLB | MFD_HUGE_1GB,
+                        .guest_memfd_flags = GUEST_MEMFD_FLAG_HUGETLB | GUESTMEM_HUGETLB_FLAG_1GB,
+                        .hugetlb_size_string = "1048576kB",
+                        .hugetlb_cgroup_size_string = "1GB",
+                },
+        };
+
+        return &params[size];
+}
+
+static unsigned long read_hugetlb_statistic(enum test_page_size size,
+                                            enum hugetlb_statistic statistic)
+{
+        char path[PATH_MAX] = "/sys/kernel/mm/hugepages/hugepages-";
+
+        strcat(path, test_params(size)->hugetlb_size_string);
+        strcat(path, "/");
+        strcat(path, hugetlb_statistics[statistic]);
+
+        return read_value(path);
+}
+
+static unsigned long read_hugetlb_cgroup_statistic(const char *hugetlb_cgroup_path,
+                                                   enum test_page_size size,
+                                                   enum hugetlb_cgroup_statistic statistic,
+                                                   bool reservations)
+{
+        char path[PATH_MAX] = "";
+
+        strcat(path, hugetlb_cgroup_path);
+
+        if (hugetlb_cgroup_path[strlen(hugetlb_cgroup_path) - 1] != '/')
+                strcat(path, "/");
+
+        strcat(path, "hugetlb.");
+        strcat(path, test_params(size)->hugetlb_cgroup_size_string);
+        if (reservations)
+                strcat(path, ".rsvd");
+        strcat(path, ".");
+        strcat(path, hugetlb_cgroup_statistics[statistic]);
+
+        return read_value(path);
+}
+
+static unsigned long hugetlb_baseline[NR_TEST_SIZES]
+                                     [NR_TESTED_HUGETLB_STATISTICS];
+
+static unsigned long
+        hugetlb_cgroup_baseline[NR_TEST_SIZES]
+                               [NR_TESTED_HUGETLB_CGROUP_STATISTICS]
+                               [NR_HUGETLB_CGROUP_STATISTIC_CATEGORIES];
+
+
+static void establish_baseline(const char *hugetlb_cgroup_path)
+{
+        const char *p = hugetlb_cgroup_path;
+        int i, j;
+
+        for (i = 0; i < NR_TEST_SIZES; ++i) {
+                for (j = 0; j < NR_TESTED_HUGETLB_STATISTICS; ++j)
+                        hugetlb_baseline[i][j] = read_hugetlb_statistic(i, j);
+
+                if (!hugetlb_cgroup_path)
+                        continue;
+
+                for (j = 0; j < NR_TESTED_HUGETLB_CGROUP_STATISTICS; ++j) {
+                        hugetlb_cgroup_baseline[i][j][USAGE] =
+                                read_hugetlb_cgroup_statistic(p, i, j, USAGE);
+                        hugetlb_cgroup_baseline[i][j][RESERVATION] =
+                                read_hugetlb_cgroup_statistic(p, i, j, RESERVATION);
+                }
+        }
+}
+
+static void assert_stats_at_baseline(const char *hugetlb_cgroup_path)
+{
+        const char *p = hugetlb_cgroup_path;
+
+        /* Enumerate these for easy assertion reading. */
+        TEST_ASSERT_EQ(read_hugetlb_statistic(TEST_SZ_2M, FREE_HUGEPAGES),
+                       hugetlb_baseline[TEST_SZ_2M][FREE_HUGEPAGES]);
+        TEST_ASSERT_EQ(read_hugetlb_statistic(TEST_SZ_2M, NR_HUGEPAGES),
+                       hugetlb_baseline[TEST_SZ_2M][NR_HUGEPAGES]);
+        TEST_ASSERT_EQ(read_hugetlb_statistic(TEST_SZ_2M, NR_OVERCOMMIT_HUGEPAGES),
+                       hugetlb_baseline[TEST_SZ_2M][NR_OVERCOMMIT_HUGEPAGES]);
+        TEST_ASSERT_EQ(read_hugetlb_statistic(TEST_SZ_2M, RESV_HUGEPAGES),
+                       hugetlb_baseline[TEST_SZ_2M][RESV_HUGEPAGES]);
+        TEST_ASSERT_EQ(read_hugetlb_statistic(TEST_SZ_2M, SURPLUS_HUGEPAGES),
+                       hugetlb_baseline[TEST_SZ_2M][SURPLUS_HUGEPAGES]);
+
+        TEST_ASSERT_EQ(read_hugetlb_statistic(TEST_SZ_1G, FREE_HUGEPAGES),
+                       hugetlb_baseline[TEST_SZ_1G][FREE_HUGEPAGES]);
+        TEST_ASSERT_EQ(read_hugetlb_statistic(TEST_SZ_1G, NR_HUGEPAGES),
+                       hugetlb_baseline[TEST_SZ_1G][NR_HUGEPAGES]);
+        TEST_ASSERT_EQ(read_hugetlb_statistic(TEST_SZ_1G, NR_OVERCOMMIT_HUGEPAGES),
+                       hugetlb_baseline[TEST_SZ_1G][NR_OVERCOMMIT_HUGEPAGES]);
+        TEST_ASSERT_EQ(read_hugetlb_statistic(TEST_SZ_1G, RESV_HUGEPAGES),
+                       hugetlb_baseline[TEST_SZ_1G][RESV_HUGEPAGES]);
+        TEST_ASSERT_EQ(read_hugetlb_statistic(TEST_SZ_1G, SURPLUS_HUGEPAGES),
+                       hugetlb_baseline[TEST_SZ_1G][SURPLUS_HUGEPAGES]);
+
+        if (!hugetlb_cgroup_path)
+                return;
+
+        TEST_ASSERT_EQ(
+                read_hugetlb_cgroup_statistic(p, TEST_SZ_2M, LIMIT_IN_BYTES, USAGE),
+                hugetlb_cgroup_baseline[TEST_SZ_2M][LIMIT_IN_BYTES][USAGE]);
+        TEST_ASSERT_EQ(
+                read_hugetlb_cgroup_statistic(p, TEST_SZ_2M, MAX_USAGE_IN_BYTES, USAGE),
+                hugetlb_cgroup_baseline[TEST_SZ_2M][MAX_USAGE_IN_BYTES][USAGE]);
+        TEST_ASSERT_EQ(
+                read_hugetlb_cgroup_statistic(p, TEST_SZ_2M, USAGE_IN_BYTES, USAGE),
+                hugetlb_cgroup_baseline[TEST_SZ_2M][USAGE_IN_BYTES][USAGE]);
+
+        TEST_ASSERT_EQ(
+                read_hugetlb_cgroup_statistic(p, TEST_SZ_1G, LIMIT_IN_BYTES, RESERVATION),
+                hugetlb_cgroup_baseline[TEST_SZ_1G][LIMIT_IN_BYTES][RESERVATION]);
+        TEST_ASSERT_EQ(
+                read_hugetlb_cgroup_statistic(p, TEST_SZ_1G, MAX_USAGE_IN_BYTES, RESERVATION),
+                hugetlb_cgroup_baseline[TEST_SZ_1G][MAX_USAGE_IN_BYTES][RESERVATION]);
+        TEST_ASSERT_EQ(
+                read_hugetlb_cgroup_statistic(p, TEST_SZ_1G, USAGE_IN_BYTES, RESERVATION),
+                hugetlb_cgroup_baseline[TEST_SZ_1G][USAGE_IN_BYTES][RESERVATION]);
+}
+
+static void assert_stats(const char *hugetlb_cgroup_path,
+                         enum test_page_size size, unsigned long num_reserved,
+                         unsigned long num_faulted)
+{
+        size_t pgsz = test_params(size)->page_size;
+        const char *p = hugetlb_cgroup_path;
+
+        TEST_ASSERT_EQ(read_hugetlb_statistic(size, FREE_HUGEPAGES),
+                       hugetlb_baseline[size][FREE_HUGEPAGES] - num_faulted);
+        TEST_ASSERT_EQ(read_hugetlb_statistic(size, NR_HUGEPAGES),
+                       hugetlb_baseline[size][NR_HUGEPAGES]);
+        TEST_ASSERT_EQ(read_hugetlb_statistic(size, NR_OVERCOMMIT_HUGEPAGES),
+                       hugetlb_baseline[size][NR_OVERCOMMIT_HUGEPAGES]);
+        TEST_ASSERT_EQ(read_hugetlb_statistic(size, RESV_HUGEPAGES),
+                       hugetlb_baseline[size][RESV_HUGEPAGES] + num_reserved - num_faulted);
+        TEST_ASSERT_EQ(read_hugetlb_statistic(size, SURPLUS_HUGEPAGES),
+                       hugetlb_baseline[size][SURPLUS_HUGEPAGES]);
+
+        if (!hugetlb_cgroup_path)
+                return;
+
+        TEST_ASSERT_EQ(
+                read_hugetlb_cgroup_statistic(p, size, LIMIT_IN_BYTES, USAGE),
+                hugetlb_cgroup_baseline[size][LIMIT_IN_BYTES][USAGE]);
+        TEST_ASSERT_EQ(
+                read_hugetlb_cgroup_statistic(p, size, MAX_USAGE_IN_BYTES, USAGE),
+                hugetlb_cgroup_baseline[size][MAX_USAGE_IN_BYTES][USAGE]);
+        TEST_ASSERT_EQ(
+                read_hugetlb_cgroup_statistic(p, size, USAGE_IN_BYTES, USAGE),
+                hugetlb_cgroup_baseline[size][USAGE_IN_BYTES][USAGE] + num_faulted * pgsz);
+
+        TEST_ASSERT_EQ(
+                read_hugetlb_cgroup_statistic(p, size, LIMIT_IN_BYTES, RESERVATION),
+                hugetlb_cgroup_baseline[size][LIMIT_IN_BYTES][RESERVATION]);
+        TEST_ASSERT_EQ(
+                read_hugetlb_cgroup_statistic(p, size, MAX_USAGE_IN_BYTES, RESERVATION),
+                hugetlb_cgroup_baseline[size][MAX_USAGE_IN_BYTES][RESERVATION]);
+        TEST_ASSERT_EQ(
+                read_hugetlb_cgroup_statistic(p, size, USAGE_IN_BYTES, RESERVATION),
+                hugetlb_cgroup_baseline[size][USAGE_IN_BYTES][RESERVATION] + num_reserved * pgsz);
+}
+
+/* Use hugetlb behavior as a baseline. guest_memfd should have comparable behavior. */
+static void test_hugetlb_behavior(const char *hugetlb_cgroup_path, enum test_page_size test_size)
+{
+        const struct test_param *param;
+        char *mem;
+        int memfd;
+
+        param = test_params(test_size);
+
+        assert_stats_at_baseline(hugetlb_cgroup_path);
+
+        memfd = memfd_create("guest_memfd_hugetlb_reporting_test",
+                             param->memfd_create_flags);
+
+        assert_stats(hugetlb_cgroup_path, test_size, 0, 0);
+
+        mem = mmap(NULL, param->page_size, PROT_READ | PROT_WRITE,
+                   MAP_SHARED | MAP_HUGETLB, memfd, 0);
+        TEST_ASSERT(mem != MAP_FAILED, "Couldn't mmap()");
+
+        assert_stats(hugetlb_cgroup_path, test_size, 1, 0);
+
+        *mem = 'A';
+
+        assert_stats(hugetlb_cgroup_path, test_size, 1, 1);
+
+        munmap(mem, param->page_size);
+
+        assert_stats(hugetlb_cgroup_path, test_size, 1, 1);
+
+        madvise(mem, param->page_size, MADV_DONTNEED);
+
+        assert_stats(hugetlb_cgroup_path, test_size, 1, 1);
+
+        madvise(mem, param->page_size, MADV_REMOVE);
+
+        assert_stats(hugetlb_cgroup_path, test_size, 1, 1);
+
+        close(memfd);
+
+        assert_stats_at_baseline(hugetlb_cgroup_path);
+}
+
+static void test_guest_memfd_behavior(const char *hugetlb_cgroup_path,
+                                      enum test_page_size test_size)
+{
+        const struct test_param *param;
+        struct kvm_vm *vm;
+        int guest_memfd;
+
+        param = test_params(test_size);
+
+        assert_stats_at_baseline(hugetlb_cgroup_path);
+
+        vm = vm_create_barebones_type(KVM_X86_SW_PROTECTED_VM);
+
+        assert_stats(hugetlb_cgroup_path, test_size, 0, 0);
+
+        guest_memfd = vm_create_guest_memfd(vm, param->page_size,
+                                            param->guest_memfd_flags);
+
+        /* fd creation reserves pages. */
+        assert_stats(hugetlb_cgroup_path, test_size, 1, 0);
+
+        fallocate(guest_memfd, FALLOC_FL_KEEP_SIZE, 0, param->page_size);
+
+        assert_stats(hugetlb_cgroup_path, test_size, 1, 1);
+
+        fallocate(guest_memfd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
+                  param->page_size);
+
+        assert_stats(hugetlb_cgroup_path, test_size, 1, 0);
+
+        close(guest_memfd);
+
+        /*
+         * Wait a little for stats to be updated in rcu callback. resv_hugepages
+         * is updated on truncation in ->free_inode, and ->free_inode() happens
+         * in an rcu callback.
+         */
+        usleep(300 * 1000);
+
+        assert_stats_at_baseline(hugetlb_cgroup_path);
+
+        kvm_vm_free(vm);
+}
+
+int main(int argc, char *argv[])
+{
+        char *hugetlb_cgroup_path;
+
+        hugetlb_cgroup_path = getenv("HUGETLB_CGROUP_PATH");
+
+        establish_baseline(hugetlb_cgroup_path);
+
+        test_hugetlb_behavior(hugetlb_cgroup_path, TEST_SZ_2M);
+        test_hugetlb_behavior(hugetlb_cgroup_path, TEST_SZ_1G);
+
+        test_guest_memfd_behavior(hugetlb_cgroup_path, TEST_SZ_2M);
+        test_guest_memfd_behavior(hugetlb_cgroup_path, TEST_SZ_1G);
+}
diff --git a/tools/testing/selftests/kvm/guest_memfd_provide_hugetlb_cgroup_mount.sh b/tools/testing/selftests/kvm/guest_memfd_provide_hugetlb_cgroup_mount.sh
new file mode 100755
index 000000000000..4180d49771c8
--- /dev/null
+++ b/tools/testing/selftests/kvm/guest_memfd_provide_hugetlb_cgroup_mount.sh
@@ -0,0 +1,36 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Wrapper that runs test, providing a hugetlb cgroup mount in environment
+# variable HUGETLB_CGROUP_PATH
+#
+# Example:
+#   ./guest_memfd_provide_hugetlb_cgroup_mount.sh ./guest_memfd_hugetlb_reporting_test
+#
+# Copyright (C) 2025, Google LLC.
+
+script_dir=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
+
+temp_dir=$(mktemp -d /tmp/guest_memfd_hugetlb_reporting_test_XXXXXX)
+if [[ -z "$temp_dir" ]]; then
+        echo "Error: Failed to create temporary directory for hugetlb cgroup mount." >&2
+        exit 1
+fi
+
+delete_temp_dir() {
+        rm -rf $temp_dir
+}
+trap delete_temp_dir EXIT
+
+
+mount -t cgroup -o hugetlb none $temp_dir
+
+
+cleanup() {
+        umount $temp_dir
+        rm -rf $temp_dir
+}
+trap cleanup EXIT
+
+
+HUGETLB_CGROUP_PATH=$temp_dir $@
-- 
2.49.0.1045.g170613ef41-goog