Date: Tue, 5 Nov 2024 12:10:54 -0500
From: Peter Xu <peterx@redhat.com>
To: Ackerley Tng
Cc: muchun.song@linux.dev, akpm@linux-foundation.org, rientjes@google.com,
	fvdl@google.com, jthoughton@google.com, david@redhat.com,
	isaku.yamahata@intel.com, zhiquan1.li@intel.com, fan.du@intel.com,
	jun.miao@intel.com, tabba@google.com, quic_eberman@quicinc.com,
	roypat@amazon.co.uk, jgg@nvidia.com, jhubbard@nvidia.com,
	seanjc@google.com, pbonzini@redhat.com, erdemaktas@google.com,
	vannapurve@google.com, pgonda@google.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH 1/3] mm: hugetlb: Simplify logic in dequeue_hugetlb_folio_vma()
On Fri, Oct 11, 2024 at 11:22:36PM +0000, Ackerley Tng wrote:
> Replace arguments avoid_reserve and chg in dequeue_hugetlb_folio_vma()
> so dequeue_hugetlb_folio_vma() is more understandable.
>
> The new argument, use_hstate_resv, indicates whether the folio to be
> dequeued should be taken from reservations in hstate.
>
> If use_hstate_resv is true, the folio to be dequeued should be taken
> from reservations in hstate and hence h->resv_huge_pages is
> decremented, and the folio is marked so that the reservation is
> restored.
>
> If use_hstate_resv is false, then a folio needs to be taken from the
> pool and hence there must exist available_huge_pages(h), failing
> which, goto err.
>
> The bool use_hstate_resv can be reused within
> dequeue_hugetlb_folio_vma()'s caller, alloc_hugetlb_folio().
>
> No functional changes are intended.
>
> As proof, the original two if conditions
>
>     !vma_has_reserves(vma, chg) && !available_huge_pages(h)
>
> and
>
>     avoid_reserve && !available_huge_pages(h)
>
> can be combined into
>
>     (avoid_reserve || !vma_has_reserves(vma, chg))
>         && !available_huge_pages(h).
>
> Applying de Morgan's theorem on
>
>     avoid_reserve || !vma_has_reserves(vma, chg)
>
> yields
>
>     !avoid_reserve && vma_has_reserves(vma, chg),
>
> hence the simplification is correct.

Some spacing is definitely good.. as Sean pointed out.
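For what it's worth, the claimed equivalence is easy to sanity-check with a
tiny standalone program (plain C, not kernel code; vma_has_reserves() and
available_huge_pages() are modelled here as plain booleans, and the snippet
only enumerates the truth table of the two forms):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	for (int avoid_reserve = 0; avoid_reserve <= 1; avoid_reserve++) {
		for (int has_resv = 0; has_resv <= 1; has_resv++) {
			for (int avail = 0; avail <= 1; avail++) {
				/* original: two separate "goto err" checks */
				bool err_old = (!has_resv && !avail) ||
					       (avoid_reserve && !avail);

				/* patched: use_hstate_resv = !avoid_reserve && has_resv */
				bool use_hstate_resv = !avoid_reserve && has_resv;
				bool err_new = !use_hstate_resv && !avail;

				assert(err_old == err_new);
			}
		}
	}
	printf("equivalence holds for all combinations\n");
	return 0;
}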
>
> Signed-off-by: Ackerley Tng
> ---
>  mm/hugetlb.c | 33 +++++++++++----------------------
>  1 file changed, 11 insertions(+), 22 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 190fa05635f4..73165c670739 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1281,8 +1281,9 @@ static bool vma_has_reserves(struct vm_area_struct *vma, long chg)
>  	}
>  
>  	/*
> -	 * Only the process that called mmap() has reserves for
> -	 * private mappings.
> +	 * Only the process that called mmap() has reserves for private
> +	 * mappings. A child process with MAP_PRIVATE mappings created by their
> +	 * parent have no page reserves.
>  	 */
>  	if (is_vma_resv_set(vma, HPAGE_RESV_OWNER)) {
>  		/*
> @@ -1394,8 +1395,7 @@ static unsigned long available_huge_pages(struct hstate *h)
>  
>  static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
>  				struct vm_area_struct *vma,
> -				unsigned long address, int avoid_reserve,
> -				long chg)
> +				unsigned long address, bool use_hstate_resv)

Here "avoid_reserve" + "chg" is indeed confusing, especially with the prior
"if (avoid_reserve) gbl_chg = 1;".

The new flag does make dequeue_hugetlb_folio_vma() alone slightly easier to
understand.

I still feel like there is more that can be cleaned up here even after your
patches 2-3, but I suppose this can be seen as a small step forward,
considering that a single combined patch would be harder to review.

Feel free to take:

Acked-by: Peter Xu

>  {
>  	struct folio *folio = NULL;
>  	struct mempolicy *mpol;
> @@ -1403,16 +1403,7 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
>  	nodemask_t *nodemask;
>  	int nid;
>  
> -	/*
> -	 * A child process with MAP_PRIVATE mappings created by their parent
> -	 * have no page reserves. This check ensures that reservations are
> -	 * not "stolen". The child may still get SIGKILLed
> -	 */
> -	if (!vma_has_reserves(vma, chg) && !available_huge_pages(h))
> -		goto err;
> -
> -	/* If reserves cannot be used, ensure enough pages are in the pool */
> -	if (avoid_reserve && !available_huge_pages(h))
> +	if (!use_hstate_resv && !available_huge_pages(h))
>  		goto err;
>  
>  	gfp_mask = htlb_alloc_mask(h);
> @@ -1430,7 +1421,7 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
>  	folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
>  						nid, nodemask);
>  
> -	if (folio && !avoid_reserve && vma_has_reserves(vma, chg)) {
> +	if (folio && use_hstate_resv) {
>  		folio_set_hugetlb_restore_reserve(folio);
>  		h->resv_huge_pages--;
>  	}
> @@ -2973,6 +2964,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>  	struct mem_cgroup *memcg;
>  	bool deferred_reserve;
>  	gfp_t gfp = htlb_alloc_mask(h) | __GFP_RETRY_MAYFAIL;
> +	bool use_hstate_resv;
>  
>  	memcg = get_mem_cgroup_from_current();
>  	memcg_charge_ret = mem_cgroup_hugetlb_try_charge(memcg, gfp, nr_pages);
> @@ -3033,20 +3025,17 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>  	if (ret)
>  		goto out_uncharge_cgroup_reservation;
>  
> +	use_hstate_resv = !avoid_reserve && vma_has_reserves(vma, gbl_chg);
> +
>  	spin_lock_irq(&hugetlb_lock);
> -	/*
> -	 * glb_chg is passed to indicate whether or not a page must be taken
> -	 * from the global free pool (global change). gbl_chg == 0 indicates
> -	 * a reservation exists for the allocation.
> -	 */
> -	folio = dequeue_hugetlb_folio_vma(h, vma, addr, avoid_reserve, gbl_chg);
> +	folio = dequeue_hugetlb_folio_vma(h, vma, addr, use_hstate_resv);
>  	if (!folio) {
>  		spin_unlock_irq(&hugetlb_lock);
>  		folio = alloc_buddy_hugetlb_folio_with_mpol(h, vma, addr);
>  		if (!folio)
>  			goto out_uncharge_cgroup;
>  		spin_lock_irq(&hugetlb_lock);
> -		if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
> +		if (use_hstate_resv) {
>  			folio_set_hugetlb_restore_reserve(folio);
>  			h->resv_huge_pages--;
>  		}
> -- 
> 2.47.0.rc1.288.g06298d1525-goog
> 

-- 
Peter Xu