Date: Fri, 21 Oct 2022 16:36:40 +0000
In-Reply-To: <20221021163703.3218176-1-jthoughton@google.com>
References: <20221021163703.3218176-1-jthoughton@google.com>
Message-ID: <20221021163703.3218176-25-jthoughton@google.com>
Subject: [RFC PATCH v2 24/47] hugetlb: update page_vma_mapped to do
 high-granularity walks
From: James Houghton
To: Mike Kravetz, Muchun Song, Peter Xu
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
 "Zach O'Keefe", Manish Mishra, Naoya Horiguchi,
 "Dr . David Alan Gilbert", "Matthew Wilcox (Oracle)", Vlastimil Babka,
 Baolin Wang, Miaohe Lin, Yang Shi, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, James Houghton
Content-Type: text/plain; charset="UTF-8"

This updates the HugeTLB logic to look a lot more like the PTE-mapped
THP logic. When a user calls us in a loop, we will update pvmw->address
to walk to each page table entry that could possibly map the hugepage
containing pvmw->pfn. This makes use of the new pte_order so callers
know what size PTE they're getting.
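For reference, callers are expected to consume this the same way they
already consume PTE-mapped THPs. The sketch below is illustrative only
and is not part of the patch: folio, vma, and address stand in for
whatever state the rmap walker already has, and pte_order comes from an
earlier patch in this series.

	struct page_vma_mapped_walk pvmw = {
		.pfn      = folio_pfn(folio),
		.nr_pages = folio_nr_pages(folio),
		.vma      = vma,
		.address  = address,
	};

	while (page_vma_mapped_walk(&pvmw)) {
		/*
		 * Each iteration returns one mapping. With this patch,
		 * pvmw.address is advanced through every page table entry
		 * that could map the HugeTLB page, and pvmw.pte_order gives
		 * the size of the entry that was found.
		 */
		unsigned long entry_size = PAGE_SIZE << pvmw.pte_order;

		/* unmap/migrate/inspect [pvmw.address, pvmw.address + entry_size) */
	}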
Signed-off-by: James Houghton
---
 include/linux/rmap.h |  4 +++
 mm/page_vma_mapped.c | 59 ++++++++++++++++++++++++++++++++++++--------
 mm/rmap.c            | 48 +++++++++++++++++++++--------------
 3 files changed, 83 insertions(+), 28 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index e0557ede2951..d7d2d9f65a01 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -13,6 +13,7 @@
 #include <linux/highmem.h>
 #include <linux/pagemap.h>
 #include <linux/memremap.h>
+#include <linux/hugetlb.h>
 
 /*
  * The anon_vma heads a list of private "related" vmas, to scan if
@@ -409,6 +410,9 @@ static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 		pte_unmap(pvmw->pte);
 	if (pvmw->ptl)
 		spin_unlock(pvmw->ptl);
+	if (pvmw->pte && is_vm_hugetlb_page(pvmw->vma) &&
+			hugetlb_hgm_enabled(pvmw->vma))
+		hugetlb_vma_unlock_read(pvmw->vma);
 }
 
 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 395ca4e21c56..1994b3f9a4c2 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -133,7 +133,8 @@ static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
  *
  * Returns true if the page is mapped in the vma. @pvmw->pmd and @pvmw->pte point
  * to relevant page table entries. @pvmw->ptl is locked. @pvmw->address is
- * adjusted if needed (for PTE-mapped THPs).
+ * adjusted if needed (for PTE-mapped THPs and high-granularity--mapped HugeTLB
+ * pages).
  *
  * If @pvmw->pmd is set but @pvmw->pte is not, you have found PMD-mapped page
  * (usually THP). For PTE-mapped THP, you should run page_vma_mapped_walk() in
@@ -166,19 +167,57 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	if (unlikely(is_vm_hugetlb_page(vma))) {
 		struct hstate *hstate = hstate_vma(vma);
 		unsigned long size = huge_page_size(hstate);
-		/* The only possible mapping was handled on last iteration */
-		if (pvmw->pte)
-			return not_found(pvmw);
+		struct hugetlb_pte hpte;
+		pte_t *pte;
+		pte_t pteval;
+
+		end = (pvmw->address & huge_page_mask(hstate)) +
+			huge_page_size(hstate);
 
 		/* when pud is not present, pte will be NULL */
-		pvmw->pte = huge_pte_offset(mm, pvmw->address, size);
-		if (!pvmw->pte)
+		pte = huge_pte_offset(mm, pvmw->address, size);
+		if (!pte)
 			return false;
 
-		pvmw->pte_order = huge_page_order(hstate);
-		pvmw->ptl = huge_pte_lock(hstate, mm, pvmw->pte);
-		if (!check_pte(pvmw))
-			return not_found(pvmw);
+		do {
+			hugetlb_pte_populate(&hpte, pte, huge_page_shift(hstate),
+					hpage_size_to_level(size));
+
+			/*
+			 * Do a high granularity page table walk. The vma lock
+			 * is grabbed to prevent the page table from being
+			 * collapsed mid-walk. It is dropped in
+			 * page_vma_mapped_walk_done().
+			 */
+			if (pvmw->pte) {
+				if (pvmw->ptl)
+					spin_unlock(pvmw->ptl);
+				pvmw->ptl = NULL;
+				pvmw->address += PAGE_SIZE << pvmw->pte_order;
+				if (pvmw->address >= end)
+					return not_found(pvmw);
+			} else if (hugetlb_hgm_enabled(vma))
+				/* Only grab the lock once. */
+				hugetlb_vma_lock_read(vma);
+
+retry_walk:
+			hugetlb_hgm_walk(mm, vma, &hpte, pvmw->address,
+					PAGE_SIZE, /*stop_at_none=*/true);
+
+			pvmw->pte = hpte.ptep;
+			pvmw->pte_order = hpte.shift - PAGE_SHIFT;
+			pvmw->ptl = hugetlb_pte_lock(mm, &hpte);
+			pteval = huge_ptep_get(hpte.ptep);
+			if (pte_present(pteval) && !hugetlb_pte_present_leaf(
+						&hpte, pteval)) {
+				/*
+				 * Someone split from under us, so keep
+				 * walking.
+				 */
+				spin_unlock(pvmw->ptl);
+				goto retry_walk;
+			}
+		} while (!check_pte(pvmw));
+
 		return true;
 	}
 
diff --git a/mm/rmap.c b/mm/rmap.c
index 527463c1e936..a8359584467e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1552,17 +1552,23 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			flush_cache_range(vma, range.start, range.end);
 
 			/*
-			 * To call huge_pmd_unshare, i_mmap_rwsem must be
-			 * held in write mode. Caller needs to explicitly
-			 * do this outside rmap routines.
-			 *
-			 * We also must hold hugetlb vma_lock in write mode.
-			 * Lock order dictates acquiring vma_lock BEFORE
-			 * i_mmap_rwsem. We can only try lock here and fail
-			 * if unsuccessful.
+			 * If HGM is enabled, we have already grabbed the VMA
+			 * lock for reading, and we cannot safely release it.
+			 * Because HGM-enabled VMAs have already unshared all
+			 * PMDs, we can safely ignore PMD unsharing here.
 			 */
-			if (!anon) {
+			if (!anon && !hugetlb_hgm_enabled(vma)) {
 				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
+				/*
+				 * To call huge_pmd_unshare, i_mmap_rwsem must
+				 * be held in write mode. Caller needs to
+				 * explicitly do this outside rmap routines.
+				 *
+				 * We also must hold hugetlb vma_lock in write
+				 * mode. Lock order dictates acquiring vma_lock
+				 * BEFORE i_mmap_rwsem. We can only try lock
+				 * here and fail if unsuccessful.
+				 */
 				if (!hugetlb_vma_trylock_write(vma)) {
 					page_vma_mapped_walk_done(&pvmw);
 					ret = false;
@@ -1946,17 +1952,23 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			flush_cache_range(vma, range.start, range.end);
 
 			/*
-			 * To call huge_pmd_unshare, i_mmap_rwsem must be
-			 * held in write mode. Caller needs to explicitly
-			 * do this outside rmap routines.
-			 *
-			 * We also must hold hugetlb vma_lock in write mode.
-			 * Lock order dictates acquiring vma_lock BEFORE
-			 * i_mmap_rwsem. We can only try lock here and
-			 * fail if unsuccessful.
+			 * If HGM is enabled, we have already grabbed the VMA
+			 * lock for reading, and we cannot safely release it.
+			 * Because HGM-enabled VMAs have already unshared all
+			 * PMDs, we can safely ignore PMD unsharing here.
 			 */
-			if (!anon) {
+			if (!anon && !hugetlb_hgm_enabled(vma)) {
 				VM_BUG_ON(!(flags & TTU_RMAP_LOCKED));
+				/*
+				 * To call huge_pmd_unshare, i_mmap_rwsem must
+				 * be held in write mode. Caller needs to
+				 * explicitly do this outside rmap routines.
+				 *
+				 * We also must hold hugetlb vma_lock in write
+				 * mode. Lock order dictates acquiring vma_lock
+				 * BEFORE i_mmap_rwsem. We can only try lock
+				 * here and fail if unsuccessful.
+				 */
 				if (!hugetlb_vma_trylock_write(vma)) {
 					page_vma_mapped_walk_done(&pvmw);
 					ret = false;
-- 
2.38.0.135.g90850a2211-goog