From: "Liam R. Howlett"
To: maple-tree@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, Song Liu, Davidlohr Bueso, "Paul E. McKenney", Matthew Wilcox, Jerome Glisse, David Rientjes, Axel Rasmussen, Suren Baghdasaryan, Vlastimil Babka, Rik van Riel, Peter Zijlstra
Subject: [PATCH v2 58/70] mm/mempolicy: Use maple tree iterators instead of vma linked list
Date: Tue, 12 Jan 2021 11:12:28 -0500
Message-Id: <20210112161240.2024684-59-Liam.Howlett@Oracle.com>
In-Reply-To: <20210112161240.2024684-1-Liam.Howlett@Oracle.com>
References: <20210112161240.2024684-1-Liam.Howlett@Oracle.com>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0

Signed-off-by: Liam R. Howlett
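The mechanical pattern applied throughout this patch is the replacement of the
mm->mmap / vm_next linked-list walk with a maple tree state plus range iterator.
The following minimal sketch of that pattern is not part of the patch itself; it
assumes the MA_STATE() and mas_for_each() helpers introduced earlier in this
series, and visit_all_vmas() is a hypothetical helper used purely for
illustration (the locking mirrors the mpol_rebind_mm() hunk below):

#include <linux/maple_tree.h>
#include <linux/mm.h>

/* Hypothetical helper: walk every VMA of @mm in address order. */
static void visit_all_vmas(struct mm_struct *mm)
{
	struct vm_area_struct *vma;
	MA_STATE(mas, &mm->mm_mt, 0, 0);	/* iteration starts at index 0 */

	mmap_write_lock(mm);
	mas_for_each(&mas, vma, ULONG_MAX)	/* stop after the last VMA */
		pr_debug("vma %lx-%lx\n", vma->vm_start, vma->vm_end);
	mmap_write_unlock(mm);
}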
---
 mm/mempolicy.c | 33 ++++++++++++++++++++-------------
 1 file changed, 20 insertions(+), 13 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 3ca4898f3f249..e0b8e658f18eb 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -404,9 +404,10 @@ void mpol_rebind_task(struct task_struct *tsk, const nodemask_t *new)
 void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
 {
 	struct vm_area_struct *vma;
+	MA_STATE(mas, &mm->mm_mt, 0, 0);
 
 	mmap_write_lock(mm);
-	for (vma = mm->mmap; vma; vma = vma->vm_next)
+	mas_for_each(&mas, vma, ULONG_MAX)
 		mpol_rebind_policy(vma->vm_policy, new);
 	mmap_write_unlock(mm);
 }
@@ -671,7 +672,7 @@ static unsigned long change_prot_numa(struct vm_area_struct *vma,
 static int queue_pages_test_walk(unsigned long start, unsigned long end,
 				struct mm_walk *walk)
 {
-	struct vm_area_struct *vma = walk->vma;
+	struct vm_area_struct *next, *vma = walk->vma;
 	struct queue_pages *qp = walk->private;
 	unsigned long endvma = vma->vm_end;
 	unsigned long flags = qp->flags;
@@ -686,9 +687,10 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
 			/* hole at head side of range */
 			return -EFAULT;
 	}
+	next = vma_next(vma->vm_mm, vma);
 	if (!(flags & MPOL_MF_DISCONTIG_OK) &&
 		((vma->vm_end < qp->end) &&
-		(!vma->vm_next || vma->vm_end < vma->vm_next->vm_start)))
+		(!next || vma->vm_end < next->vm_start)))
 		/* hole at middle or tail of range */
 		return -EFAULT;
 
@@ -809,21 +811,22 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
 	pgoff_t pgoff;
 	unsigned long vmstart;
 	unsigned long vmend;
+	MA_STATE(mas, &mm->mm_mt, start, start);
 
-	vma = find_vma(mm, start);
+	vma = mas_find(&mas, ULONG_MAX);
 	VM_BUG_ON(!vma);
 
-	prev = vma->vm_prev;
+	prev = vma_mas_prev(&mas);
 	if (start > vma->vm_start)
 		prev = vma;
 
-	for (; vma && vma->vm_start < end; prev = vma, vma = next) {
-		next = vma->vm_next;
+	mas_for_each(&mas, vma, end - 1) {
+		next = vma_next(mm, vma);
 		vmstart = max(start, vma->vm_start);
 		vmend   = min(end, vma->vm_end);
 
 		if (mpol_equal(vma_policy(vma), new_pol))
-			continue;
+			goto next;
 
 		pgoff = vma->vm_pgoff +
			((vmstart - vma->vm_start) >> PAGE_SHIFT);
@@ -832,7 +835,7 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
 				 new_pol, vma->vm_userfaultfd_ctx);
 		if (prev) {
 			vma = prev;
-			next = vma->vm_next;
+			next = vma_next(mm, vma);
 			if (mpol_equal(vma_policy(vma), new_pol))
 				continue;
 			/* vma_merge() joined vma && vma->next, case 8 */
@@ -847,11 +850,14 @@ static int mbind_range(struct mm_struct *mm, unsigned long start,
 			err = split_vma(vma->vm_mm, vma, vmend, 0);
 			if (err)
 				goto out;
+			mas_pause(&mas);
 		}
 replace:
 		err = vma_replace_policy(vma, new_pol);
 		if (err)
 			goto out;
+next:
+		prev = vma;
 	}
 
 out:
@@ -1072,6 +1078,7 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest,
 			   int flags)
 {
 	nodemask_t nmask;
+	struct vm_area_struct *vma;
 	LIST_HEAD(pagelist);
 	int err = 0;
 	struct migration_target_control mtc = {
@@ -1087,8 +1094,9 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest,
 	 * need migration.  Between passing in the full user address
 	 * space range and MPOL_MF_DISCONTIG_OK, this call can not fail.
 	 */
+	vma = find_vma(mm, 0);
 	VM_BUG_ON(!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)));
-	queue_pages_range(mm, mm->mmap->vm_start, mm->task_size, &nmask,
+	queue_pages_range(mm, vma->vm_start, mm->task_size, &nmask,
 			flags | MPOL_MF_DISCONTIG_OK, &pagelist);
 
 	if (!list_empty(&pagelist)) {
@@ -1217,13 +1225,12 @@ static struct page *new_page(struct page *page, unsigned long start)
 {
 	struct vm_area_struct *vma;
 	unsigned long address;
+	MA_STATE(mas, &current->mm->mm_mt, start, start);
 
-	vma = find_vma(current->mm, start);
-	while (vma) {
+	mas_for_each(&mas, vma, ULONG_MAX) {
 		address = page_address_in_vma(page, vma);
 		if (address != -EFAULT)
 			break;
-		vma = vma->vm_next;
 	}
 
 	if (PageHuge(page)) {
-- 
2.28.0
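A note on the mbind_range() hunk above: when the loop body rewrites the VMA
layout (split_vma()), the maple state is paused so the next mas_for_each() step
re-walks the tree from the current index instead of trusting stale iterator
state. The following rough sketch of that idiom is illustrative only, under the
same maple tree API assumption; the final comment stands in for the per-VMA
policy work done in the real function:

	MA_STATE(mas, &mm->mm_mt, start, start);
	struct vm_area_struct *vma;

	mas_for_each(&mas, vma, end - 1) {
		if (vma->vm_end > end) {
			/* splitting rewrites the tree under the iterator... */
			if (split_vma(vma->vm_mm, vma, end, 0))
				break;
			/* ...so re-sync the maple state before continuing */
			mas_pause(&mas);
		}
		/* apply the per-VMA operation here */
	}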