From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yang Shi <shy828301@gmail.com>
Date: Mon, 18 Sep 2023 15:34:07 -0700
Subject: Re: [syzbot] [mm?] kernel BUG in vma_replace_policy
To: Hugh Dickins
Cc: Suren Baghdasaryan, Matthew Wilcox, Michal Hocko, Vlastimil Babka,
 syzbot, akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, syzkaller-bugs@googlegroups.com
Content-Type: text/plain; charset="UTF-8"

On Fri, Sep 15, 2023 at 8:57 PM Hugh
Dickins wrote:
>
> On Fri, 15 Sep 2023, Yang Shi wrote:
> >
> > Hi Suren and Hugh,
> >
> > Thanks for figuring this out. The mbind behavior is a little bit messy
> > and hard to follow. I tried my best to recall all the changes.
>
> Messy and confusing yes; and for every particular behavior, I suspect
> that by now there exists some release which has done it that way.
>
> >
> > IIUC, mbind did break the vma iteration early in the first place, then
> > commit 6f4576e3687b ("mempolicy: apply page table walker on
> > queue_pages_range()") changed the behavior (didn't break the vma
> > iteration early for some cases anymore), but it messed up the return
> > value and caused some test case failures, and also violated the manual.
> > The return value issue was fixed by commit a7f40cfe3b7a ("mm:
> > mempolicy: make mbind() return -EIO when MPOL_MF_STRICT is
> > specified"), and that commit also restored the oldest behavior (break
> > the loop early). But it also breaks the loop early when
> > MPOL_MF_MOVE|MOVEALL is set, where the kernel should actually continue
> > the loop to try to migrate all existing pages per the manual.
>
> Oh, I missed that aspect in my description: yes, I think that's the
> worst of it: MPOL_MF_STRICT alone could break out early because it had
> nothing more to learn by going further, but it was simply a mistake for
> the MOVEs to break out early (and it is arguable what MOVE|STRICT
> should do).
>
> I thought you and I were going to have a debate about this, but we
> appear to be in agreement. And I'm not sure whether I agree with
> myself about whether do_mbind() should apply the mbind_range()s
> when STRICT queue_pages_range() found an unmovable page - there are
> consistency and regression arguments both ways.

Unmovable pages will not be added to the migration list in the first
place. Why waste time trying to migrate them?

>
> (I've been repeatedly puzzled by your comment in queue_folios_pte_range()
>
>                 if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
>                         /* MPOL_MF_STRICT must be specified if we get here */
>                         if (!vma_migratable(vma)) {
>
> Does that comment about MPOL_MF_STRICT actually belong inside the
> !vma_migratable(vma) block? Sometimes I think so, but sometimes I
> remember that the interaction of those flags, and the skipping arranged
> by queue_pages_test_walk(), is subtler than I imagine.)

It is because of the below code snippet from queue_pages_test_walk():

        if (!vma_migratable(vma) &&
            !(flags & MPOL_MF_STRICT))
                return 1;

When queue_pages_test_walk() returns 1, queue_folios_pte_range() will be
skipped for that vma. So if queue_folios_pte_range() sees an
unmigratable vma, it means MPOL_MF_STRICT must be set.
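To make that interplay concrete, below is a condensed, compilable sketch
(my paraphrase, not the kernel source: vma_migratable() is stubbed to a
plain predicate, and only the flag values are copied from
include/uapi/linux/mempolicy.h):

#include <stdbool.h>
#include <stdio.h>

#define MPOL_MF_STRICT   (1 << 0)
#define MPOL_MF_MOVE     (1 << 1)
#define MPOL_MF_MOVE_ALL (1 << 2)

/* Stand-in for the kernel's vma_migratable(vma). */
static bool vma_migratable(bool migratable)
{
        return migratable;
}

/*
 * Mirrors the gate in queue_pages_test_walk(): returning 1 tells
 * walk_page_range() to skip the vma entirely, so the per-PTE callback
 * queue_folios_pte_range() is never invoked on it.
 */
static int test_walk_gate(bool migratable, unsigned int flags)
{
        if (!vma_migratable(migratable) && !(flags & MPOL_MF_STRICT))
                return 1;   /* MOVE-only: unmovable vma silently skipped */
        return 0;           /* walked: an unmovable vma here implies STRICT */
}

int main(void)
{
        /* Unmovable vma, MOVE without STRICT: skipped (prints 1). */
        printf("%d\n", test_walk_gate(false, MPOL_MF_MOVE));
        /* Unmovable vma, STRICT: walked (prints 0). */
        printf("%d\n", test_walk_gate(false, MPOL_MF_STRICT));
        return 0;
}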
> > It sounds like a regression. I will take a look at it.
>
> Thanks! Please do, I don't have the time for it.
>
> >
> > So the logic should conceptually look like:
> >
> >     if (MPOL_MF_MOVE|MOVEALL)
> >         continue;
> >     if (MPOL_MF_STRICT)
> >         break;
> >
> > So it is still possible that some VMAs are not locked if only
> > MPOL_MF_STRICT is set.
>
> Conditionally, I'll agree; but it's too easy for me to agree in the
> course of trying to get an email out, and then on later reflection come
> to disagree. STRICT|MOVE behavior is arguable.

I thought the code should conceptually do:

    if (MPOL_MF_MOVE|MOVEALL)
        scan all vmas
        try to migrate the existing pages
        return success
    else if (MPOL_MF_MOVE* | MPOL_MF_STRICT)
        scan all vmas
        try to migrate the existing pages
        return -EIO if unmovable or migration failed
    else /* MPOL_MF_STRICT alone */
        break early if it meets an unmovable page, and don't call
        mbind_range() at all

So the vma scan will just be skipped when MPOL_MF_STRICT alone is
specified, and mbind_range() won't be called in that case. So Suren's
fix may not be needed.
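Rendering those branches as a tiny decision helper (a sketch of my
reading of the intended semantics, not kernel code; flag values as in
include/uapi/linux/mempolicy.h):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define MPOL_MF_STRICT   (1 << 0)
#define MPOL_MF_MOVE     (1 << 1)
#define MPOL_MF_MOVE_ALL (1 << 2)

/*
 * What mbind() should conceptually return for each flag combination,
 * given whether the walk saw an unmovable page or a migration failure.
 */
static int intended_result(unsigned int flags, bool saw_unmovable)
{
        bool move = flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL);

        if (move && (flags & MPOL_MF_STRICT))
                return saw_unmovable ? -EIO : 0;  /* walk all, then report */
        if (move)
                return 0;                         /* walk all, best effort */
        if (flags & MPOL_MF_STRICT)
                return saw_unmovable ? -EIO : 0;  /* broke early, no mbind_range() */
        return 0;
}

int main(void)
{
        printf("%d\n", intended_result(MPOL_MF_MOVE | MPOL_MF_STRICT, true)); /* -EIO */
        printf("%d\n", intended_result(MPOL_MF_MOVE, true));                  /* 0 */
        return 0;
}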
>
> I think the best I can do is send you (privately) my approx-v5.2 patch
> for this (which I never got time to put into even a Google-internal
> kernel, though an earlier version was there). In part because I did
> more research back then, and its commit message cites several even
> older commits than you cite above, which might help to shed more light
> on the history (or might just be wrong). And in part because it may
> give you some more ideas of what needs doing: notably qp->nr_failed,
> because "man 2 migrate_pages" says "On success migrate_pages() returns
> the number of pages that could not be moved", but we seem to have
> lost sight of that (from which one may conclude that it's not very
> important, but I did find it useful when testing); but of course
> the usual doubts about the right way to count a page when compound.
>
> I'll check how easily that patch applies to a known base such as
> v5.2, maybe trim it to fit better, then send it off to you.

I'm thinking about the below fix (build tested against the latest
mm-unstable only):

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 42b5567e3773..c9b768a042a8 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -426,6 +426,7 @@ struct queue_pages {
     unsigned long start;
     unsigned long end;
     struct vm_area_struct *first;
+    bool has_unmovable;
 };
 
 /*
@@ -446,9 +447,8 @@ static inline bool queue_folio_required(struct folio *folio,
 /*
  * queue_folios_pmd() has three possible return values:
  * 0 - folios are placed on the right node or queued successfully, or
- *     special page is met, i.e. huge zero page.
- * 1 - there is unmovable folio, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
- *     specified.
+ *     special page is met, i.e. huge zero page, or unmovable page is found
+ *     but continue walking (indicated by queue_pages.has_unmovable).
  * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
  *        existing folio was already on a node that does not follow the
  *        policy.
@@ -479,7 +479,7 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
     if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
         if (!vma_migratable(walk->vma) ||
             migrate_folio_add(folio, qp->pagelist, flags)) {
-            ret = 1;
+            qp->has_unmovable = true;
             goto unlock;
         }
     } else
@@ -495,9 +495,8 @@ static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
  *
  * queue_folios_pte_range() has three possible return values:
  * 0 - folios are placed on the right node or queued successfully, or
- *     special page is met, i.e. zero page.
- * 1 - there is unmovable folio, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
- *     specified.
+ *     special page is met, i.e. zero page, or unmovable page is found
+ *     but continue walking (indicated by queue_pages.has_unmovable).
  * -EIO - only MPOL_MF_STRICT was specified and an existing folio was already
  *        on a node that does not follow the policy.
  */
@@ -538,10 +537,13 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
         if (!queue_folio_required(folio, qp))
             continue;
         if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
-            /* MPOL_MF_STRICT must be specified if we get here */
+            /*
+             * MPOL_MF_STRICT must be specified if we get here.
+             * Continue walking vmas due to MPOL_MF_MOVE* flags.
+             */
             if (!vma_migratable(vma)) {
-                has_unmovable = true;
-                break;
+                qp->has_unmovable = true;
+                continue;
             }
 
             /*
@@ -550,16 +552,13 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
              * need migrate other LRU pages.
              */
             if (migrate_folio_add(folio, qp->pagelist, flags))
-                has_unmovable = true;
+                qp->has_unmovable = true;
         } else
             break;
     }
     pte_unmap_unlock(mapped_pte, ptl);
     cond_resched();
 
-    if (has_unmovable)
-        return 1;
-
     return addr != end ? -EIO : 0;
 }
@@ -599,7 +598,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
          * Detecting misplaced folio but allow migrating folios which
          * have been queued.
          */
-        ret = 1;
+        qp->has_unmovable = true;
         goto unlock;
     }
 
@@ -620,7 +619,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
              * Failed to isolate folio but allow migrating pages
              * which have been queued.
              */
-            ret = 1;
+            qp->has_unmovable = true;
     }
 unlock:
     spin_unlock(ptl);
@@ -756,12 +755,15 @@ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
         .start = start,
         .end = end,
         .first = NULL,
+        .has_unmovable = false,
     };
     const struct mm_walk_ops *ops = lock_vma ?
             &queue_pages_lock_vma_walk_ops : &queue_pages_walk_ops;
 
     err = walk_page_range(mm, start, end, ops, &qp);
 
+    if (qp.has_unmovable)
+        err = 1;
     if (!qp.first)
         /* whole range in hole */
         err = -EFAULT;
@@ -1358,7 +1360,7 @@ static long do_mbind(unsigned long start, unsigned long len,
             putback_movable_pages(&pagelist);
     }
 
-    if ((ret > 0) || (nr_failed && (flags & MPOL_MF_STRICT)))
+    if (((ret > 0) || nr_failed) && (flags & MPOL_MF_STRICT))
         err = -EIO;
     } else {
 up_out:

>
> Hugh
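For completeness, a minimal userspace sketch that exercises the
combination discussed above (an assumption-laden example, not part of
the patch: it presumes libnuma's <numaif.h>, linking with -lnuma, and a
NUMA machine where node 0 exists). With MPOL_MF_MOVE | MPOL_MF_STRICT,
the expectation after the fix is that the whole range is walked and
mbind() fails with EIO only if some page could not be moved:

#include <numaif.h>   /* mbind(), MPOL_*; link with -lnuma */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
        size_t len = 4 * 4096;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
                return 1;
        memset(p, 0, len);  /* fault pages in so there is something to move */

        unsigned long nodemask = 1UL << 0;  /* bind to node 0 */
        if (mbind(p, len, MPOL_BIND, &nodemask, 8 * sizeof(nodemask),
                  MPOL_MF_MOVE | MPOL_MF_STRICT) != 0)
                perror("mbind");  /* EIO: some page could not be moved */
        return 0;
}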