From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 17 Aug 2021 01:28:56 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Hugh Dickins, Shakeel Butt, "Kirill A. Shutemov", Yang Shi,
    Miaohe Lin, Mike Kravetz, Michal Hocko, Rik van Riel,
    Matthew Wilcox, Chris Wilson,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 9/9] shmem: shmem_writepage() split unlikely i915 THP
In-Reply-To:
Message-ID:
References:
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

drivers/gpu/drm/i915/gem/i915_gem_shmem.c contains a shmem_writeback()
which calls shmem_writepage() from a shrinker: that usually works well
enough; but if /sys/kernel/mm/transparent_hugepage/shmem_enabled has
been set to "always" (intended to be usable) or "force" (forces huge
everywhere for easy testing), shmem_writepage() is surprised to be
called with a huge page, and crashes on the VM_BUG_ON_PAGE(PageCompound)
(I did not find out where the crash happens when CONFIG_DEBUG_VM is
off).

LRU page reclaim always splits the shmem huge page first: I'd prefer
not to demand that of i915, so check and split compound in
shmem_writepage().
Patch history: when first sent last year
http://lkml.kernel.org/r/alpine.LSU.2.11.2008301401390.5954@eggly.anvils
https://lore.kernel.org/linux-mm/20200919042009.bomzxmrg7%25akpm@linux-foundation.org/
Matthew Wilcox noticed that tail pages were wrongly left clean. This
version brackets the split with Set and Clear PageDirty as he suggested:
which works very well, even if it falls short of our aspirations. And
recently I realized that the crash is not limited to the testing option
"force", but affects "always" too: which is more important to fix.

Fixes: 2d6692e642e7 ("drm/i915: Start writeback from the shrinker")
Signed-off-by: Hugh Dickins
Reviewed-by: Shakeel Butt
Acked-by: Yang Shi
---
 mm/shmem.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index b60a7abff27d..a1ba03f39eaa 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1349,7 +1349,19 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 	swp_entry_t swap;
 	pgoff_t index;
 
-	VM_BUG_ON_PAGE(PageCompound(page), page);
+	/*
+	 * If /sys/kernel/mm/transparent_hugepage/shmem_enabled is "always" or
+	 * "force", drivers/gpu/drm/i915/gem/i915_gem_shmem.c gets huge pages,
+	 * and its shmem_writeback() needs them to be split when swapping.
+	 */
+	if (PageTransCompound(page)) {
+		/* Ensure the subpages are still dirty */
+		SetPageDirty(page);
+		if (split_huge_page(page) < 0)
+			goto redirty;
+		ClearPageDirty(page);
+	}
+
 	BUG_ON(!PageLocked(page));
 	mapping = page->mapping;
 	index = page->index;
-- 
2.26.2
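For anyone wanting to check the conditions described above: the shmem THP
policy is exposed through sysfs. A minimal inspection sketch follows; the
knob path and its value list are the standard ones from the kernel's
transhuge documentation, while the reproduction step is only an outline
(it needs root, an i915 system, and memory pressure, none of which this
sketch provides):

```shell
#!/bin/sh
# Report the current shmem THP policy; the active value is bracketed,
# e.g. "always within_size advise [never] deny force".
KNOB=/sys/kernel/mm/transparent_hugepage/shmem_enabled
if [ -r "$KNOB" ]; then
	cat "$KNOB"
else
	echo "no shmem THP knob (CONFIG_TRANSPARENT_HUGEPAGE unset?)"
fi
# To exercise the path fixed by this patch (testing only, as root):
#   echo force > "$KNOB"
# then load i915 and apply memory pressure so its shrinker calls
# shmem_writeback() on what are now huge pages.
```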