From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, "Kirill A. Shutemov", William Kucharski
Subject: [PATCH] mm/filemap: Fix filemap_map_pages for THP
Date: Fri, 11 Sep 2020 02:25:32 +0100
Message-Id: <20200911012532.24761-1-willy@infradead.org>

We dereference page->mapping and page->index directly after calling
find_subpage(), but these fields are not valid for tail pages.  While
commit 4101196b19d7 ("mm: page cache: store only head pages in i_pages")
introduced the call to find_subpage(), the problem existed before that;
I'm going to suggest it goes all the way back to when THPs first existed.

The user-visible effects of this are almost negligible.  To hit it, you
have to mmap a tmpfs file at an unaligned address, and even then it's
only a disabled optimisation, so page faults happen more frequently than
they otherwise would.

Fix this by keeping both head and page pointers and checking the
appropriate one.  We could use page_mapping() and page_to_index(), but
that's higher overhead.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index cb5f49fe8029..c774c8154107 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2786,42 +2786,42 @@ void filemap_map_pages(struct vm_fault *vmf,
 	pgoff_t last_pgoff = start_pgoff;
 	unsigned long max_idx;
 	XA_STATE(xas, &mapping->i_pages, start_pgoff);
-	struct page *page;
+	struct page *head, *page;
 	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
 
 	rcu_read_lock();
-	xas_for_each(&xas, page, end_pgoff) {
-		if (xas_retry(&xas, page))
+	xas_for_each(&xas, head, end_pgoff) {
+		if (xas_retry(&xas, head))
 			continue;
-		if (xa_is_value(page))
+		if (xa_is_value(head))
 			goto next;
 
 		/*
 		 * Check for a locked page first, as a speculative
 		 * reference may adversely influence page migration.
 		 */
-		if (PageLocked(page))
+		if (PageLocked(head))
 			goto next;
-		if (!page_cache_get_speculative(page))
+		if (!page_cache_get_speculative(head))
 			goto next;
 
 		/* Has the page moved or been split? */
-		if (unlikely(page != xas_reload(&xas)))
+		if (unlikely(head != xas_reload(&xas)))
 			goto skip;
-		page = find_subpage(page, xas.xa_index);
+		page = find_subpage(head, xas.xa_index);
 
-		if (!PageUptodate(page) ||
+		if (!PageUptodate(head) ||
 				PageReadahead(page) ||
 				PageHWPoison(page))
 			goto skip;
-		if (!trylock_page(page))
+		if (!trylock_page(head))
 			goto skip;
 
-		if (page->mapping != mapping || !PageUptodate(page))
+		if (head->mapping != mapping || !PageUptodate(head))
 			goto unlock;
 
 		max_idx = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE);
-		if (page->index >= max_idx)
+		if (xas.xa_index >= max_idx)
 			goto unlock;
 
 		if (mmap_miss > 0)
@@ -2833,12 +2833,12 @@ void filemap_map_pages(struct vm_fault *vmf,
 		last_pgoff = xas.xa_index;
 		if (alloc_set_pte(vmf, page))
 			goto unlock;
-		unlock_page(page);
+		unlock_page(head);
 		goto next;
 unlock:
-		unlock_page(page);
+		unlock_page(head);
 skip:
-		put_page(page);
+		put_page(head);
 next:
 		/* Huge page is mapped? No need to proceed. */
 		if (pmd_trans_huge(*vmf->pmd))
-- 
2.28.0