Date: Fri, 4 Jun 2021 18:53:22 +0300
From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: Hugh Dickins
Cc: Andrew Morton, Matthew Wilcox, "Kirill A. Shutemov", Yang Shi,
    Wang Yugui, Naoya Horiguchi, Alistair Popple, Ralph Campbell,
    Zi Yan, Miaohe Lin, Minchan Kim, Jue Wang, Peter Xu, Jan Kara,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 3/7] mm/thp: fix vma_address() if virtual address below file offset
Message-ID: <20210604155322.vl6wcen4fmngg27r@box.shutemov.name>

On Thu, Jun 03, 2021 at 02:40:30PM -0700, Hugh Dickins wrote:
> Running certain tests with a DEBUG_VM kernel would crash within hours,
> on the total_mapcount BUG() in split_huge_page_to_list(), while trying
> to free up some memory by punching a hole in a shmem huge page: split's
> try_to_unmap() was unable to find all the mappings of the page (which,
> on a !DEBUG_VM kernel, would then keep the huge page pinned in memory).
>
> When that BUG() was changed to a WARN(), it would later crash on the
> VM_BUG_ON_VMA(end < vma->vm_start || start >= vma->vm_end, vma) in
> mm/internal.h:vma_address(), used by rmap_walk_file() for try_to_unmap().
>
> vma_address() is usually correct, but there's a wraparound case when the
> vm_start address is unusually low, but vm_pgoff not so low: vma_address()
> chooses max(start, vma->vm_start), but that decides on the wrong address,
> because start has become almost ULONG_MAX.
>
> Rewrite vma_address() to be more careful about vm_pgoff; move the
> VM_BUG_ON_VMA() out of it, returning -EFAULT for errors, so that it can
> be safely used from page_mapped_in_vma() and page_address_in_vma() too.
>
> Add vma_address_end() to apply similar care to end address calculation,
> in page_vma_mapped_walk() and page_mkclean_one() and try_to_unmap_one();
> though it raises a question of whether callers would do better to supply
> pvmw->end to page_vma_mapped_walk() - I chose not, for a smaller patch.
>
> An irritation is that their apparent generality breaks down on KSM pages,
> which cannot be located by the page->index that page_to_pgoff() uses: as
> 4b0ece6fa016 ("mm: migrate: fix remove_migration_pte() for ksm pages")
> once discovered. I dithered over the best thing to do about that, and
> have ended up with a VM_BUG_ON_PAGE(PageKsm) in both vma_address() and
> vma_address_end(); though the only place in danger of using it on them
> was try_to_unmap_one().
>
> Sidenote: vma_address() and vma_address_end() now use compound_nr() on
> a head page, instead of thp_size(): to make the right calculation on a
> hugetlbfs page, whether or not THPs are configured. try_to_unmap() is
> used on hugetlbfs pages, but perhaps the wrong calculation never mattered.
>
> Fixes: a8fa41ad2f6f ("mm, rmap: check all VMAs that PTE-mapped THP can be part of")
> Signed-off-by: Hugh Dickins
> Cc:
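
For anyone reading along, a minimal userspace sketch of the wraparound
described above and of the kind of careful calculation the patch switches
to. This is not the kernel code: "struct vma", address_old(),
address_careful() and nr_pages are simplified stand-ins, and the real
rewrite in mm/internal.h is more thorough (it also refuses KSM pages and
has vma_address_end() for the end address).

/*
 * NOT the kernel code: a standalone illustration with simplified types.
 * Build with: cc -Wall sketch.c && ./a.out
 */
#include <errno.h>
#include <stdio.h>

#define PAGE_SHIFT 12

struct vma {
        unsigned long vm_start;         /* first address mapped by the vma */
        unsigned long vm_end;           /* one past the last mapped address */
        unsigned long vm_pgoff;         /* file offset of vm_start, in pages */
};

/* Old shape: do the subtraction first, clamp with max() afterwards. */
static unsigned long address_old(unsigned long pgoff, const struct vma *vma)
{
        unsigned long start;

        /* When pgoff < vm_pgoff this subtraction wraps towards ULONG_MAX... */
        start = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
        /* ...and max(start, vm_start) then picks the bogus wrapped value. */
        return start > vma->vm_start ? start : vma->vm_start;
}

/*
 * Careful shape: compare pgoff with vm_pgoff before subtracting, and
 * return -EFAULT instead of relying on a VM_BUG_ON in the helper.
 * nr_pages plays the role of compound_nr() on a head page.
 */
static unsigned long address_careful(unsigned long pgoff,
                                     unsigned long nr_pages,
                                     const struct vma *vma)
{
        unsigned long address;

        if (pgoff >= vma->vm_pgoff) {
                address = vma->vm_start +
                          ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
                if (address < vma->vm_start || address >= vma->vm_end)
                        address = (unsigned long)-EFAULT;
        } else if (pgoff + nr_pages - 1 >= vma->vm_pgoff) {
                /* Head page starts before the vma, but its tail reaches in. */
                address = vma->vm_start;
        } else {
                address = (unsigned long)-EFAULT;
        }
        return address;
}

int main(void)
{
        /* Unusually low vm_start, vm_pgoff not so low: the problem case. */
        const struct vma vma = {
                .vm_start = 0x1000,
                .vm_end   = 0x400000,
                .vm_pgoff = 8,
        };
        unsigned long pgoff = 4;        /* head page sits before vm_pgoff */
        unsigned long nr_pages = 512;   /* e.g. a 2MB THP of 4kB subpages */

        printf("old:     %#lx\n", address_old(pgoff, &vma));
        printf("careful: %#lx\n", address_careful(pgoff, nr_pages, &vma));
        return 0;
}

With vm_start = 0x1000, vm_pgoff = 8 and pgoff = 4, the old calculation
wraps to an address near ULONG_MAX, which max() then happily returns;
the careful variant falls back to vm_start, because the head page's 512
subpages still overlap the vma, and otherwise reports -EFAULT.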
Acked-by: Kirill A. Shutemov

-- 
 Kirill A. Shutemov