Date: Mon, 25 Nov 2019 12:36:11 +0300
From: "Kirill A. Shutemov"
To: Yang Shi
Cc: hughd@google.com, kirill.shutemov@linux.intel.com, aarcange@redhat.com,
	akpm@linux-foundation.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH] mm: shmem: allow split THP when truncating THP partially
Message-ID: <20191125093611.hlamtyo4hvefwibi@box>
References: <1574471132-55639-1-git-send-email-yang.shi@linux.alibaba.com>
In-Reply-To: <1574471132-55639-1-git-send-email-yang.shi@linux.alibaba.com>

On Sat, Nov 23, 2019 at 09:05:32AM +0800, Yang Shi wrote:
> Currently, when truncating a shmem file, if the range covers only part
> of a THP (start or end is in the middle of the THP), the pages are just
> cleared rather than freed unless the range covers the whole THP. Even
> when all the subpages have been truncated (randomly or sequentially),
> the THP may still be kept in the page cache. This might be fine for
> some use cases which prefer preserving THP.
>
> But, when doing balloon inflation in QEMU, QEMU actually does hole punch
> or MADV_DONTNEED in base page size granularity if hugetlbfs is not used.
> So, when using shmem THP as the memory backend, QEMU inflation doesn't
> work as expected since it doesn't free memory. The inflation use case
> really needs the memory to be freed. Anonymous THP is not freed right
> away either, but it is freed eventually once all subpages are unmapped,
> whereas shmem THP would still stay in the page cache.
>
> To protect the use cases which may prefer preserving THP, introduce a
> new fallocate mode: FALLOC_FL_SPLIT_HPAGE, which means splitting THP is
> the preferred behavior when truncating a partial THP. This mode only
> makes sense for tmpfs for the time being.

We need to clarify the interaction with khugepaged. This implementation
doesn't do anything to prevent khugepaged from collapsing the range back
to THP just after the split.
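Just to make the proposed interface concrete, this is roughly how I would
expect a balloon-style user to drive the new mode. It is only a sketch: the
combination with FALLOC_FL_PUNCH_HOLE, the placeholder flag value and the
file path are my assumptions, not something the patch description or the
quoted hunk spells out.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#ifndef FALLOC_FL_SPLIT_HPAGE
#define FALLOC_FL_SPLIT_HPAGE	0x80	/* placeholder value, not from the patch */
#endif

int main(void)
{
	/* Hypothetical tmpfs file used as guest RAM backing. */
	int fd = open("/dev/shm/guest-ram", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Punch a single 4k hole in the middle of a 2M THP-backed extent. */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE |
			  FALLOC_FL_SPLIT_HPAGE,
		      2 * 1024 * 1024 + 4096, 4096) < 0)
		perror("fallocate");

	close(fd);
	return 0;
}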
> @@ -976,8 +1022,31 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
>  			}
>  			unlock_page(page);
>  		}
> +rescan_split:
>  		pagevec_remove_exceptionals(&pvec);
>  		pagevec_release(&pvec);
> +
> +		if (split && PageTransCompound(page)) {
> +			/* The THP may get freed under us */
> +			if (!get_page_unless_zero(compound_head(page)))
> +				goto rescan_out;
> +
> +			lock_page(page);
> +
> +			/*
> +			 * The extra pins from page cache lookup have been
> +			 * released by pagevec_release().
> +			 */
> +			if (!split_huge_page(page)) {
> +				unlock_page(page);
> +				put_page(page);
> +				/* Re-look up page cache from current index */
> +				goto again;
> +			}
> +			unlock_page(page);
> +			put_page(page);
> +		}
> +rescan_out:
>  		index++;
>  	}

Doing get_page_unless_zero() just after you've dropped the pin for the
page looks very suboptimal.

-- 
 Kirill A. Shutemov
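For what it's worth, the ordering I have in mind looks roughly like the
sketch below: take our own reference on the head page while the pagevec
still holds its pins, so get_page_unless_zero() on a possibly-freed page is
not needed afterwards. This is untested and illustrative only; it mirrors
the context of the hunk above but elides the rescan_split/rescan_out labels,
and the mid-block variable placement is for brevity, not CodingStyle.

		/*
		 * Sketch: pin the THP head before the pagevec pins go away,
		 * so the page cannot be freed under us while we split it.
		 */
		struct page *head = NULL;

		if (split && PageTransCompound(page)) {
			head = compound_head(page);
			get_page(head);	/* safe: pagevec still holds a reference */
		}

		pagevec_remove_exceptionals(&pvec);
		pagevec_release(&pvec);

		if (head) {
			lock_page(head);
			if (!split_huge_page(head)) {
				unlock_page(head);
				put_page(head);
				/* Re-look up page cache from current index */
				goto again;
			}
			unlock_page(head);
			put_page(head);
		}
		index++;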