Date: Thu, 22 Oct 2020 19:54:34 -0700 (PDT)
From: Hugh Dickins
To: Rik van Riel
cc: Hugh Dickins, Yu Xu, Andrew Morton, Mel Gorman, Andrea Arcangeli,
    Matthew Wilcox, Michal Hocko, Vlastimil Babka, "Kirill A. Shutemov",
    linux-mm@kvack.org, kernel-team@fb.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm,thp,shmem: limit shmem THP alloc gfp_mask
In-Reply-To: <20201022124511.72448a5f@imladris.surriel.com>
References: <20201022124511.72448a5f@imladris.surriel.com>

On Thu, 22 Oct 2020, Rik van Riel wrote:

> The allocation flags of anonymous transparent huge pages can be
> controlled through the file /sys/kernel/mm/transparent_hugepage/defrag,
> which can keep the system from getting bogged down in the page reclaim
> and compaction code when many THPs are allocated simultaneously.
>
> However, the gfp_mask for shmem THP allocations was not limited by those
> configuration settings, and some workloads ended up with all CPUs stuck
> on the LRU lock in the page reclaim code, trying to allocate dozens of
> THPs simultaneously.
>
> This patch applies the same configured limitation to shmem hugepage
> allocations, to prevent that from happening.
>
> This way a THP defrag setting of "never" or "defer+madvise" will result
> in quick allocation failures without direct reclaim when no 2MB free
> pages are available.
>
> Signed-off-by: Rik van Riel

NAK in its present untested form: see below.

I'm open to change here, particularly to Yu Xu's point (in other mail)
about direct reclaim - we avoid that here in Google too: though it's
not so much to avoid the direct reclaim, as to avoid the latencies of
direct compaction, which __GFP_DIRECT_RECLAIM allows as a side-effect.
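
To make that concrete, here is a minimal sketch of the sort of mask
being talked about - the helper name is invented for illustration, not
code from this patch or from any real tree - which clears
__GFP_DIRECT_RECLAIM so that a failed huge allocation falls straight
back to small pages instead of stalling in reclaim or compaction:

	/*
	 * Hypothetical helper: build the gfp mask for one huge page
	 * attempt, on top of the flags shmem_alloc_hugepage() already
	 * adds, but with direct reclaim (and therefore direct
	 * compaction) disallowed.
	 */
	static inline gfp_t shmem_huge_gfp_noreclaim(gfp_t gfp)
	{
		gfp_t huge_gfp = gfp | __GFP_COMP | __GFP_NORETRY | __GFP_NOWARN;

		return huge_gfp & ~__GFP_DIRECT_RECLAIM;
	}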
> ---
> v2: move gfp calculation to shmem_getpage_gfp as suggested by Yu Xu
>
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index c603237e006c..0a5b164a26d9 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -614,6 +614,8 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask);
>  extern void pm_restrict_gfp_mask(void);
>  extern void pm_restore_gfp_mask(void);
>  
> +extern gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma);
> +
>  #ifdef CONFIG_PM_SLEEP
>  extern bool pm_suspended_storage(void);
>  #else
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 9474dbc150ed..9b08ce5cc387 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -649,7 +649,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>   *	    available
>   * never: never stall for any thp allocation
>   */
> -static inline gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
> +gfp_t alloc_hugepage_direct_gfpmask(struct vm_area_struct *vma)
>  {
>  	const bool vma_madvised = !!(vma->vm_flags & VM_HUGEPAGE);
>  
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 537c137698f8..9710b9df91e9 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1545,8 +1545,8 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
>  		return NULL;
>  
>  	shmem_pseudo_vma_init(&pvma, info, hindex);
> -	page = alloc_pages_vma(gfp | __GFP_COMP | __GFP_NORETRY | __GFP_NOWARN,
> -			       HPAGE_PMD_ORDER, &pvma, 0, numa_node_id(), true);
> +	page = alloc_pages_vma(gfp, HPAGE_PMD_ORDER, &pvma, 0, numa_node_id(),
> +			       true);

Commendably neat so far.

>  	shmem_pseudo_vma_destroy(&pvma);
>  	if (page)
>  		prep_transhuge_page(page);
> @@ -1802,6 +1802,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>  	struct page *page;
>  	enum sgp_type sgp_huge = sgp;
>  	pgoff_t hindex = index;
> +	gfp_t huge_gfp;
>  	int error;
>  	int once = 0;
>  	int alloced = 0;
> @@ -1887,7 +1888,8 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>  	}
>  
>  alloc_huge:
> -	page = shmem_alloc_and_acct_page(gfp, inode, index, true);
> +	huge_gfp = alloc_hugepage_direct_gfpmask(vma);

Still looks nice: but what about the crash when vma is NULL?

It may work for shmem_fault() (though I'll probably disagree on the
details): but tmpfs is a filesystem, so most if not all of the system
calls which arrive here have no vma to offer.

Michal is right to remember pushback before, because tmpfs is a
filesystem, and "huge=" is a mount option: in using a huge=always
filesystem, the user has already declared a preference for huge pages.
Whereas the original anon THP had to deduce that preference from sys
tunables and vma madvise.

I certainly found it a lot easier to ignore all the shifting sandmaze
of the anon THP tunables, and I think Kirill followed me on that. But
it's likely that they have accumulated some defrag wisdom, which tmpfs
can take on board - but please accept that in using a huge mount, the
preference for huge has already been expressed, so I don't expect anon
THP alloc_hugepage_direct_gfpmask() choices to map one to one.

> +	page = shmem_alloc_and_acct_page(huge_gfp, inode, index, true);
>  	if (IS_ERR(page)) {
>  alloc_nohuge:
>  		page = shmem_alloc_and_acct_page(gfp, inode,
> 

Hugh
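
A sketch of the direction that last paragraph points in - combining
whatever mask the defrag tunables yield with the limits the filesystem
caller passed down, so the mount's huge= preference still stands while
reclaim and compaction behaviour follows the defrag setting. Purely
illustrative: limit_gfp_mask() is a hypothetical name here, not code
from this patch:

	static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
	{
		gfp_t allowflags = __GFP_IO | __GFP_FS | __GFP_RECLAIM;
		gfp_t denyflags = __GFP_NOWARN | __GFP_NORETRY;
		gfp_t result = huge_gfp & ~(allowflags | __GFP_THISNODE);

		/* Inherit NOWARN and NORETRY from the caller's limiting mask. */
		result |= (limit_gfp & denyflags);

		/* Allow IO, FS and reclaim flags only where both masks agree. */
		result |= (huge_gfp & limit_gfp) & allowflags;

		return result;
	}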