Date: Mon, 29 Jun 2020 10:03:50 +0200
From: Michal Hocko
To: Joonsoo Kim
Cc: Andrew Morton, Linux Memory Management List, LKML, kernel-team@lge.com, Vlastimil Babka, Christoph Hellwig, Roman Gushchin, Mike Kravetz, Naoya Horiguchi, Joonsoo Kim
Subject: Re: [PATCH v3 5/8] mm/migrate: make a standard migration target allocation function
Message-ID: <20200629080350.GB32461@dhcp22.suse.cz>
References: <1592892828-1934-1-git-send-email-iamjoonsoo.kim@lge.com> <1592892828-1934-6-git-send-email-iamjoonsoo.kim@lge.com> <20200625120550.GF1320@dhcp22.suse.cz> <20200626073342.GU1320@dhcp22.suse.cz>

On Mon 29-06-20 15:41:37, Joonsoo Kim wrote:
> On Fri, Jun 26, 2020 at 4:33 PM, Michal Hocko wrote:
> >
> > On Fri 26-06-20 14:02:49, Joonsoo Kim wrote:
> > > On Thu, Jun 25, 2020 at 9:05 PM, Michal Hocko wrote:
> > > >
> > > > On Tue 23-06-20 15:13:45, Joonsoo Kim wrote:
> > [...]
> > > > > -struct page *new_page_nodemask(struct page *page,
> > > > > -				int preferred_nid, nodemask_t *nodemask)
> > > > > +struct page *alloc_migration_target(struct page *page, unsigned long private)
> > > > >  {
> > > > > -	gfp_t gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL;
> > > > > +	struct migration_target_control *mtc;
> > > > > +	gfp_t gfp_mask;
> > > > >  	unsigned int order = 0;
> > > > >  	struct page *new_page = NULL;
> > > > > +	int zidx;
> > > > > +
> > > > > +	mtc = (struct migration_target_control *)private;
> > > > > +	gfp_mask = mtc->gfp_mask;
> > > > >
> > > > >  	if (PageHuge(page)) {
> > > > >  		return alloc_huge_page_nodemask(
> > > > > -				page_hstate(compound_head(page)),
> > > > > -				preferred_nid, nodemask, 0, false);
> > > > > +				page_hstate(compound_head(page)), mtc->nid,
> > > > > +				mtc->nmask, gfp_mask, false);
> > > > >  	}
> > > > >
> > > > >  	if (PageTransHuge(page)) {
> > > > > +		gfp_mask &= ~__GFP_RECLAIM;
> > > >
> > > > What's up with this gfp_mask modification?
> > >
> > > THP page allocation uses standard gfp masks, GFP_TRANSHUGE_LIGHT and
> > > GFP_TRANSHUGE. The __GFP_RECLAIM flags are a big part of this standard
> > > mask design, so I clear them here so as not to disrupt the THP gfp mask.
> >
> > Why wasn't this needed before? I guess I must be missing
> > something here. This patch should be a mostly mechanical convergence of
> > existing migration callbacks, but this change adds a new behavior AFAICS.
>
> Before this patch, a user cannot specify a gfp_mask and THP allocation
> uses GFP_TRANSHUGE statically.

Unless I am misreading, there are code paths (e.g. new_page_nodemask) which
simply add GFP_TRANSHUGE to GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL.
And this goes all the way back to the thp migration introduction.

> After this patch, a user can specify a gfp_mask and it
> could conflict with GFP_TRANSHUGE.
> This code tries to avoid that conflict.
>
> > It would effectively drop __GFP_RETRY_MAYFAIL and __GFP_KSWAPD_RECLAIM.
>
> __GFP_RETRY_MAYFAIL isn't dropped. __GFP_RECLAIM is
> "___GFP_DIRECT_RECLAIM|___GFP_KSWAPD_RECLAIM",
> so only __GFP_KSWAPD_RECLAIM would be dropped for THP allocation.
> IIUC, THP allocation doesn't use __GFP_KSWAPD_RECLAIM since its
> overhead is too large, and this overhead should be borne by the caller
> rather than a system thread (kswapd) and so on.

Yes, there is a reason why kswapd is excluded from THP allocations in the
page fault path. Maybe we want to extend that behavior to migration as
well. I do not have a strong opinion on that because I haven't seen
excessive kswapd reclaim due to THP migrations; they are likely too rare.
But as I've said in my previous email, make this a separate patch with an
explanation of why we want it.

-- 
Michal Hocko
SUSE Labs