Date: Thu, 18 Feb 2021 07:52:25 -0800
From: Minchan Kim
To: Michal Hocko
Cc: Matthew Wilcox, Andrew Morton, linux-mm, LKML, cgoldswo@codeaurora.org,
    linux-fsdevel@vger.kernel.org, david@redhat.com, vbabka@suse.cz,
    viro@zeniv.linux.org.uk, joaodias@google.com
Subject: Re: [RFC 1/2] mm: disable LRU pagevec during the migration temporarily
References: <20210216170348.1513483-1-minchan@kernel.org>
    <20210217211612.GO2858050@casper.infradead.org>

On Thu, Feb 18, 2021 at 09:17:02AM +0100, Michal Hocko wrote:
> On Wed 17-02-21 13:32:05, Minchan Kim wrote:
> > On Wed, Feb 17, 2021 at 09:16:12PM +0000, Matthew Wilcox wrote:
> > > On Wed, Feb 17, 2021 at 12:46:19PM -0800, Minchan Kim wrote:
> > > > > I suspect you do not want to add atomic_read inside hot paths,
> > > > > right? Is this really something that we have to microoptimize
> > > > > for? atomic_read is a simple READ_ONCE on many archs.
> > > >
> > > > It's also spin_lock_irqsave on some archs. If the new
> > > > synchronization were heavily complicated, an atomic would be the
> > > > better simple start, but I thought this locking scheme was simple
> > > > enough that there was no need to add an atomic operation on the
> > > > read side.
> > >
> > > What arch uses a spinlock for atomic_read()? I just had a quick
> > > grep and didn't see any.
> >
> > Ah, my bad. I was confused with the update side.
> > Okay, let's use an atomic op to make it simple.
>
> Thanks. This should make the code much simpler. Before you send
> another version for review, I have another thing to consider. You are
> wiring this into the migration code, but control over the lru pcp
> caches can be useful in other paths as well. Memory offlining would be
> another user. We already disable the page allocator pcp caches to
> prevent regular draining. We could do the same with the lru pcp
> caches.

I didn't catch your point here. If memory offlining is interested in
disabling the lru pcp caches, it could call migrate_prep and
migrate_finish like the other callsites. Are you suggesting something
like this?

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a969463bdda4..0ec1c13bfe32 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1425,8 +1425,12 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		node_clear(mtc.nid, nmask);
 		if (nodes_empty(nmask))
 			node_set(mtc.nid, nmask);
+
+		migrate_prep();
 		ret = migrate_pages(&source, alloc_migration_target, NULL,
 			(unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
+
+		migrate_finish();
 		if (ret) {
 			list_for_each_entry(page, &source, lru) {
 				pr_warn("migrating pfn %lx failed ret:%d ",
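
FWIW, here is a minimal sketch of the atomic-op scheme agreed on above,
assuming migrate_prep()/migrate_finish() would wrap a disable/enable
pair. The names lru_disable_count, lru_cache_disable(),
lru_cache_enable(), and lru_cache_disabled() are illustrative
placeholders, not the posted patch:

/*
 * Sketch only, not the posted patch. All names here are assumed
 * for illustration of the atomic-counter scheme discussed above.
 */
#include <linux/atomic.h>
#include <linux/swap.h>		/* lru_add_drain_all() */

static atomic_t lru_disable_count = ATOMIC_INIT(0);

/* Read side (hot path): a plain atomic_read, i.e. READ_ONCE on many archs. */
static inline bool lru_cache_disabled(void)
{
	return atomic_read(&lru_disable_count) != 0;
}

/* Update side: a caller such as migrate_prep() would invoke this. */
void lru_cache_disable(void)
{
	atomic_inc(&lru_disable_count);
	/*
	 * Flush pages already sitting in the per-cpu lru pagevecs so
	 * that, while the count is elevated, new pages go straight to
	 * the lru lists instead of being cached per cpu.
	 */
	lru_add_drain_all();
}

/* Update side: a caller such as migrate_finish() would invoke this. */
void lru_cache_enable(void)
{
	atomic_dec(&lru_disable_count);
}

With something like this, the read side stays a single atomic_read,
while the cost of lru_add_drain_all() is only paid when a migration
(or memory offlining) actually starts.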