Date: Fri, 4 Sep 2020 09:02:35 +0200
From: Michal Hocko
To: David Hildenbrand
Cc: Pavel Tatashin, Vlastimil Babka, LKML, Andrew Morton, linux-mm
Subject: Re: [PATCH] mm/memory_hotplug: drain per-cpu pages again during memory offline
Message-ID: <20200904070235.GA15277@dhcp22.suse.cz>
References: <20200901124615.137200-1-pasha.tatashin@soleen.com>
 <20200902140851.GJ4617@dhcp22.suse.cz>
 <74f2341a-7834-3e37-0346-7fbc48d74df3@suse.cz>
 <20200902151306.GL4617@dhcp22.suse.cz>
 <20200903063806.GM4617@dhcp22.suse.cz>

On Thu 03-09-20 20:31:04, David Hildenbrand wrote:
> On 03.09.20 20:23, Pavel Tatashin wrote:
> > On Thu, Sep 3, 2020 at 2:20 PM David Hildenbrand wrote:
> >>
> >> On 03.09.20 08:38, Michal Hocko wrote:
[...]
> >>> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> >>> index 242c03121d73..56d4892bceb8 100644
> >>> --- a/mm/page_isolation.c
> >>> +++ b/mm/page_isolation.c
> >>> @@ -170,6 +170,14 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
> >>>  * pageblocks we may have modified and return -EBUSY to caller. This
> >>>  * prevents two threads from simultaneously working on overlapping ranges.
> >>>  *
> >>> + * Please note that there is no strong synchronization with the page allocator
> >>> + * either. Pages might be freed while their page blocks are marked ISOLATED.
> >>> + * In some cases pages might still end up on pcp lists and that would allow
> >>> + * for their allocation even when they are in fact isolated already. Depending on
> >>> + * how strong of a guarantee the caller needs drain_all_pages might be needed
> >>> + * (e.g. __offline_pages will need to call it after check for isolated range for
> >>> + * a next retry).
> >>> + *
> >>
> >> As expressed in reply to v2, I dislike this hack. There is strong
> >> synchronization, just PCP is special. Allocating from MIGRATE_ISOLATE is
> >> just plain ugly.

Completely agreed! I am not happy about that either. But I believe this
hack is the easiest way forward for stable trees and as an immediate
fix. We can build on top of that of course.

> >> Can't we temporarily disable PCP (while some pageblock in the zone is
> >> isolated, which we know e.g., due to the counter), so no new pages get
> >> put into PCP lists after draining, and re-enable after no pageblocks are
> >> isolated again? We keep draining the PCP, so it doesn't seem to be of a
> >> lot of use during that period, no? It's a performance hit already.

This is a good point.

> >> Then, we would only need exactly one drain. And we would only have to
> >> check on the free path whether PCP is temporarily disabled.
> >
> > Hm, we could use static branches to disable it, that would keep the
> > release code just as fast, but I am worried it will make the code even
> > uglier. Let's see what others in this thread think about this idea.

I know that static branches are a very effective way to enable/disable
features, but I have no experience with how they perform for very
short-lived use. Maybe that is just fine for a single place which needs
to be patched. This would especially be a problem if the static branch
is to be enabled from start_isolate_page_range, because that includes
all cma allocator users.

Another alternative would be to enable/disable the static branch only
from users who really care, but this is quite tricky because how do you
tell whether you need it or not? It seems that alloc_contig_range would
be just fine with the weaker semantic because it would "only" lead to a
spurious failure. Memory hotplug on the other hand really needs to have
a point where nobody interferes with the offlined memory, so it could
ask for the stronger semantic.

Yet another option would be to make draining stronger and actually
guarantee there are no in-flight pages to be freed to the pcp list.
One way would be to tweak pcp->high and implement a strong barrier
(IPI?) to sync with all CPUs. Quite expensive, especially when there
are many draining requests (read cma users, because hotplug doesn't
really matter much as it happens seldom).

So no nice&cheap solution I can think of...
--
Michal Hocko
SUSE Labs
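
A minimal sketch of the static-branch idea floated above, for
illustration only; the key name, the helper names and the hook
placement are assumptions, not what was posted or merged:

	/*
	 * Sketch only: a hypothetical static key gating the pcp fast
	 * path while any pageblock is MIGRATE_ISOLATE. Names below are
	 * illustrative.
	 */
	#include <linux/jump_label.h>
	#include <linux/gfp.h>

	/* Nonzero (branch patched in) while any isolation is in progress. */
	static DEFINE_STATIC_KEY_FALSE(pcp_isolation_in_progress);

	/* Would be called when isolation starts, e.g. from start_isolate_page_range(). */
	static void pcp_isolation_begin(void)
	{
		static_branch_inc(&pcp_isolation_in_progress);
		/* One drain after flipping the branch; later frees bypass the pcp. */
		drain_all_pages(NULL);
	}

	/* Would be called when the isolated range is released again. */
	static void pcp_isolation_end(void)
	{
		static_branch_dec(&pcp_isolation_in_progress);
	}

	/*
	 * The page free fast path would then check the (normally
	 * patched-out) branch and free straight to the buddy allocator
	 * instead of batching the page on the per-cpu list, e.g. near
	 * the top of free_unref_page():
	 *
	 *	if (static_branch_unlikely(&pcp_isolation_in_progress))
	 *		goto free_to_buddy;	/* skip the pcp list entirely */
	 */

The counting variants (static_branch_inc/dec) would let overlapping
isolations, say cma and hotplug at the same time, nest naturally;
whether patching the branch on every short-lived cma allocation is an
acceptable cost is exactly the open question raised above.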