From: Uladzislau Rezki
Date: Fri, 18 Oct 2019 11:37:41 +0200
To: Michal Hocko
Cc: "Uladzislau Rezki (Sony)", Andrew Morton, Daniel Wagner,
 Sebastian Andrzej Siewior, Thomas Gleixner, linux-mm@kvack.org, LKML,
 Peter Zijlstra, Hillf Danton, Matthew Wilcox, Oleksiy Avramchenko,
 Steven Rostedt
Subject: Re: [PATCH v3 1/3] mm/vmalloc: remove preempt_disable/enable when do preloading
Message-ID: <20191018093741.GA8744@pc636>
References: <20191016095438.12391-1-urezki@gmail.com> <20191016105928.GS317@dhcp22.suse.cz>
In-Reply-To: <20191016105928.GS317@dhcp22.suse.cz>

Hello, Michal.

Sorry for the late reply. See my comments enclosed below:

> On Wed 16-10-19 11:54:36, Uladzislau Rezki (Sony) wrote:
> > Some background. The preemption was disabled before to guarantee
> > that a preloaded object is available for a CPU, it was stored for.
>
> Probably good to be explicit that this has been achieved by combining
> the disabling the preemption and taking the spin lock while the
> ne_fit_preload_node is checked resp. repopulated, right?
>
Right, agree with your comment!

> > The aim was to not allocate in atomic context when spinlock
> > is taken later, for regular vmap allocations. But that approach
> > conflicts with CONFIG_PREEMPT_RT philosophy. It means that
> > calling spin_lock() with disabled preemption is forbidden
> > in the CONFIG_PREEMPT_RT kernel.
> >
> > Therefore, get rid of preempt_disable() and preempt_enable() when
> > the preload is done for splitting purpose. As a result we do not
> > guarantee now that a CPU is preloaded, instead we minimize the
> > case when it is not, with this change.
>
> by populating the per cpu preload pointer under the vmap_area_lock.
> This implies that at least each caller which has done the preallocation
> will not fallback to an atomic allocation later. It is possible that the
> preallocation would be pointless or that no preallocation is done
> because of the race but your data shows that this is really rare.
>
That makes sense to add. Please find the updated commit message below:

mm/vmalloc: remove preempt_disable/enable when do preloading

Some background. The preemption was disabled before to guarantee
that a preloaded object is available for the CPU it was stored for.
That was achieved by combining disabling the preemption and taking
the spin lock while the ne_fit_preload_node is checked.

The aim was to not allocate in atomic context when the spinlock is
taken later, for regular vmap allocations. But that approach
conflicts with the CONFIG_PREEMPT_RT philosophy: calling spin_lock()
with preemption disabled is forbidden in a CONFIG_PREEMPT_RT kernel.
Therefore, get rid of preempt_disable() and preempt_enable() when
the preload is done for splitting purpose. As a result we no longer
guarantee that a CPU is preloaded; instead we minimize the case when
it is not, by populating the per-cpu preload pointer under the
vmap_area_lock. This implies that at least each caller that has done
the preallocation will not fall back to an atomic allocation later.
It is possible that the preallocation would be pointless, or that no
preallocation is done because of the race, but the data shows that
this is really rare.

For example, I ran a special test case that follows the preload
pattern and path. 20 "unbind" threads run it and each does 1000000
allocations. On average, only 3.5 times out of 1000000 a CPU was not
preloaded. So it can happen, but the number is negligible.

V2 -> V3:
    - update the commit message

V1 -> V2:
    - move the __this_cpu_cmpxchg check to where the spin_lock is taken,
      as proposed by Andrew Morton
    - add more explanation with regard to preloading
    - adjust and move some comments

Do you agree on that?

Thank you!

--
Vlad Rezki
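
For readers who want to see the pattern in code, below is a minimal
sketch of the preloading scheme described in the commit message: the
preload object is allocated in a preemptible context, and the per-CPU
pointer is only populated after vmap_area_lock is taken. This is a
simplified illustration, not the exact mm/vmalloc.c code; the wrapper
function preload_and_lock() is invented for the example, while
ne_fit_preload_node, vmap_area_cachep and vmap_area_lock are the names
referred to in the discussion above.

/*
 * Simplified sketch of the preload pattern discussed above (not the
 * exact mm/vmalloc.c code). The preload object is allocated with
 * GFP_KERNEL in a preemptible context; the per-CPU pointer is then
 * populated under vmap_area_lock instead of under preempt_disable().
 */
static void preload_and_lock(int node)
{
	struct vmap_area *pva = NULL;

	/*
	 * Allocate outside of any atomic section. Preemption is not
	 * disabled here, so GFP_KERNEL is fine and no CONFIG_PREEMPT_RT
	 * rules are violated.
	 */
	if (!this_cpu_read(ne_fit_preload_node))
		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);

	spin_lock(&vmap_area_lock);

	/*
	 * Publish the preloaded object under the lock. If this CPU was
	 * already preloaded in the meantime (e.g. after a migration),
	 * free the extra object.
	 */
	if (pva && __this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva))
		kmem_cache_free(vmap_area_cachep, pva);

	/* ... the regular allocation path then runs with the lock held ... */
}

Since the check and the cmpxchg happen with vmap_area_lock held, any
caller that managed to preallocate will find its object in place and
never needs an atomic fallback allocation; the rare losing side of the
race only costs one kmem_cache_free().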