From: Marco Elver <elver@google.com>
Date: Fri, 12 Aug 2022 11:11:43 +0200
Subject: Re: [PATCH v2] Introduce sysfs interface to disable kfence for selected slabs.
In-Reply-To: <26acafb0-9528-9b29-0b5d-738890853fca@oracle.com>
References: <20220811085938.2506536-1-imran.f.khan@oracle.com> <6b41bb2c-6305-2bf4-1949-84ba08fdbd72@suse.cz> <26acafb0-9528-9b29-0b5d-738890853fca@oracle.com>
To: Imran Khan
Cc: vbabka@suse.cz, glider@google.com, dvyukov@google.com, cl@linux.com,
    penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com,
    akpm@linux-foundation.org, roman.gushchin@linux.dev, 42.hyeyoo@gmail.com,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, linux-mm@kvack.org
Content-Type: text/plain; charset="UTF-8"
On Thu, 11 Aug 2022 at 17:10, Imran Khan wrote:
>
> Hello Marco,
>
> On 11/8/22 11:21 pm, Marco Elver wrote:
> > On Thu, 11 Aug 2022 at 12:07, wrote:
> > [...]
> >>> new flag SLAB_SKIP_KFENCE, it also can serve a dual purpose, where
> >>> someone might want to explicitly opt out by default and pass it to
> >>> kmem_cache_create() (for whatever reason; not that we'd encourage
> >>> that).
> >>
> >> Right, not being able to do that would be a downside (although it should be
> >> possible even with opt-in to add an opt-out cache flag that would just make
> >> sure the opt-in flag is not set even if eligible by global defaults).
> >
> > True, but I'd avoid all this unnecessary complexity if possible.
> >
> >>> I feel that the real use cases for selectively enabling caches for
> >>> KFENCE are very narrow, and a design that introduces lots of
> >>> complexity elsewhere just to support this feature cannot be justified
> >>> (which is why I suggested the simpler design here back in
> >>> https://lore.kernel.org/lkml/CANpmjNNmD9z7oRqSaP72m90kWL7jYH+cxNAZEGpJP8oLrDV-vw@mail.gmail.com/)
> >>
> >> I don't mind strongly either way, just a suggestion to consider.
> >
> > While switching the semantics of the flag from opt-out to opt-in is
> > just as valid, I'm more comfortable with the opt-out flag: the rest of
> > the logic can stay the same, and we're aware of the fact that changing
> > cache coverage by KFENCE shouldn't be something that needs to be done
> > manually.
> >
> > My main point is that opting out of or in to only a few select caches
> > should be a rarely used feature, and accordingly it should be as
> > simple as possible.
> > Honestly, I still don't quite see the point of it,
> > and my solution would be to just increase the KFENCE pool, increase
> > the sample rate, or decrease the "skip covered" threshold percentage. But in the
> > case described by Imran, perhaps a running machine is having trouble
> > and limiting the caches to be analyzed by KFENCE might be worthwhile
> > if a more aggressive configuration doesn't yield anything (and then
> > there's of course KASAN, but I recognize it's not always possible to
> > switch kernels and run the same workload with it).
> >
> > The use case for the proposed change is definitely when an admin or
> > kernel dev is starting to debug a problem. KFENCE wasn't designed for
> > that (vs. deployment at scale, discovery of bugs). As such I'm having
> > a hard time admitting how useful this feature will really be, but
> > given the current implementation is simple, having it might actually
> > help a few people.
> >
> > Imran, just to make sure my assumptions here are right, have you had
> > success debugging an issue in this way? Can you elaborate on what
> > "certain debugging scenarios" you mean (an admin debugging something, or
> > a kernel dev, production fleet, or test machine)?
>
> I have not used KFENCE in this way because, as of now, we don't have such newer
> kernels in the production fleet, but I can cite a couple of instances where using
> slub_debug for a few selected slabs helped me in locating the issue on a
> production system where KASAN or even full slub_debug were not feasible.
> Apologies in advance if I am elaborating more than you asked for :).

This is very useful to understand the use case.

> In one case a freed struct mutex was being used later on, and by that time the
> same address had been given to a kmalloc-32 object. The issue was appearing more
> frequently if one enforced some cgroup memory limitation, resulting in the fork
> of a task exiting prematurely.
> From the vmcore we could see that the mutex, or more
> specifically task_struct.futex_exit_mutex, was in bad shape, and eventually
> using slub_debug for kmalloc-32 pointed to the issue.
>
> Another case involved a mem_cgroup corruption which was causing a system crash
> but was giving list corruption warnings beforehand. Since the list corruption
> warnings were coming from the cgroup subsystem, the corresponding objects were
> in doubt. Enabling slub_debug for kmalloc-4k helped in locating the actual
> corruption.
>
> Admittedly, both of the above issues were the result of backporting mistakes,
> but nonetheless they happened on production systems where very few debugging
> options were available.
>
> By "certain debugging scenarios" I meant such cases where some initial data
> (from the production fleet) like a vmcore or kernel debug messages can give
> some pointer towards which slab objects could be wrong, and then we would use
> this feature (along with further tuning like increasing the sampling frequency
> and pool size if needed/possible) to pinpoint the actual issue. The idea is
> that limiting KFENCE to a few slabs will increase the probability of catching
> the issue even if we are not able to tweak the pool size.
>
> Please let me know if it sounds reasonable or if I missed something from your
> query.

Thanks for the elaboration on use cases - agreed that in a few scenarios this
feature can help increase the probability of debugging an issue.

Reviewed-by: Marco Elver <elver@google.com>

With minor suggestions:

> +SLAB_ATTR(skip_kfence);
> +
    ^ Unnecessary space between SLAB_ATTR and #endif.
> +#endif
> +

And the patch title should be something like "kfence: add sysfs interface to
disable kfence for selected slabs" (to follow the format "<subsystem>: <title>").