Date: Thu, 9 Mar 2023 17:27:26 -0500
From: Steven Rostedt <rostedt@goodmis.org>
To: Stefan Roesch <shr@devkernel.io>
Cc: kernel-team@fb.com, linux-mm@kvack.org, linux-trace-kernel@vger.kernel.org, akpm@linux-foundation.org
Subject: Re: [PATCH v1] mm: add tracepoints to ksm
Message-ID: <20230309172726.14fca32e@gandalf.local.home>
In-Reply-To: <20230210214645.2720847-1-shr@devkernel.io>
References: <20230210214645.2720847-1-shr@devkernel.io>

On Fri, 10 Feb 2023 13:46:45 -0800
Stefan Roesch wrote:

Sorry for the late reply, I just noticed this (I had the flu when this
was originally sent).

> +/**
> + * ksm_remove_ksm_page - called after a ksm page has been removed
> + *
> + * @pfn: page frame number of ksm page
> + *
> + * Allows to trace the removing of stable ksm pages.
> + */
> +TRACE_EVENT(ksm_remove_ksm_page,
> +
> +	TP_PROTO(unsigned long pfn),
> +
> +	TP_ARGS(pfn),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned long, pfn)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->pfn = pfn;
> +	),
> +
> +	TP_printk("pfn %lu", __entry->pfn)
> +);
> +
> +/**
> + * ksm_remove_rmap_item - called after a rmap_item has been removed from the
> + *                        stable tree
> + *
> + * @pfn: page frame number of ksm page
> + * @rmap_item: address of rmap_item object
> + * @mm: address of the process mm struct
> + *
> + * Allows to trace the removal of pages from the stable tree list.
> + */
> +TRACE_EVENT(ksm_remove_rmap_item,
> +
> +	TP_PROTO(unsigned long pfn, void *rmap_item, void *mm),
> +
> +	TP_ARGS(pfn, rmap_item, mm),
> +
> +	TP_STRUCT__entry(
> +		__field(unsigned long, pfn)
> +		__field(void *, rmap_item)
> +		__field(void *, mm)
> +	),
> +
> +	TP_fast_assign(
> +		__entry->pfn = pfn;
> +		__entry->rmap_item = rmap_item;
> +		__entry->mm = mm;
> +	),
> +
> +	TP_printk("pfn %lu rmap_item %p mm %p",
> +			__entry->pfn, __entry->rmap_item, __entry->mm)
> +);
> +
> +#endif /* _TRACE_KSM_H */
> +
> +/* This part must be outside protection */
> +#include <trace/define_trace.h>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 56808e3bfd19..4356af760735 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -45,6 +45,9 @@
>  #include "internal.h"
>  #include "mm_slot.h"
>  
> +#define CREATE_TRACE_POINTS
> +#include <trace/events/ksm.h>
> +
>  #ifdef CONFIG_NUMA
>  #define NUMA(x)		(x)
>  #define DO_NUMA(x)	do { (x); } while (0)
> @@ -655,10 +658,12 @@ static void remove_node_from_stable_tree(struct ksm_stable_node *stable_node)
>  	BUG_ON(stable_node->rmap_hlist_len < 0);
>  
>  	hlist_for_each_entry(rmap_item, &stable_node->hlist, hlist) {
> -		if (rmap_item->hlist.next)
> +		if (rmap_item->hlist.next) {
>  			ksm_pages_sharing--;
> -		else
> +			trace_ksm_remove_rmap_item(stable_node->kpfn, rmap_item, rmap_item->mm);

Instead of dereferencing the stable_node here, where the work could
possibly happen outside the trace event and in the hot path, could you
pass in the stable_node instead, and then in the TP_fast_assign() do:

	__entry->pfn = stable_node->kpfn;

> +		} else {
>  			ksm_pages_shared--;
> +		}
>  
>  		rmap_item->mm->ksm_merging_pages--;
>  
> @@ -679,6 +684,7 @@ static void remove_node_from_stable_tree(struct ksm_stable_node *stable_node)
>  	BUILD_BUG_ON(STABLE_NODE_DUP_HEAD <= &migrate_nodes);
>  	BUILD_BUG_ON(STABLE_NODE_DUP_HEAD >= &migrate_nodes + 1);
>  
> +	trace_ksm_remove_ksm_page(stable_node->kpfn);

Here too?

-- Steve

>  	if (stable_node->head == &migrate_nodes)
>  		list_del(&stable_node->list);
>  	else
> @@ -1367,6 +1373,8 @@ static int try_to_merge_with_ksm_page(struct ksm_rmap_item *rmap_item,
>  	get_anon_vma(vma->anon_vma);
>  out:
>  	mmap_read_unlock(mm);
> +	trace_ksm_merge_with_ksm_page(kpage, page_to_pfn(kpage ? kpage : page),
> +			rmap_item, mm, err);
>  	return err;
>  }
>  
> @@ -2114,6 +2122,9 @@ static int try_to_merge_with_kernel_zero_page(struct ksm_rmap_item *rmap_item,
>  	if (vma) {
>  		err = try_to_merge_one_page(vma, page,
>  					ZERO_PAGE(rmap_item->address));
> +		trace_ksm_merge_one_page(
> +				page_to_pfn(ZERO_PAGE(rmap_item->address)),
> +				rmap_item, mm, err);
>  		if (!err) {
>  			rmap_item->address |= ZERO_PAGE_FLAG;
>  			ksm_zero_pages_sharing++;
> @@ -2344,6 +2355,8 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>  
>  	mm_slot = ksm_scan.mm_slot;
>  	if (mm_slot == &ksm_mm_head) {
> +		trace_ksm_start_scan(ksm_scan.seqnr, ksm_rmap_items);
> +
>  		/*
>  		 * A number of pages can hang around indefinitely on per-cpu
>  		 * pagevecs, raised page count preventing write_protect_page
> @@ -2510,6 +2523,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>  	if (mm_slot != &ksm_mm_head)
>  		goto next_mm;
>  
> +	trace_ksm_stop_scan(ksm_scan.seqnr, ksm_rmap_items);
>  	ksm_scan.seqnr++;
>  	return NULL;
>  }
> @@ -2661,6 +2675,7 @@ int __ksm_enter(struct mm_struct *mm)
>  	if (needs_wakeup)
>  		wake_up_interruptible(&ksm_thread_wait);
>  
> +	trace_ksm_enter(mm);
>  	return 0;
>  }
>  
> @@ -2702,6 +2717,8 @@ void __ksm_exit(struct mm_struct *mm)
>  		mmap_write_lock(mm);
>  		mmap_write_unlock(mm);
>  	}
> +
> +	trace_ksm_exit(mm);
>  }
>  
>  struct page *ksm_might_need_to_copy(struct page *page,
> 
> base-commit: 234a68e24b120b98875a8b6e17a9dead277be16a
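
To illustrate the suggestion above, here is a rough, untested sketch of
what ksm_remove_rmap_item could look like if the stable_node pointer is
passed in and the kpfn dereference moves into TP_fast_assign(). It is
not the patch as posted: it assumes a forward declaration of
struct ksm_stable_node is visible to the trace header, and that the
struct is fully defined before the CREATE_TRACE_POINTS expansion in
mm/ksm.c (e.g. by moving the include below the struct definitions); the
other fields are kept as in the patch.

	struct ksm_stable_node;

	TRACE_EVENT(ksm_remove_rmap_item,

		TP_PROTO(struct ksm_stable_node *node, void *rmap_item, void *mm),

		TP_ARGS(node, rmap_item, mm),

		TP_STRUCT__entry(
			__field(unsigned long, pfn)
			__field(void *, rmap_item)
			__field(void *, mm)
		),

		TP_fast_assign(
			/* node->kpfn is only read when the tracepoint is enabled */
			__entry->pfn = node->kpfn;
			__entry->rmap_item = rmap_item;
			__entry->mm = mm;
		),

		TP_printk("pfn %lu rmap_item %p mm %p",
				__entry->pfn, __entry->rmap_item, __entry->mm)
	);

The call site in remove_node_from_stable_tree() would then pass the
node itself, e.g. trace_ksm_remove_rmap_item(stable_node, rmap_item,
rmap_item->mm), and trace_ksm_remove_ksm_page() could take the
stable_node the same way, so the stable_node->kpfn load only happens
inside the event.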