linux-mm.kvack.org archive mirror
* [PATCH v3 0/4] module: avoid userspace pressure on unwanted allocations
@ 2023-04-14  5:08 Luis Chamberlain
  2023-04-14  5:08 ` [PATCH v3 1/4] module: fix kmemleak annotations for non init ELF sections Luis Chamberlain
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Luis Chamberlain @ 2023-04-14  5:08 UTC (permalink / raw)
  To: david, patches, linux-modules, linux-mm, linux-kernel, pmladek,
	petr.pavlu, prarit, torvalds, gregkh, rafael
  Cc: christophe.leroy, tglx, peterz, song, rppt, dave, willy, vbabka,
	mhocko, dave.hansen, colin.i.king, jim.cromie, catalin.marinas,
	jbaron, rick.p.edgecombe, mcgrof

This v3 series follows up on the second iteration of these patches [0]. This
and other pending changes are available on the 20230413-module-alloc-opts
branch [1], which is based on modules-next.

Changes on this v3:

  o Catalin Marinas suggested we just use kmemleak_not_leak() for both
    ELF allocations even if it's init stuff.
  o A considerable amount of effort went into trying to see if there's a
    relationship between CPU count and wasted virtual memory allocations.
    The new module debugfs counters helped with creating this evaluation.
    The result of that put me on a path to then add even more debugging
    facilities to rule out and identify the culprits. In the end I now have
    patches which can get this down to 0 bytes wasted. The patch
    in this series which helps reduce the allocations has a graph
    showing the findings of the relationship between wasted virtual
    memory allocations and CPU count, all during boot. It is insanity
    that the graph has to go into gigabytes of wasted virtual memory, all
    at boot.
  o To help folks compare apples to apples I've put the stats debug
    patch *prior* to the one that helps with allocations. This way folks
    can see for themselves what the results look like.
  o Enhanced the statistics a bit more and added an example with 255 CPUs.
  o Went with atomic_long and casting for the debugfs big counters.
  o Rolled in the patch that moved a helper as David suggested.
  o Minor fixes reported by 0-day
  o Added tags for Reviews, etc.

[0] https://lkml.kernel.org/r/20230405022702.753323-1-mcgrof@kernel.org
[1] https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux.git/log/?h=20230413-module-alloc-opts
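
As a reference for the "atomic_long and casting" point above, the debugfs
wiring for the big counters boils down to roughly the following (a condensed
sketch, not the exact hunk; see the helper macros in the stats patch):

  #include <linux/atomic.h>
  #include <linux/debugfs.h>

  static atomic_long_t total_mod_size;

  static void mod_stats_debugfs_sketch(struct dentry *root)
  {
  	/*
  	 * debugfs has no atomic_long_t helper, so expose the counter
  	 * read-only by casting &total_mod_size.counter to unsigned long *.
  	 */
  	debugfs_create_ulong("total_mod_size", 0400, root,
  			     (unsigned long *)&total_mod_size.counter);
  }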

Luis Chamberlain (4):
  module: fix kmemleak annotations for non init ELF sections
  module: extract patient module check into helper
  module: add debug stats to help identify memory pressure
  module: avoid allocation if module is already present and ready

 Documentation/core-api/kernel-api.rst |  22 +-
 kernel/module/Kconfig                 |  37 +++
 kernel/module/Makefile                |   1 +
 kernel/module/decompress.c            |   4 +
 kernel/module/internal.h              |  74 +++++
 kernel/module/main.c                  | 194 ++++++++----
 kernel/module/stats.c                 | 432 ++++++++++++++++++++++++++
 kernel/module/tracking.c              |   7 +-
 8 files changed, 703 insertions(+), 68 deletions(-)
 create mode 100644 kernel/module/stats.c

-- 
2.39.2




* [PATCH v3 1/4] module: fix kmemleak annotations for non init ELF sections
  2023-04-14  5:08 [PATCH v3 0/4] module: avoid userspace pressure on unwanted allocations Luis Chamberlain
@ 2023-04-14  5:08 ` Luis Chamberlain
  2023-04-14 10:18   ` Catalin Marinas
  2023-04-14  5:08 ` [PATCH v3 2/4] module: extract patient module check into helper Luis Chamberlain
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 9+ messages in thread
From: Luis Chamberlain @ 2023-04-14  5:08 UTC (permalink / raw)
  To: david, patches, linux-modules, linux-mm, linux-kernel, pmladek,
	petr.pavlu, prarit, torvalds, gregkh, rafael
  Cc: christophe.leroy, tglx, peterz, song, rppt, dave, willy, vbabka,
	mhocko, dave.hansen, colin.i.king, jim.cromie, catalin.marinas,
	jbaron, rick.p.edgecombe, mcgrof

Commit ac3b43283923 ("module: replace module_layout with module_memory")
reworked the way we handle memory allocations to make it clearer. But in
the process it lost how we handled kmemleak_ignore() or kmemleak_not_leak()
for different ELF sections.

Fix this and clarify the comments a bit more. Contrary to the old way
of using kmemleak_ignore() for init.* ELF sections, we now stick to only
kmemleak_not_leak(), as suggested by Catalin Marinas, so as to avoid
any false positives and simplify the code.

Fixes: ac3b43283923 ("module: replace module_layout with module_memory")
Reported-by: Jim Cromie <jim.cromie@gmail.com>
Acked-by: Song Liu <song@kernel.org>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 kernel/module/main.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/kernel/module/main.c b/kernel/module/main.c
index 5cc21083af04..32554d8a5791 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -2231,13 +2231,18 @@ static int move_module(struct module *mod, struct load_info *info)
 		}
 		mod->mem[type].size = PAGE_ALIGN(mod->mem[type].size);
 		ptr = module_memory_alloc(mod->mem[type].size, type);
-
 		/*
-		 * The pointer to this block is stored in the module structure
-		 * which is inside the block. Just mark it as not being a
-		 * leak.
+		 * The pointers to these blocks of memory are stored on the module
+		 * structure and we keep that around so long as the module is
+		 * around. We only free that memory when we unload the module.
+		 * Just mark them as not being a leak then. The .init* ELF
+		 * sections *do* get freed after boot so we *could* treat them
+		 * slightly differently with kmemleak_ignore() and only grey
+		 * them out as they work as typical memory allocations which
+		 * *do* eventually get freed, but let's just keep things simple
+		 * and avoid *any* false positives.
 		 */
-		kmemleak_ignore(ptr);
+		kmemleak_not_leak(ptr);
 		if (!ptr) {
 			t = type;
 			goto out_enomem;
-- 
2.39.2




* [PATCH v3 2/4] module: extract patient module check into helper
  2023-04-14  5:08 [PATCH v3 0/4] module: avoid userspace pressure on unwanted allocations Luis Chamberlain
  2023-04-14  5:08 ` [PATCH v3 1/4] module: fix kmemleak annotations for non init ELF sections Luis Chamberlain
@ 2023-04-14  5:08 ` Luis Chamberlain
  2023-04-14  5:08 ` [PATCH v3 3/4] module: add debug stats to help identify memory pressure Luis Chamberlain
  2023-04-14  5:08 ` [PATCH v3 4/4] module: avoid allocation if module is already present and ready Luis Chamberlain
  3 siblings, 0 replies; 9+ messages in thread
From: Luis Chamberlain @ 2023-04-14  5:08 UTC (permalink / raw)
  To: david, patches, linux-modules, linux-mm, linux-kernel, pmladek,
	petr.pavlu, prarit, torvalds, gregkh, rafael
  Cc: christophe.leroy, tglx, peterz, song, rppt, dave, willy, vbabka,
	mhocko, dave.hansen, colin.i.king, jim.cromie, catalin.marinas,
	jbaron, rick.p.edgecombe, mcgrof

The patient module check inside add_unformed_module() is large
enough as it is. It is also a bit hard to read, so just
move it to a helper and do the inverse checks first to help
flatten the code and make it easier to read. The new helper then
is module_patient_check_exists().

To make this work we need to move finished_loading() up;
we do that without making any functional changes to that routine.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 kernel/module/main.c | 112 +++++++++++++++++++++++--------------------
 1 file changed, 60 insertions(+), 52 deletions(-)

diff --git a/kernel/module/main.c b/kernel/module/main.c
index 32554d8a5791..75b23257128d 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -2447,27 +2447,6 @@ static int post_relocation(struct module *mod, const struct load_info *info)
 	return module_finalize(info->hdr, info->sechdrs, mod);
 }
 
-/* Is this module of this name done loading?  No locks held. */
-static bool finished_loading(const char *name)
-{
-	struct module *mod;
-	bool ret;
-
-	/*
-	 * The module_mutex should not be a heavily contended lock;
-	 * if we get the occasional sleep here, we'll go an extra iteration
-	 * in the wait_event_interruptible(), which is harmless.
-	 */
-	sched_annotate_sleep();
-	mutex_lock(&module_mutex);
-	mod = find_module_all(name, strlen(name), true);
-	ret = !mod || mod->state == MODULE_STATE_LIVE
-		|| mod->state == MODULE_STATE_GOING;
-	mutex_unlock(&module_mutex);
-
-	return ret;
-}
-
 /* Call module constructors. */
 static void do_mod_ctors(struct module *mod)
 {
@@ -2631,6 +2610,63 @@ static int may_init_module(void)
 	return 0;
 }
 
+/* Is this module of this name done loading?  No locks held. */
+static bool finished_loading(const char *name)
+{
+	struct module *mod;
+	bool ret;
+
+	/*
+	 * The module_mutex should not be a heavily contended lock;
+	 * if we get the occasional sleep here, we'll go an extra iteration
+	 * in the wait_event_interruptible(), which is harmless.
+	 */
+	sched_annotate_sleep();
+	mutex_lock(&module_mutex);
+	mod = find_module_all(name, strlen(name), true);
+	ret = !mod || mod->state == MODULE_STATE_LIVE
+		|| mod->state == MODULE_STATE_GOING;
+	mutex_unlock(&module_mutex);
+
+	return ret;
+}
+
+/* Must be called with module_mutex held */
+static int module_patient_check_exists(const char *name)
+{
+	struct module *old;
+	int err = 0;
+
+	old = find_module_all(name, strlen(name), true);
+	if (old == NULL)
+		return 0;
+
+	if (old->state == MODULE_STATE_COMING ||
+	    old->state == MODULE_STATE_UNFORMED) {
+		/* Wait in case it fails to load. */
+		mutex_unlock(&module_mutex);
+		err = wait_event_interruptible(module_wq,
+				       finished_loading(name));
+		mutex_lock(&module_mutex);
+		if (err)
+			return err;
+
+		/* The module might have gone in the meantime. */
+		old = find_module_all(name, strlen(name), true);
+	}
+
+	/*
+	 * We are here only when the same module was being loaded. Do
+	 * not try to load it again right now. It prevents long delays
+	 * caused by serialized module load failures. It might happen
+	 * when more devices of the same type trigger load of
+	 * a particular module.
+	 */
+	if (old && old->state == MODULE_STATE_LIVE)
+		return -EEXIST;
+	return -EBUSY;
+}
+
 /*
  * We try to place it in the list now to make sure it's unique before
  * we dedicate too many resources.  In particular, temporary percpu
@@ -2639,41 +2675,14 @@ static int may_init_module(void)
 static int add_unformed_module(struct module *mod)
 {
 	int err;
-	struct module *old;
 
 	mod->state = MODULE_STATE_UNFORMED;
 
 	mutex_lock(&module_mutex);
-	old = find_module_all(mod->name, strlen(mod->name), true);
-	if (old != NULL) {
-		if (old->state == MODULE_STATE_COMING
-		    || old->state == MODULE_STATE_UNFORMED) {
-			/* Wait in case it fails to load. */
-			mutex_unlock(&module_mutex);
-			err = wait_event_interruptible(module_wq,
-					       finished_loading(mod->name));
-			if (err)
-				goto out_unlocked;
-
-			/* The module might have gone in the meantime. */
-			mutex_lock(&module_mutex);
-			old = find_module_all(mod->name, strlen(mod->name),
-					      true);
-		}
-
-		/*
-		 * We are here only when the same module was being loaded. Do
-		 * not try to load it again right now. It prevents long delays
-		 * caused by serialized module load failures. It might happen
-		 * when more devices of the same type trigger load of
-		 * a particular module.
-		 */
-		if (old && old->state == MODULE_STATE_LIVE)
-			err = -EEXIST;
-		else
-			err = -EBUSY;
+	err = module_patient_check_exists(mod->name);
+	if (err)
 		goto out;
-	}
+
 	mod_update_bounds(mod);
 	list_add_rcu(&mod->list, &modules);
 	mod_tree_insert(mod);
@@ -2681,7 +2690,6 @@ static int add_unformed_module(struct module *mod)
 
 out:
 	mutex_unlock(&module_mutex);
-out_unlocked:
 	return err;
 }
 
-- 
2.39.2




* [PATCH v3 3/4] module: add debug stats to help identify memory pressure
  2023-04-14  5:08 [PATCH v3 0/4] module: avoid userspace pressure on unwanted allocations Luis Chamberlain
  2023-04-14  5:08 ` [PATCH v3 1/4] module: fix kmemleak annotations for non init ELF sections Luis Chamberlain
  2023-04-14  5:08 ` [PATCH v3 2/4] module: extract patient module check into helper Luis Chamberlain
@ 2023-04-14  5:08 ` Luis Chamberlain
  2023-04-17 11:18   ` Petr Pavlu
  2023-04-18 18:37   ` [PATCH v4] " Luis Chamberlain
  2023-04-14  5:08 ` [PATCH v3 4/4] module: avoid allocation if module is already present and ready Luis Chamberlain
  3 siblings, 2 replies; 9+ messages in thread
From: Luis Chamberlain @ 2023-04-14  5:08 UTC (permalink / raw)
  To: david, patches, linux-modules, linux-mm, linux-kernel, pmladek,
	petr.pavlu, prarit, torvalds, gregkh, rafael
  Cc: christophe.leroy, tglx, peterz, song, rppt, dave, willy, vbabka,
	mhocko, dave.hansen, colin.i.king, jim.cromie, catalin.marinas,
	jbaron, rick.p.edgecombe, mcgrof

Loading modules with finit_module() can end up using vmalloc(), vmap()
and vmalloc() again, for a total of up to 3 separate allocations in the
worst case for a single module. We always kernel_read*() the module,
that's a vmalloc(). Then vmap() is used for module decompression, and
if decompression is used the read buffer is freed as we use the now
decompressed module buffer to stuff data into our own copy of the module.
The last allocation is specific to each architecture, but it is generally
a series of vmalloc() calls or a variation of vmalloc() to handle ELF
sections with special permissions.
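
As a purely illustrative worked example of the worst case: if a 4 MiB
compressed module decompresses to 12 MiB and then fails at the very last
duplicate check, the failed request wasted roughly 4 MiB (the vmalloc()
backed kernel_read*()) + 12 MiB (the vmap() of the decompressed copy) +
12 MiB (the final module allocation) = 28 MiB of virtual memory for
nothing. The numbers here are made up; the point is that a single failed
request costs two to three times the module size.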

Evaluation with the new stress-ng module support [1] with just 100 ops
proves that you can easily end up using GiBs of data even with all the
care we have in the kernel and userspace today in trying to not load modules
which are already loaded. 100 ops seems to resemble the sort of pressure a
system with about 400 CPUs can create on module loading. Although issues
relating to duplicate module requests due to each CPU incurring a new
module request are silly and some of these are being fixed, we currently lack
proper tooling to help diagnose easily what happened, when it happened
and who is likely to blame -- userspace or kernel module autoloading.

Provide an initial set of stats which use debugfs to let us easily scrape
post-boot information about failed loads. This sort of information can
be used on production workloads to try to optimize *avoiding* redundant
memory pressure using finit_module().

There are a few examples that can be provided:

A 255 vCPU system without the next patch in this series applied:

Startup finished in 19.143s (kernel) + 7.078s (userspace) = 26.221s
graphical.target reached after 6.988s in userspace

And 13.58 GiB of virtual memory space lost due to failed module loading:

root@big ~ # cat /sys/kernel/debug/modules/stats
         Mods ever loaded       67
     Mods failed on kread       0
Mods failed on decompress       0
  Mods failed on becoming       0
      Mods failed on load       1411
        Total module size       11464704
      Total mod text size       4194304
       Failed kread bytes       0
  Failed decompress bytes       0
    Failed becoming bytes       0
        Failed kmod bytes       14588526272
 Virtual mem wasted bytes       14588526272
         Average mod size       171115
    Average mod text size       62602
  Average fail load bytes       10339140
Duplicate failed modules:
              module-name        How-many-times                    Reason
                kvm_intel                   249                      Load
                      kvm                   249                      Load
                irqbypass                     8                      Load
         crct10dif_pclmul                   128                      Load
      ghash_clmulni_intel                    27                      Load
             sha512_ssse3                    50                      Load
           sha512_generic                   200                      Load
              aesni_intel                   249                      Load
              crypto_simd                    41                      Load
                   cryptd                   131                      Load
                    evdev                     2                      Load
                serio_raw                     1                      Load
               virtio_pci                     3                      Load
                     nvme                     3                      Load
                nvme_core                     3                      Load
    virtio_pci_legacy_dev                     3                      Load
    virtio_pci_modern_dev                     3                      Load
                   t10_pi                     3                      Load
                   virtio                     3                      Load
             crc32_pclmul                     6                      Load
           crc64_rocksoft                     3                      Load
             crc32c_intel                    40                      Load
              virtio_ring                     3                      Load
                    crc64                     3                      Load

The following output, from a simple 8 vCPU 8 GiB KVM guest with the
next patch in this series applied, shows that 226.53 MiB are wasted in
virtual memory allocations due to duplicate module requests during boot.
It also shows an average module memory size of 167.10 KiB and an
average module .text + .init.text size of 61.13 KiB. The end shows all
modules which were detected as duplicate requests and whether or not
they failed early after just the first kernel_read*() call or late after
we've already allocated the private space for the module in
layout_and_allocate(). A system with module decompression would reveal
more wasted virtual memory space.

We should put effort now into identifying the source of these duplicate
module requests and trimming these down as much as possible. Larger systems
will obviously show much more wasted virtual memory allocations.

root@kmod ~ # cat /sys/kernel/debug/modules/stats
         Mods ever loaded       67
     Mods failed on kread       0
Mods failed on decompress       0
  Mods failed on becoming       83
      Mods failed on load       16
        Total module size       11464704
      Total mod text size       4194304
       Failed kread bytes       0
  Failed decompress bytes       0
    Failed becoming bytes       228959096
        Failed kmod bytes       8578080
 Virtual mem wasted bytes       237537176
         Average mod size       171115
    Average mod text size       62602
  Avg fail becoming bytes       2758544
  Average fail load bytes       536130
Duplicate failed modules:
              module-name        How-many-times                    Reason
                kvm_intel                     7                  Becoming
                      kvm                     7                  Becoming
                irqbypass                     6           Becoming & Load
         crct10dif_pclmul                     7           Becoming & Load
      ghash_clmulni_intel                     7           Becoming & Load
             sha512_ssse3                     6           Becoming & Load
           sha512_generic                     7           Becoming & Load
              aesni_intel                     7                  Becoming
              crypto_simd                     7           Becoming & Load
                   cryptd                     3           Becoming & Load
                    evdev                     1                  Becoming
                serio_raw                     1                  Becoming
                     nvme                     3                  Becoming
                nvme_core                     3                  Becoming
                   t10_pi                     3                  Becoming
               virtio_pci                     3                  Becoming
             crc32_pclmul                     6           Becoming & Load
           crc64_rocksoft                     3                  Becoming
             crc32c_intel                     3                  Becoming
    virtio_pci_modern_dev                     2                  Becoming
    virtio_pci_legacy_dev                     1                  Becoming
                    crc64                     2                  Becoming
                   virtio                     2                  Becoming
              virtio_ring                     2                  Becoming

[0] https://github.com/ColinIanKing/stress-ng.git
[1] echo 0 > /proc/sys/vm/oom_dump_tasks
    ./stress-ng --module 100 --module-name xfs

Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 Documentation/core-api/kernel-api.rst |  22 +-
 kernel/module/Kconfig                 |  37 +++
 kernel/module/Makefile                |   1 +
 kernel/module/decompress.c            |   4 +
 kernel/module/internal.h              |  74 +++++
 kernel/module/main.c                  |  65 +++-
 kernel/module/stats.c                 | 432 ++++++++++++++++++++++++++
 kernel/module/tracking.c              |   7 +-
 8 files changed, 630 insertions(+), 12 deletions(-)
 create mode 100644 kernel/module/stats.c

diff --git a/Documentation/core-api/kernel-api.rst b/Documentation/core-api/kernel-api.rst
index e27728596008..9b3f3e5f5a95 100644
--- a/Documentation/core-api/kernel-api.rst
+++ b/Documentation/core-api/kernel-api.rst
@@ -220,12 +220,30 @@ relay interface
 Module Support
 ==============
 
-Module Loading
---------------
+Kernel module auto-loading
+--------------------------
 
 .. kernel-doc:: kernel/module/kmod.c
    :export:
 
+Module debugging
+----------------
+
+.. kernel-doc:: kernel/module/stats.c
+   :doc: module debugging statistics overview
+
+dup_failed_modules - tracks duplicate failed modules
+****************************************************
+
+.. kernel-doc:: kernel/module/stats.c
+   :doc: dup_failed_modules - tracks duplicate failed modules
+
+module statistics debugfs counters
+**********************************
+
+.. kernel-doc:: kernel/module/stats.c
+   :doc: module statistics debugfs counters
+
 Inter Module support
 --------------------
 
diff --git a/kernel/module/Kconfig b/kernel/module/Kconfig
index 424b3bc58f3f..ca277b945a67 100644
--- a/kernel/module/Kconfig
+++ b/kernel/module/Kconfig
@@ -22,6 +22,43 @@ menuconfig MODULES
 
 if MODULES
 
+config MODULE_DEBUG
+	bool "Module debugging"
+	depends on DEBUG_FS
+	help
+	  Allows you to enable / disable features which can help you debug
+	  modules. You don't need these options in production systems. You can
+	  and probably should enable this prior to making your kernel
+	  production ready though.
+
+if MODULE_DEBUG
+
+config MODULE_STATS
+	bool "Module statistics"
+	depends on DEBUG_FS
+	help
+	  This option allows you to maintain a record of module statistics.
+	  For example the total size of all modules, the average module size,
+	  the total text size, and for failed modules the amount of memory
+	  wasted on each of them. For failed modules we keep track of which
+	  modules failed because an existing module with the same name was
+	  taking too long to load or was already loaded.
+
+	  You should enable this if you are debugging production loads
+	  and want to see if userspace or the kernel is doing stupid things
+	  with loading modules when it shouldn't or if you want to help
+	  optimize userspace / kernel space module autoloading schemes.
+	  You might want to do this because failed modules tend to use
+	  up a significant amount of memory, and so you'd be doing everyone
+	  a favor in avoiding these failures proactively.
+
+	  This functionality is also useful for those experimenting with
+	  module .text ELF section optimization.
+
+	  If unsure, say N.
+
+endif # MODULE_DEBUG
+
 config MODULE_FORCE_LOAD
 	bool "Forced module loading"
 	default n
diff --git a/kernel/module/Makefile b/kernel/module/Makefile
index 5b1d26b53b8d..52340bce497e 100644
--- a/kernel/module/Makefile
+++ b/kernel/module/Makefile
@@ -21,3 +21,4 @@ obj-$(CONFIG_SYSFS) += sysfs.o
 obj-$(CONFIG_KGDB_KDB) += kdb.o
 obj-$(CONFIG_MODVERSIONS) += version.o
 obj-$(CONFIG_MODULE_UNLOAD_TAINT_TRACKING) += tracking.o
+obj-$(CONFIG_MODULE_STATS) += stats.o
diff --git a/kernel/module/decompress.c b/kernel/module/decompress.c
index 7ddc87bee274..e97232b125eb 100644
--- a/kernel/module/decompress.c
+++ b/kernel/module/decompress.c
@@ -297,6 +297,10 @@ int module_decompress(struct load_info *info, const void *buf, size_t size)
 	ssize_t data_size;
 	int error;
 
+#if defined(CONFIG_MODULE_STATS)
+	info->compressed_len = size;
+#endif
+
 	/*
 	 * Start with number of pages twice as big as needed for
 	 * compressed data.
diff --git a/kernel/module/internal.h b/kernel/module/internal.h
index 6ae29bb8836f..9d97a59a9127 100644
--- a/kernel/module/internal.h
+++ b/kernel/module/internal.h
@@ -59,6 +59,9 @@ struct load_info {
 	unsigned long mod_kallsyms_init_off;
 #endif
 #ifdef CONFIG_MODULE_DECOMPRESS
+#ifdef CONFIG_MODULE_STATS
+	unsigned long compressed_len;
+#endif
 	struct page **pages;
 	unsigned int max_pages;
 	unsigned int used_pages;
@@ -143,6 +146,77 @@ static inline bool set_livepatch_module(struct module *mod)
 #endif
 }
 
+/**
+ * enum fail_dup_mod_reason - state at which a duplicate module was detected
+ *
+ * @FAIL_DUP_MOD_BECOMING: the module is read properly, passes all checks but
+ * 	we've determined that another module with the same name is already loaded
+ * 	or being processed on our &modules list. This happens on early_mod_check()
+ * 	right before layout_and_allocate(). The kernel would have already
+ * 	vmalloc()'d space for the entire module through finit_module(). If
+ * 	decompression was used two vmap() spaces were used. These failures can
+ * 	happen when userspace has not seen the module present in the kernel and
+ * 	tries to load the module multiple times at the same time.
+ * @FAIL_DUP_MOD_LOAD: the module has been read properly, passes all validation
+ *	checks and the kernel determines that the module was unique and because
+ *	of this allocated yet another private kernel copy of the module space in
+ *	layout_and_allocate() but after this determined in add_unformed_module()
+ *	that another module with the same name is already loaded or being processed.
+ *	These failures should be mitigated as much as possible and are indicative
+ *	of really fast races in loading modules. Without module decompression
+ *	they waste twice as much vmap space. With module decompression three
+ *	times the module's size vmap space is wasted.
+ */
+enum fail_dup_mod_reason {
+	FAIL_DUP_MOD_BECOMING = 0,
+	FAIL_DUP_MOD_LOAD,
+};
+
+#ifdef CONFIG_MODULE_STATS
+
+#define mod_stat_add_long(count, var) atomic_long_add(count, var)
+#define mod_stat_inc(name) atomic_inc(name)
+
+extern atomic_long_t total_mod_size;
+extern atomic_long_t total_text_size;
+extern atomic_long_t invalid_kread_bytes;
+extern atomic_long_t invalid_decompress_bytes;
+
+extern atomic_t modcount;
+extern atomic_t failed_kreads;
+extern atomic_t failed_decompress;
+struct mod_fail_load {
+	struct list_head list;
+	char name[MODULE_NAME_LEN];
+	atomic_long_t count;
+	unsigned long dup_fail_mask;
+};
+
+int try_add_failed_module(const char *name, size_t len, enum fail_dup_mod_reason reason);
+void mod_stat_bump_invalid(struct load_info *info, int flags);
+void mod_stat_bump_becoming(struct load_info *info, int flags);
+
+#else
+
+#define mod_stat_add_long(name, var)
+#define mod_stat_inc(name)
+
+static inline int try_add_failed_module(const char *name, size_t len,
+					enum fail_dup_mod_reason reason)
+{
+	return 0;
+}
+
+static inline void mod_stat_bump_invalid(struct load_info *info, int flags)
+{
+}
+
+static inline void mod_stat_bump_becoming(struct load_info *info, int flags)
+{
+}
+
+#endif /* CONFIG_MODULE_STATS */
+
 #ifdef CONFIG_MODULE_UNLOAD_TAINT_TRACKING
 struct mod_unload_taint {
 	struct list_head list;
diff --git a/kernel/module/main.c b/kernel/module/main.c
index 75b23257128d..5642d77657a0 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -56,6 +56,7 @@
 #include <linux/dynamic_debug.h>
 #include <linux/audit.h>
 #include <linux/cfi.h>
+#include <linux/debugfs.h>
 #include <uapi/linux/module.h>
 #include "internal.h"
 
@@ -87,6 +88,8 @@ struct symsearch {
 	enum mod_license license;
 };
 
+struct dentry *mod_debugfs_root;
+
 /*
  * Bounds of module memory, for speeding up __module_address.
  * Protected by module_mutex.
@@ -2500,6 +2503,18 @@ static noinline int do_init_module(struct module *mod)
 {
 	int ret = 0;
 	struct mod_initfree *freeinit;
+#if defined(CONFIG_MODULE_STATS)
+	unsigned int text_size = 0, total_size = 0;
+
+	for_each_mod_mem_type(type) {
+		const struct module_memory *mod_mem = &mod->mem[type];
+		if (mod_mem->size) {
+			total_size += mod_mem->size;
+			if (type == MOD_TEXT || type == MOD_INIT_TEXT)
+				text_size += mod->mem[type].size;
+		}
+	}
+#endif
 
 	freeinit = kmalloc(sizeof(*freeinit), GFP_KERNEL);
 	if (!freeinit) {
@@ -2561,6 +2576,7 @@ static noinline int do_init_module(struct module *mod)
 		mod->mem[type].base = NULL;
 		mod->mem[type].size = 0;
 	}
+
 #ifdef CONFIG_DEBUG_INFO_BTF_MODULES
 	/* .BTF is not SHF_ALLOC and will get removed, so sanitize pointer */
 	mod->btf_data = NULL;
@@ -2584,6 +2600,11 @@ static noinline int do_init_module(struct module *mod)
 	mutex_unlock(&module_mutex);
 	wake_up_all(&module_wq);
 
+	mod_stat_add_long(text_size, &total_text_size);
+	mod_stat_add_long(total_size, &total_mod_size);
+
+	mod_stat_inc(&modcount);
+
 	return 0;
 
 fail_free_freeinit:
@@ -2599,6 +2620,7 @@ static noinline int do_init_module(struct module *mod)
 	ftrace_release_mod(mod);
 	free_module(mod);
 	wake_up_all(&module_wq);
+
 	return ret;
 }
 
@@ -2632,7 +2654,8 @@ static bool finished_loading(const char *name)
 }
 
 /* Must be called with module_mutex held */
-static int module_patient_check_exists(const char *name)
+static int module_patient_check_exists(const char *name,
+				       enum fail_dup_mod_reason reason)
 {
 	struct module *old;
 	int err = 0;
@@ -2655,6 +2678,9 @@ static int module_patient_check_exists(const char *name)
 		old = find_module_all(name, strlen(name), true);
 	}
 
+	if (try_add_failed_module(name, strlen(name), reason))
+		pr_warn("Could not add fail-tracking for module: %s\n", name);
+
 	/*
 	 * We are here only when the same module was being loaded. Do
 	 * not try to load it again right now. It prevents long delays
@@ -2679,7 +2705,7 @@ static int add_unformed_module(struct module *mod)
 	mod->state = MODULE_STATE_UNFORMED;
 
 	mutex_lock(&module_mutex);
-	err = module_patient_check_exists(mod->name);
+	err = module_patient_check_exists(mod->name, FAIL_DUP_MOD_LOAD);
 	if (err)
 		goto out;
 
@@ -2800,6 +2826,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
 		       int flags)
 {
 	struct module *mod;
+	bool module_allocated = false;
 	long err = 0;
 	char *after_dashes;
 
@@ -2839,6 +2866,8 @@ static int load_module(struct load_info *info, const char __user *uargs,
 		goto free_copy;
 	}
 
+	module_allocated = true;
+
 	audit_log_kern_module(mod->name);
 
 	/* Reserve our place in the list. */
@@ -2983,6 +3012,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
 	synchronize_rcu();
 	mutex_unlock(&module_mutex);
  free_module:
+	mod_stat_bump_invalid(info, flags);
 	/* Free lock-classes; relies on the preceding sync_rcu() */
 	for_class_mod_mem_type(type, core_data) {
 		lockdep_free_key_range(mod->mem[type].base,
@@ -2991,6 +3021,13 @@ static int load_module(struct load_info *info, const char __user *uargs,
 
 	module_deallocate(mod, info);
  free_copy:
+	/*
+	 * The info->len is always set. We distinguish between
+	 * failures once the proper module was allocated and
+	 * before that.
+	 */
+	if (!module_allocated)
+		mod_stat_bump_becoming(info, flags);
 	free_copy(info, flags);
 	return err;
 }
@@ -3009,8 +3046,11 @@ SYSCALL_DEFINE3(init_module, void __user *, umod,
 	       umod, len, uargs);
 
 	err = copy_module_from_user(umod, len, &info);
-	if (err)
+	if (err) {
+		mod_stat_inc(&failed_kreads);
+		mod_stat_add_long(len, &invalid_kread_bytes);
 		return err;
+	}
 
 	return load_module(&info, uargs, 0);
 }
@@ -3035,14 +3075,20 @@ SYSCALL_DEFINE3(finit_module, int, fd, const char __user *, uargs, int, flags)
 
 	len = kernel_read_file_from_fd(fd, 0, &buf, INT_MAX, NULL,
 				       READING_MODULE);
-	if (len < 0)
+	if (len < 0) {
+		mod_stat_inc(&failed_kreads);
+		mod_stat_add_long(len, &invalid_kread_bytes);
 		return len;
+	}
 
 	if (flags & MODULE_INIT_COMPRESSED_FILE) {
 		err = module_decompress(&info, buf, len);
 		vfree(buf); /* compressed data is no longer needed */
-		if (err)
+		if (err) {
+			mod_stat_inc(&failed_decompress);
+			mod_stat_add_long(len, &invalid_decompress_bytes);
 			return err;
+		}
 	} else {
 		info.hdr = buf;
 		info.len = len;
@@ -3216,3 +3262,12 @@ void print_modules(void)
 			last_unloaded_module.taints);
 	pr_cont("\n");
 }
+
+#ifdef CONFIG_MODULE_DEBUG
+static int module_debugfs_init(void)
+{
+	mod_debugfs_root = debugfs_create_dir("modules", NULL);
+	return 0;
+}
+module_init(module_debugfs_init);
+#endif
diff --git a/kernel/module/stats.c b/kernel/module/stats.c
new file mode 100644
index 000000000000..d4b5b2b9e6ad
--- /dev/null
+++ b/kernel/module/stats.c
@@ -0,0 +1,432 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Debugging module statistics.
+ *
+ * Copyright (C) 2023 Luis Chamberlain <mcgrof@kernel.org>
+ */
+
+#include <linux/module.h>
+#include <linux/string.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/debugfs.h>
+#include <linux/rculist.h>
+#include <linux/math.h>
+
+#include "internal.h"
+
+/**
+ * DOC: module debugging statistics overview
+ *
+ * Enabling CONFIG_MODULE_STATS enables module debugging statistics which
+ * are useful to monitor and root cause memory pressure issues with module
+ * loading. These statistics are useful to allow us to improve production
+ * workloads.
+ *
+ * The module debugging statistics currently supported help keep track of
+ * module loading failures so that improvements can be made either to kernel
+ * module auto-loading (request_module()) or to interactions with userspace.
+ * Statistics are provided to keep track of all possible failures in the
+ * finit_module() path and the memory wasted in the process. Each of the
+ * failure counters is associated with a type of module loading failure which
+ * is known to incur a certain amount of memory allocation loss. In the worst
+ * case loading a module will fail after a 3 step memory allocation process:
+ *
+ *   a) memory allocated with kernel_read_file_from_fd()
+ *   b) module decompression processes the file read from
+ *      kernel_read_file_from_fd(), and vmap() is used to map
+ *      the decompressed module to a new local buffer which represents
+ *      a copy of the decompressed module passed from userspace. The buffer
+ *      from kernel_read_file_from_fd() is freed right away.
+ *   c) layout_and_allocate() allocates space for the final resting
+ *      place where we would keep the module if it were to be processed
+ *      successfully.
+ *
+ * If a failure occurs after these three different allocations only one
+ * counter will be incremented with the summation of the lost bytes incurred
+ * during this failure. Likewise, if a module load failed only after step b)
+ * a separate counter is used and incremented for the bytes lost during both
+ * of those allocations.
+ *
+ * Virtual memory space can be limited, for example on x86 virtual memory size
+ * defaults to 128 MiB. We should strive to limit and avoid wasting virtual
+ * memory allocations when possible. These module debugging statistics help
+ * to evaluate how much memory is being wasted on bootup due to module loading
+ * failures.
+ *
+ * All counters are designed to be incremental. Atomic counters are used so as
+ * to remain simple and avoid delays and deadlocks.
+ */
+
+extern struct dentry *mod_debugfs_root;
+
+/**
+ * DOC: dup_failed_modules - tracks duplicate failed modules
+ *
+ * Linked list of modules which failed to be loaded because an already existing
+ * module with the same name was already being processed or already loaded.
+ * The finit_module() system call incurs heavy virtual memory allocations. In
+ * the worst case an finit_module() system call can end up allocating virtual
+ * memory 3 times:
+ *
+ *   1) kernel_read_file_from_fd() call uses vmalloc()
+ *   2) optional module decompression uses vmap()
+ *   3) layout_and_allocate() can use vzalloc() or an arch specific variation of
+ *      vmalloc to deal with ELF sections requiring special permissions
+ *
+ * In practice on a typical boot today most finit_module() calls fail due to
+ * the module with the same name already being loaded or about to be processed.
+ * All virtual memory allocated to these failed modules will be lost with
+ * no functional use.
+ *
+ * To help with this the dup_failed_modules allows us to track modules which
+ * failed to load because a module with the same name was already loaded or
+ * already being processed.  There are only two points at which we can fail such
+ * calls, we list them below along with the number of virtual memory allocation
+ * calls:
+ *
+ *   a) FAIL_DUP_MOD_BECOMING: at the end of early_mod_check() before
+ *	layout_and_allocate(). This check does not yet happen at this point.
+ *	- with module decompression: 2 virtual memory allocation calls
+ *	- without module decompression: 1 virtual memory allocation call
+ *   b) FAIL_DUP_MOD_LOAD: after layout_and_allocate() on add_unformed_module()
+ *   	- with module decompression 3 virtual memory allocation calls
+ *   	- without module decompression 2 virtual memory allocation calls
+ *
+ * We should strive to get this list to be as small as possible. If this list
+ * is not empty it is a reflection of work or optimizations that are possible
+ * either in-kernel or in userspace.
+ */
+static LIST_HEAD(dup_failed_modules);
+
+/**
+ * DOC: module statistics debugfs counters
+ *
+ * The total amount of wasted virtual memory allocation space during module
+ * loading can be computed by adding the total from the summation:
+ *
+ *   * @invalid_kread_bytes +
+ *     @invalid_decompress_bytes +
+ *     @invalid_becoming_bytes +
+ *     @invalid_mod_bytes
+ *
+ * The following debugfs counters are available to inspect module loading
+ * failures:
+ *
+ *   * total_mod_size: total bytes ever used by all modules we've dealt with on
+ *     this system
+ *   * total_text_size: total bytes of the .text and .init.text ELF section
+ *     sizes we've dealt with on this system
+ *   * invalid_kread_bytes: bytes wasted in failures which happen due to
+ *     memory allocations with the initial kernel_read_file_from_fd().
+ *     kernel_read_file_from_fd() uses vmalloc() and so these are wasted
+ *     vmalloc() memory allocations. These should typically not happen unless
+ *     your system is under memory pressure.
+ *   * invalid_decompress_bytes: number of bytes wasted due to
+ *     memory allocations in the module decompression path that use vmap().
+ *     These typically should not happen unless your system is under memory
+ *     pressure.
+ *   * invalid_becoming_bytes: total number of bytes wasted due to
+ *     allocations used to read the kernel module userspace wants us to read
+ *     before we promote it to be processed to be added to our @modules linked
+ *     list. These failures can happen anywhere in between a successful
+ *     kernel_read_file_from_fd() call and right before we allocate our
+ *     private memory for the module, which would be kept if
+ *     the module is successfully loaded. The most common reason for this failure
+ *     is when userspace is racing to load a module which it does not yet see
+ *     loaded. The first module to succeed in add_unformed_module() will add a
+ *     module to our &modules list and subsequent loads of modules with the
+ *     same name will error out at the end of early_mod_check(). A check
+ *     for module_patient_check_exists() at the end of early_mod_check() could be
+ *     added to prevent duplicate allocations on layout_and_allocate() for
+ *     modules already being processed. These duplicate failed modules are
+ *     non-fatal, however they typically are indicative of userspace not yet
+ *     seeing the module as loaded and unnecessarily trying to load a
+ *     module before the kernel even has a chance to begin to process prior
+ *     requests. Although duplicate failures can be non-fatal, we should try to
+ *     reduce vmalloc() pressure proactively, so ideally after boot this will
+ *     be as close to 0 as possible.  If module decompression was used we also
+ *     add to this counter the cost of the initial kernel_read_file_from_fd()
+ *     of the compressed module. If module decompression was not used the
+ *     value represents the total wasted allocations in kernel_read_file_from_fd()
+ *     calls for these types of failures. These failures can occur because of:
+ *
+ *    * module_sig_check() - module signature checks
+ *    * elf_validity_cache_copy() - some ELF validation issue
+ *    * early_mod_check():
+ *
+ *      * blacklisting
+ *      * failed to rewrite section headers
+ *      * version magic
+ *      * live patch requirements didn't check out
+ *      * the module was detected as being already present
+ *
+ *   * invalid_mod_bytes: these are the total number of bytes lost due to
+ *     failures after we did all the sanity checks of the module which userspace
+ *     passed to us and after our first check that the module is unique.  A
+ *     module can still fail to load if we detect the module is loaded after we
+ *     allocate space for it with layout_and_allocate(); we do this check right
+ *     before processing the module as live and running its initialization routines.
+ *     Note that if you have a failure of this type it also means the respective
+ *     kernel_read_file_from_fd() memory space was also wasted, and so we
+ *     increment this counter with twice the size of the module. Additionally
+ *     if you used module decompression the size of the compressed module is
+ *     also added to this counter.
+ *
+ *  * modcount: how many modules we've loaded in our kernel's lifetime
+ *  * failed_kreads: how many modules failed due to failed kernel_read_file_from_fd()
+ *  * failed_decompress: how many failed module decompression attempts we've had.
+ *    These really should not happen unless your compression / decompression
+ *    might be broken.
+ *  * failed_becoming: how many modules failed after we kernel_read_file_from_fd()
+ *    it and before we allocate memory for it with layout_and_allocate(). This
+ *    counter is never incremented if you manage to validate the module and
+ *    call layout_and_allocate() for it.
+ *  * failed_load_modules: how many modules failed once we've allocated our
+ *    private space for our module using layout_and_allocate(). These failures
+ *    should hopefully mostly be dealt with already. Races in theory could
+ *    still exist here, but it would just mean the kernel had started processing
+ *    two threads concurrently up to early_mod_check() and then one just one
+ *    thread won. These failures are good signs the kernel or userspace is
+ *    doing something seriously stupid or that could be improved. We should
+ *    strive to fix these, but it is perhaps not easy to fix them.
+ *    A recent example is the module requests incurred for CPU frequency
+ *    modules, where a separate module request was issued for each CPU on a system.
+ */
+
+atomic_long_t total_mod_size;
+atomic_long_t total_text_size;
+atomic_long_t invalid_kread_bytes;
+atomic_long_t invalid_decompress_bytes;
+static atomic_long_t invalid_becoming_bytes;
+static atomic_long_t invalid_mod_bytes;
+atomic_t modcount;
+atomic_t failed_kreads;
+atomic_t failed_decompress;
+static atomic_t failed_becoming;
+static atomic_t failed_load_modules;
+
+static const char *mod_fail_to_str(struct mod_fail_load *mod_fail)
+{
+	if (test_bit(FAIL_DUP_MOD_BECOMING, &mod_fail->dup_fail_mask) &&
+	    test_bit(FAIL_DUP_MOD_LOAD, &mod_fail->dup_fail_mask))
+		return "Becoming & Load";
+	if (test_bit(FAIL_DUP_MOD_BECOMING, &mod_fail->dup_fail_mask))
+		return "Becoming";
+	if (test_bit(FAIL_DUP_MOD_LOAD, &mod_fail->dup_fail_mask))
+		return "Load";
+	return "Bug-on-stats";
+}
+
+void mod_stat_bump_invalid(struct load_info *info, int flags)
+{
+	atomic_long_add(info->len * 2, &invalid_mod_bytes);
+	atomic_inc(&failed_load_modules);
+#if defined(CONFIG_MODULE_DECOMPRESS)
+	if (flags & MODULE_INIT_COMPRESSED_FILE)
+		atomic_long_add(info->compressed_len, &invalid_mod_bytes);
+#endif
+}
+
+void mod_stat_bump_becoming(struct load_info *info, int flags)
+{
+	atomic_inc(&failed_becoming);
+	atomic_long_add(info->len, &invalid_becoming_bytes);
+#if defined(CONFIG_MODULE_DECOMPRESS)
+	if (flags & MODULE_INIT_COMPRESSED_FILE)
+		atomic_long_add(info->compressed_len, &invalid_becoming_bytes);
+#endif
+}
+
+int try_add_failed_module(const char *name, size_t len, enum fail_dup_mod_reason reason)
+{
+	struct mod_fail_load *mod_fail;
+
+	list_for_each_entry_rcu(mod_fail, &dup_failed_modules, list,
+				lockdep_is_held(&module_mutex)) {
+		if (strlen(mod_fail->name) == len && !memcmp(mod_fail->name, name, len)) {
+			atomic_long_inc(&mod_fail->count);
+			__set_bit(reason, &mod_fail->dup_fail_mask);
+			goto out;
+		}
+	}
+
+	mod_fail = kzalloc(sizeof(*mod_fail), GFP_KERNEL);
+	if (!mod_fail)
+		return -ENOMEM;
+	memcpy(mod_fail->name, name, len);
+	__set_bit(reason, &mod_fail->dup_fail_mask);
+	atomic_long_inc(&mod_fail->count);
+	list_add_rcu(&mod_fail->list, &dup_failed_modules);
+out:
+	return 0;
+}
+
+/*
+ * At 64 bytes per module and assuming a 1024 bytes preamble we can fit the
+ * 112 module prints within 8k.
+ *
+ * 1024 + (64*112) = 8k
+ */
+#define MAX_PREAMBLE 1024
+#define MAX_FAILED_MOD_PRINT 112
+#define MAX_BYTES_PER_MOD 64
+static ssize_t read_file_mod_stats(struct file *file, char __user *user_buf,
+				   size_t count, loff_t *ppos)
+{
+	struct mod_fail_load *mod_fail;
+	unsigned int len = 0, size, count_failed = 0;
+	char *buf;
+	u32 live_mod_count, fkreads, fdecompress, fbecoming, floads;
+	u64 total_size, text_size, ikread_bytes, ibecoming_bytes, idecompress_bytes, imod_bytes,
+	    total_virtual_lost;
+
+	live_mod_count = atomic_read(&modcount);
+	fkreads = atomic_read(&failed_kreads);
+	fdecompress = atomic_read(&failed_decompress);
+	fbecoming = atomic_read(&failed_becoming);
+	floads = atomic_read(&failed_load_modules);
+
+	total_size = atomic64_read(&total_mod_size);
+	text_size = atomic64_read(&total_text_size);
+	ikread_bytes = atomic64_read(&invalid_kread_bytes);
+	idecompress_bytes = atomic64_read(&invalid_decompress_bytes);
+	ibecoming_bytes = atomic64_read(&invalid_becoming_bytes);
+	imod_bytes = atomic64_read(&invalid_mod_bytes);
+
+	total_virtual_lost = ikread_bytes + idecompress_bytes + ibecoming_bytes + imod_bytes;
+
+	size = MAX_PREAMBLE + min((unsigned int)(floads + fbecoming) * MAX_BYTES_PER_MOD,
+			  (unsigned int) MAX_FAILED_MOD_PRINT * MAX_BYTES_PER_MOD);
+	buf = kzalloc(size, GFP_KERNEL);
+	if (buf == NULL)
+		return -ENOMEM;
+
+	/* The beginning of our debug preamble */
+	len = scnprintf(buf + 0, size - len, "%25s\t%u\n", "Mods ever loaded", live_mod_count);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%u\n", "Mods failed on kread", fkreads);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%u\n", "Mods failed on decompress",
+			 fdecompress);
+	len += scnprintf(buf + len, size - len, "%25s\t%u\n", "Mods failed on becoming", fbecoming);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%u\n", "Mods failed on load", floads);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Total module size", total_size);
+	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Total mod text size", text_size);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Failed kread bytes", ikread_bytes);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Failed decompress bytes",
+			 idecompress_bytes);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Failed becoming bytes", ibecoming_bytes);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Failed kmod bytes", imod_bytes);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Virtual mem wasted bytes", total_virtual_lost);
+
+	if (live_mod_count && total_size) {
+		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Average mod size",
+				 DIV_ROUND_UP(total_size, live_mod_count));
+	}
+
+	if (live_mod_count && text_size) {
+		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Average mod text size",
+				 DIV_ROUND_UP(text_size, live_mod_count));
+	}
+
+	/*
+	 * We use WARN_ON_ONCE() for the counters to ensure we always have parity
+	 * for keeping tabs on a type of failure with one type of byte counter.
+	 * The counters for imod_bytes does not increase for fkreads failures
+	 * for example, and so on.
+	 */
+
+	WARN_ON_ONCE(ikread_bytes && !fkreads);
+	if (fkreads && ikread_bytes) {
+		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Avg fail kread bytes",
+				 DIV_ROUND_UP(ikread_bytes, fkreads));
+	}
+
+	WARN_ON_ONCE(ibecoming_bytes && !fbecoming);
+	if (fbecoming && ibecoming_bytes) {
+		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Avg fail becoming bytes",
+				 DIV_ROUND_UP(ibecoming_bytes, fbecoming));
+	}
+
+	WARN_ON_ONCE(idecompress_bytes && !fdecompress);
+	if (fdecompress && idecompress_bytes) {
+		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Avg fail decomp bytes",
+				 DIV_ROUND_UP(idecompress_bytes, fdecompress));
+	}
+
+	WARN_ON_ONCE(imod_bytes && !floads);
+	if (floads && imod_bytes) {
+		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Average fail load bytes",
+				 DIV_ROUND_UP(imod_bytes, floads));
+	}
+
+	/* End of our debug preamble header. */
+
+	/* Catch when we've gone beyond our expected preamble */
+	WARN_ON_ONCE(len >= MAX_PREAMBLE);
+
+	if (list_empty(&dup_failed_modules))
+		goto out;
+
+	len += scnprintf(buf + len, size - len, "Duplicate failed modules:\n");
+	len += scnprintf(buf + len, size - len, "%25s\t%15s\t%25s\n",
+			 "module-name", "How-many-times", "Reason");
+	mutex_lock(&module_mutex);
+
+
+	list_for_each_entry_rcu(mod_fail, &dup_failed_modules, list) {
+		if (WARN_ON_ONCE(++count_failed >= MAX_FAILED_MOD_PRINT))
+			goto out_unlock;
+		len += scnprintf(buf + len, size - len, "%25s\t%15llu\t%25s\n", mod_fail->name,
+				 atomic64_read(&mod_fail->count), mod_fail_to_str(mod_fail));
+	}
+out_unlock:
+	mutex_unlock(&module_mutex);
+out:
+	kfree(buf);
+        return simple_read_from_buffer(user_buf, count, ppos, buf, len);
+}
+#undef MAX_PREAMBLE
+#undef MAX_FAILED_MOD_PRINT
+#undef MAX_BYTES_PER_MOD
+
+static const struct file_operations fops_mod_stats = {
+	.read = read_file_mod_stats,
+	.open = simple_open,
+	.owner = THIS_MODULE,
+	.llseek = default_llseek,
+};
+
+#define mod_debug_add_ulong(name) debugfs_create_ulong(#name, 0400, mod_debugfs_root, (unsigned long *) &name.counter)
+#define mod_debug_add_atomic(name) debugfs_create_atomic_t(#name, 0400, mod_debugfs_root, &name)
+static int __init module_stats_init(void)
+{
+	mod_debug_add_ulong(total_mod_size);
+	mod_debug_add_ulong(total_text_size);
+	mod_debug_add_ulong(invalid_kread_bytes);
+	mod_debug_add_ulong(invalid_decompress_bytes);
+	mod_debug_add_ulong(invalid_becoming_bytes);
+	mod_debug_add_ulong(invalid_mod_bytes);
+
+	mod_debug_add_atomic(modcount);
+	mod_debug_add_atomic(failed_kreads);
+	mod_debug_add_atomic(failed_decompress);
+	mod_debug_add_atomic(failed_becoming);
+	mod_debug_add_atomic(failed_load_modules);
+
+	debugfs_create_file("stats", 0400, mod_debugfs_root, mod_debugfs_root, &fops_mod_stats);
+
+	return 0;
+}
+#undef mod_debug_add_ulong
+#undef mod_debug_add_atomic
+module_init(module_stats_init);
diff --git a/kernel/module/tracking.c b/kernel/module/tracking.c
index 26d812e07615..16742d1c630c 100644
--- a/kernel/module/tracking.c
+++ b/kernel/module/tracking.c
@@ -15,6 +15,7 @@
 #include "internal.h"
 
 static LIST_HEAD(unloaded_tainted_modules);
+extern struct dentry *mod_debugfs_root;
 
 int try_add_tainted_module(struct module *mod)
 {
@@ -120,12 +121,8 @@ static const struct file_operations unloaded_tainted_modules_fops = {
 
 static int __init unloaded_tainted_modules_init(void)
 {
-	struct dentry *dir;
-
-	dir = debugfs_create_dir("modules", NULL);
-	debugfs_create_file("unloaded_tainted", 0444, dir, NULL,
+	debugfs_create_file("unloaded_tainted", 0444, mod_debugfs_root, NULL,
 			    &unloaded_tainted_modules_fops);
-
 	return 0;
 }
 module_init(unloaded_tainted_modules_init);
-- 
2.39.2




* [PATCH v3 4/4] module: avoid allocation if module is already present and ready
  2023-04-14  5:08 [PATCH v3 0/4] module: avoid userspace pressure on unwanted allocations Luis Chamberlain
                   ` (2 preceding siblings ...)
  2023-04-14  5:08 ` [PATCH v3 3/4] module: add debug stats to help identify memory pressure Luis Chamberlain
@ 2023-04-14  5:08 ` Luis Chamberlain
  3 siblings, 0 replies; 9+ messages in thread
From: Luis Chamberlain @ 2023-04-14  5:08 UTC (permalink / raw)
  To: david, patches, linux-modules, linux-mm, linux-kernel, pmladek,
	petr.pavlu, prarit, torvalds, gregkh, rafael
  Cc: christophe.leroy, tglx, peterz, song, rppt, dave, willy, vbabka,
	mhocko, dave.hansen, colin.i.king, jim.cromie, catalin.marinas,
	jbaron, rick.p.edgecombe, mcgrof

The finit_module() system call can create unnecessary virtual memory
pressure for duplicate modules. This is because load_module() can in
the worst case allocate more than twice the size of a module in virtual
memory. This patch saves at least a full module's worth of wasted vmalloc
space by trying to avoid duplicates as soon as we can validate
the module name in the read module structure.

This can only be an issue if a system is getting hammered with userspace
loading modules. There are typically two ways to load modules on systems:
one is kernel module auto-loading (*request_module*() calls in-kernel)
and the other is things like udev. The auto-loading is in-kernel, but that
pings back to userspace to just call modprobe. We already have a way to
restrict the amount of concurrent kernel auto-loads in a given time, however
that still allows multiple requests for the same module to go through
and force two threads in userspace racing to call modprobe for the same
exact module. Even though libkmod, which both modprobe and udev use, does
check if a module is already loaded prior to calling finit_module(), races
are still possible and this is clearly evident today when you have multiple
CPUs.
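
To illustrate the kind of race described above, here is a hypothetical
userspace reproducer (not part of this series; the module path is just an
example), where two instances run concurrently against the same .ko:

  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  int main(void)
  {
  	/* Example path only; any uncompressed .ko will do. */
  	const char *path = "/lib/modules/6.3.0/kernel/fs/xfs/xfs.ko";
  	int fd = open(path, O_RDONLY | O_CLOEXEC);

  	if (fd < 0)
  		return 1;
  	/*
  	 * There is no glibc wrapper for finit_module(). Both racing callers
  	 * cause a full vmalloc()'d read of the file in the kernel; the loser
  	 * only learns about the duplicate (EEXIST or EBUSY) after that
  	 * memory has already been spent.
  	 */
  	if (syscall(SYS_finit_module, fd, "", 0) != 0)
  		printf("finit_module failed: errno %d\n", errno);
  	close(fd);
  	return 0;
  }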

To avoid memory pressure for such stupid cases, put a stopgap in place for them.
The *earliest* we can detect duplicates from the modules side of things
is once we have blessed the module name, sadly after the first vmalloc
allocation. We can check for the module being present *before* a secondary
vmalloc() allocation.
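
In short, the stopgap is to call the module_patient_check_exists() helper
introduced earlier in this series at the end of early_mod_check(), before
layout_and_allocate(); roughly (a condensed sketch, see the diff below for
the exact hunk):

  	/* At the end of early_mod_check(), before layout_and_allocate(): */
  	mutex_lock(&module_mutex);
  	err = module_patient_check_exists(info->name, FAIL_DUP_MOD_BECOMING);
  	mutex_unlock(&module_mutex);

  	return err;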

There is a linear relationship between wasted virtual memory bytes and
the CPU count. The reason is that udev ends up racing to call
tons of the same modules for each of the CPUs.

We can see the different linear relationships between wasted virtual
memory and CPU count during boot, before and after this patch, in the
following graph:

         +----------------------------------------------------------------------------+
    14GB |-+          +            +            +           +           *+          +-|
         |                                                          ****              |
         |                                                       ***                  |
         |                                                     **                     |
    12GB |-+                                                 **                     +-|
         |                                                 **                         |
         |                                               **                           |
         |                                             **                             |
         |                                           **                               |
    10GB |-+                                       **                               +-|
         |                                       **                                   |
         |                                     **                                     |
         |                                   **                                       |
     8GB |-+                               **                                       +-|
waste    |                               **                             ###           |
         |                             **                           ####              |
         |                           **                      #######                  |
     6GB |-+                     ****                    ####                       +-|
         |                      *                    ####                             |
         |                     *                 ####                                 |
         |                *****              ####                                     |
     4GB |-+            **               ####                                       +-|
         |            **             ####                                             |
         |          **           ####                                                 |
         |        **         ####                                                     |
     2GB |-+    **      #####                                                       +-|
         |     *    ####                                                              |
         |    * ####                                                   Before ******* |
         |  **##      +            +            +           +           After ####### |
         +----------------------------------------------------------------------------+
         0            50          100          150         200          250          300
                                          CPUs count

On the y-axis we can see gigabytes of wasted virtual memory during boot
due to duplicate module requests which just end up failing. Inferring
the slope, this ends up being about ~463 MiB per CPU lost prior
to this patch. After this patch we only lose about ~230 MiB per CPU, for
a total savings of about ~233 MiB per CPU. This is all *just on bootup*!

On an 8 vCPU 8 GiB RAM system using kdevops and testing against selftests
kmod.sh -t 0008 I see a saving on the *highest* side of memory
consumption of up to ~84 MiB with the Linux kernel selftests kmod
test 0008. With the new stress-ng module test I see a 145 MiB difference
in max memory consumption with 100 ops. The stress-ng module ops tests can be
pretty pathological -- they are not realistic, however they were used to
finally successfully reproduce issues which were only reported to happen on
systems with over 400 CPUs [0] by just using 100 ops on an 8 vCPU 8 GiB RAM
system. Running out of virtual memory space is no surprise given the
above graph, since at least on x86_64 we're capped at 128 MiB; eventually
we'd hit a series of errors and one can use the above graph to
guesstimate when. This of course will vary depending on the features
you have enabled. So for instance, enabling KASAN seems to make this
much worse.

The results with kmod and stress-ng can be observed and visualized below.
The time it takes to run the test is also not affected.

The kmod tests 0008:

The gnuplot y-range is set from 400000 KiB (390 MiB) to 580000 KiB (566 MiB)
given the tests peak around that range.

cat kmod.plot
set term dumb
set output fileout
set yrange [400000:580000]
plot filein with linespoints title "Memory usage (KiB)"

Before:
root@kmod ~ # /data/linux-next/tools/testing/selftests/kmod/kmod.sh -t 0008
root@kmod ~ # free -k -s 1 -c 40 | grep Mem | awk '{print $3}' > log-0008-before.txt ^C
root@kmod ~ # sort -n -r log-0008-before.txt | head -1
528732

So ~516.33 MiB

After:

root@kmod ~ # /data/linux-next/tools/testing/selftests/kmod/kmod.sh -t 0008
root@kmod ~ # free -k -s 1 -c 40 | grep Mem | awk '{print $3}' > log-0008-after.txt ^C

root@kmod ~ # sort -n -r log-0008-after.txt | head -1
442516

So ~432.14 MiB

That's about ~84 MiB in savings in the worst case. The graphs:

root@kmod ~ # gnuplot -e "filein='log-0008-before.txt'; fileout='graph-0008-before.txt'" kmod.plot
root@kmod ~ # gnuplot -e "filein='log-0008-after.txt';  fileout='graph-0008-after.txt'"  kmod.plot

root@kmod ~ # cat graph-0008-before.txt

  580000 +-----------------------------------------------------------------+
         |       +        +       +       +       +        +       +       |
  560000 |-+                                    Memory usage (KiB) ***A***-|
         |                                                                 |
  540000 |-+                                                             +-|
         |                                                                 |
         |        *A     *AA*AA*A*AA          *A*AA    A*A*A *AA*A*AA*A  A |
  520000 |-+A*A*AA  *AA*A           *A*AA*A*AA     *A*A     A          *A+-|
         |*A                                                               |
  500000 |-+                                                             +-|
         |                                                                 |
  480000 |-+                                                             +-|
         |                                                                 |
  460000 |-+                                                             +-|
         |                                                                 |
         |                                                                 |
  440000 |-+                                                             +-|
         |                                                                 |
  420000 |-+                                                             +-|
         |       +        +       +       +       +        +       +       |
  400000 +-----------------------------------------------------------------+
         0       5        10      15      20      25       30      35      40

root@kmod ~ # cat graph-0008-after.txt

  580000 +-----------------------------------------------------------------+
         |       +        +       +       +       +        +       +       |
  560000 |-+                                    Memory usage (KiB) ***A***-|
         |                                                                 |
  540000 |-+                                                             +-|
         |                                                                 |
         |                                                                 |
  520000 |-+                                                             +-|
         |                                                                 |
  500000 |-+                                                             +-|
         |                                                                 |
  480000 |-+                                                             +-|
         |                                                                 |
  460000 |-+                                                             +-|
         |                                                                 |
         |          *A              *A*A                                   |
  440000 |-+A*A*AA*A  A       A*A*AA    A*A*AA*A*AA*A*AA*A*AA*AA*A*AA*A*AA-|
         |*A           *A*AA*A                                             |
  420000 |-+                                                             +-|
         |       +        +       +       +       +        +       +       |
  400000 +-----------------------------------------------------------------+
         0       5        10      15      20      25       30      35      40

The stress-ng module tests:

This is how the test was run to try to reproduce the vmap issues
reported by David:

  echo 0 > /proc/sys/vm/oom_dump_tasks
  ./stress-ng --module 100 --module-name xfs

Prior to this commit:
root@kmod ~ # free -k -s 1 -c 40 | grep Mem | awk '{print $3}' > baseline-stress-ng.txt
root@kmod ~ # sort -n -r baseline-stress-ng.txt | head -1
5046456

After this commit:
root@kmod ~ # free -k -s 1 -c 40 | grep Mem | awk '{print $3}' > after-stress-ng.txt
root@kmod ~ # sort -n -r after-stress-ng.txt | head -1
4896972

5046456 - 4896972
149484
149484/1024
145.98046875000000000000

So with stress-ng this commit reveals a saving of about 145 MiB in memory
using 100 ops, the same load which reproduced the reported vmap issue.

cat kmod-simple-stress-ng.plot
set term dumb
set output fileout
set yrange [4700000:5070000]
plot filein with linespoints title "Memory usage (KiB)"

root@kmod ~ # gnuplot -e "filein='baseline-stress-ng.txt'; fileout='graph-stress-ng-before.txt'"  kmod-simple-stress-ng.plot
root@kmod ~ # gnuplot -e "filein='after-stress-ng.txt'; fileout='graph-stress-ng-after.txt'"  kmod-simple-stress-ng.plot

root@kmod ~ # cat graph-stress-ng-before.txt

           +---------------------------------------------------------------+
  5.05e+06 |-+     + A     +       +       +       +       +       +     +-|
           |         *                          Memory usage (KiB) ***A*** |
           |         *                             A                       |
     5e+06 |-+      **                            **                     +-|
           |        **                            * *    A                 |
  4.95e+06 |-+      * *                          A  *   A*               +-|
           |        * *      A       A           *  *  *  *             A  |
           |       *  *     * *     * *        *A   *  *  *      A      *  |
   4.9e+06 |-+     *  *     * A*A   * A*AA*A  A      *A    **A   **A*A  *+-|
           |       A  A*A  A    *  A       *  *      A     A *  A    * **  |
           |      *      **      **         * *              *  *    * * * |
  4.85e+06 |-+   A       A       A          **               *  *     ** *-|
           |     *                           *               * *      ** * |
           |     *                           A               * *      *  * |
   4.8e+06 |-+   *                                           * *      A  A-|
           |     *                                           * *           |
  4.75e+06 |-+  *                                            * *         +-|
           |    *                                            **            |
           |    *  +       +       +       +       +       + **    +       |
   4.7e+06 +---------------------------------------------------------------+
           0       5       10      15      20      25      30      35      40

root@kmod ~ # cat graph-stress-ng-after.txt

           +---------------------------------------------------------------+
  5.05e+06 |-+     +       +       +       +       +       +       +     +-|
           |                                    Memory usage (KiB) ***A*** |
           |                                                               |
     5e+06 |-+                                                           +-|
           |                                                               |
  4.95e+06 |-+                                                           +-|
           |                                                               |
           |                                                               |
   4.9e+06 |-+                                      *AA                  +-|
           |  A*AA*A*A  A  A*AA*AA*A*AA*A  A  A  A*A   *AA*A*A  A  A*AA*AA |
           |  *      * **  *            *  *  ** *            ***  *       |
  4.85e+06 |-+*       ***  *            * * * ***             A *  *     +-|
           |  *       A *  *             ** * * A               *  *       |
           |  *         *  *             *  **                  *  *       |
   4.8e+06 |-+*         *  *             A   *                  *  *     +-|
           | *          * *                  A                  * *        |
  4.75e+06 |-*          * *                                     * *      +-|
           | *          * *                                     * *        |
           | *     +    * *+       +       +       +       +    * *+       |
   4.7e+06 +---------------------------------------------------------------+
           0       5       10      15      20      25      30      35      40

[0] https://lkml.kernel.org/r/20221013180518.217405-1-david@redhat.com

Reported-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---
 kernel/module/main.c  |  6 +++++-
 kernel/module/stats.c | 14 +++++++-------
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/kernel/module/main.c b/kernel/module/main.c
index 5642d77657a0..1ed373145278 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -2815,7 +2815,11 @@ static int early_mod_check(struct load_info *info, int flags)
 	if (err)
 		return err;
 
-	return 0;
+	mutex_lock(&module_mutex);
+	err = module_patient_check_exists(info->mod->name, FAIL_DUP_MOD_BECOMING);
+	mutex_unlock(&module_mutex);
+
+	return err;
 }
 
 /*
diff --git a/kernel/module/stats.c b/kernel/module/stats.c
index d4b5b2b9e6ad..d9b9bccf4256 100644
--- a/kernel/module/stats.c
+++ b/kernel/module/stats.c
@@ -87,7 +87,7 @@ extern struct dentry *mod_debugfs_root;
  * calls:
  *
  *   a) FAIL_DUP_MOD_BECOMING: at the end of early_mod_check() before
- *	layout_and_allocate(). This does not yet happen.
+ *	layout_and_allocate().
  *	- with module decompression: 2 virtual memory allocation calls
  *	- without module decompression: 1 virtual memory allocation calls
  *   b) FAIL_DUP_MOD_LOAD: after layout_and_allocate() on add_unformed_module()
@@ -130,15 +130,15 @@ static LIST_HEAD(dup_failed_modules);
  *   * invalid_becoming_bytes: total number of bytes wasted due to
  *     allocations used to read the kernel module userspace wants us to read
  *     before we promote it to be processed to be added to our @modules linked
- *     list. These failures could in theory happen in if we had a check in between
- *     between a successful kernel_read_file_from_fd() call and right before
- *     we allocate the our private memory for the module which would be kept if
- *     the module is successfully loaded. The most common reason for this failure
+ *     list. These failures can happen in between a successful
+ *     kernel_read_file_from_fd() call and right before we allocate our
+ *     private memory for the module which would be kept if the module is
+ *     successfully loaded. The most common reason for this failure
  *     is when userspace is racing to load a module which it does not yet see
  *     loaded. The first module to succeed in add_unformed_module() will add a
  *     module to our &modules list and subsequent loads of modules with the
- *     same name will error out at the end of early_mod_check(). A check
- *     for module_patient_check_exists() at the end of early_mod_check() could be
+ *     same name will error out at the end of early_mod_check(). The check
+ *     for module_patient_check_exists() at the end of early_mod_check() was
  *     added to prevent duplicate allocations on layout_and_allocate() for
  *     modules already being processed. These duplicate failed modules are
  *     non-fatal, however they typically are indicative of userspace not seeing
-- 
2.39.2



^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v3 1/4] module: fix kmemleak annotations for non init ELF sections
  2023-04-14  5:08 ` [PATCH v3 1/4] module: fix kmemleak annotations for non init ELF sections Luis Chamberlain
@ 2023-04-14 10:18   ` Catalin Marinas
  0 siblings, 0 replies; 9+ messages in thread
From: Catalin Marinas @ 2023-04-14 10:18 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: david, patches, linux-modules, linux-mm, linux-kernel, pmladek,
	petr.pavlu, prarit, torvalds, gregkh, rafael, christophe.leroy,
	tglx, peterz, song, rppt, dave, willy, vbabka, mhocko,
	dave.hansen, colin.i.king, jim.cromie, jbaron, rick.p.edgecombe

On Thu, Apr 13, 2023 at 10:08:33PM -0700, Luis Chamberlain wrote:
> Commit ac3b43283923 ("module: replace module_layout with module_memory")
> reworked the way to handle memory allocations to make it clearer. But how
> we handled kmemleak_ignore() or kmemleak_not_leak() for the different ELF
> sections got lost in translation.
> 
> Fix this and clarify the comments a bit more. Contrary to the old way
> of using kmemleak_ignore() for init.* ELF sections, we now stick only to
> kmemleak_not_leak() as per the suggestion by Catalin Marinas, so as to avoid
> any false positives and simplify the code.
> 
> Fixes: ac3b43283923 ("module: replace module_layout with module_memory")
> Reported-by: Jim Cromie <jim.cromie@gmail.com>
> Acked-by: Song Liu <song@kernel.org>
> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v3 3/4] module: add debug stats to help identify memory pressure
  2023-04-14  5:08 ` [PATCH v3 3/4] module: add debug stats to help identify memory pressure Luis Chamberlain
@ 2023-04-17 11:18   ` Petr Pavlu
  2023-04-18 18:30     ` Luis Chamberlain
  2023-04-18 18:37   ` [PATCH v4] " Luis Chamberlain
  1 sibling, 1 reply; 9+ messages in thread
From: Petr Pavlu @ 2023-04-17 11:18 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: christophe.leroy, tglx, peterz, song, rppt, dave, willy, vbabka,
	mhocko, dave.hansen, colin.i.king, jim.cromie, catalin.marinas,
	jbaron, rick.p.edgecombe, david, patches, linux-modules,
	linux-mm, linux-kernel, pmladek, prarit, torvalds, gregkh,
	rafael

On 4/14/23 07:08, Luis Chamberlain wrote:
> Loading modules with finit_module() can end up using vmalloc(), vmap()
> and vmalloc() again, for a total of up to 3 separate allocations in the
> worst case for a single module. We always kernel_read*() the module,
> that's a vmalloc(). Then vmap() is used for the module decompression,
> and if so the last read buffer is freed as we use the now decompressed
> module buffer to stuff data into our copy module. The last allocation is
> specific to each architecture but pretty much that's generally a series
> of vmalloc() calls or a variation of vmalloc() to handle ELF sections with
> special permissions.
> 
> Evaluation with new stress-ng module support [1] with just 100 ops
> is proving that you can end up using GiBs of data easily even with all the
> care we have in the kernel and userspace today in trying to not load modules
> which are already loaded. 100 ops seems to resemble the sort of pressure a
> system with about 400 CPUs can create on module loading. Although issues
> relating to duplicate module requests due to each CPU incurring a new
> module request are silly and some of these are being fixed, we currently lack
> proper tooling to help diagnose easily what happened, when it happened
> and who likely is to blame -- userspace or kernel module autoloading.
> 
> Provide an initial set of stats which use debugfs to let us easily scrape
> post-boot information about failed loads. This sort of information can
> be used on production workloads to try to optimize *avoiding* redundant
> memory pressure using finit_module().

This looks useful, thanks for it. Some comments below.

> 
> There's a few examples that can be provided:
> 
> A 255 vCPU system without the next patch in this series applied:
> 
> Startup finished in 19.143s (kernel) + 7.078s (userspace) = 26.221s
> graphical.target reached after 6.988s in userspace
> 
> And 13.58 GiB of virtual memory space lost due to failed module loading:
> 
> root@big ~ # cat /sys/kernel/debug/modules/stats
>          Mods ever loaded       67
>      Mods failed on kread       0
> Mods failed on decompress       0
>   Mods failed on becoming       0
>       Mods failed on load       1411
>         Total module size       11464704
>       Total mod text size       4194304
>        Failed kread bytes       0
>   Failed decompress bytes       0
>     Failed becoming bytes       0
>         Failed kmod bytes       14588526272
>  Virtual mem wasted bytes       14588526272
>          Average mod size       171115
>     Average mod text size       62602
>   Average fail load bytes       10339140
> Duplicate failed modules:
>               module-name        How-many-times                    Reason
>                 kvm_intel                   249                      Load
>                       kvm                   249                      Load
>                 irqbypass                     8                      Load
>          crct10dif_pclmul                   128                      Load
>       ghash_clmulni_intel                    27                      Load
>              sha512_ssse3                    50                      Load
>            sha512_generic                   200                      Load
>               aesni_intel                   249                      Load
>               crypto_simd                    41                      Load
>                    cryptd                   131                      Load
>                     evdev                     2                      Load
>                 serio_raw                     1                      Load
>                virtio_pci                     3                      Load
>                      nvme                     3                      Load
>                 nvme_core                     3                      Load
>     virtio_pci_legacy_dev                     3                      Load
>     virtio_pci_modern_dev                     3                      Load
>                    t10_pi                     3                      Load
>                    virtio                     3                      Load
>              crc32_pclmul                     6                      Load
>            crc64_rocksoft                     3                      Load
>              crc32c_intel                    40                      Load
>               virtio_ring                     3                      Load
>                     crc64                     3                      Load
> 
> The following screen shot, of a simple 8vcpu 8 GiB KVM guest with the
> next patch in this series applied, shows 226.53 MiB are wasted in virtual
> memory allocations due to duplicate module requests during boot.
> It also shows an average module memory size of 167.10 KiB and an
> average module .text + .init.text size of 61.13 KiB. The end shows all
> modules which were detected as duplicate requests and whether or not
> they failed early after just the first kernel_read*() call or late after
> we've already allocated the private space for the module in
> layout_and_allocate(). A system with module decompression would reveal
> more wasted virtual memory space.
> 
> We should put effort now into identifying the source of these duplicate
> module requests and trimming these down as much possible. Larger systems
> will obviously show much more wasted virtual memory allocations.
> 
> root@kmod ~ # cat /sys/kernel/debug/modules/stats
>          Mods ever loaded       67
>      Mods failed on kread       0
> Mods failed on decompress       0
>   Mods failed on becoming       83
>       Mods failed on load       16
>         Total module size       11464704
>       Total mod text size       4194304
>        Failed kread bytes       0
>   Failed decompress bytes       0
>     Failed becoming bytes       228959096
>         Failed kmod bytes       8578080
>  Virtual mem wasted bytes       237537176
>          Average mod size       171115
>     Average mod text size       62602
>   Avg fail becoming bytes       2758544
>   Average fail load bytes       536130
> Duplicate failed modules:
>               module-name        How-many-times                    Reason
>                 kvm_intel                     7                  Becoming
>                       kvm                     7                  Becoming
>                 irqbypass                     6           Becoming & Load
>          crct10dif_pclmul                     7           Becoming & Load
>       ghash_clmulni_intel                     7           Becoming & Load
>              sha512_ssse3                     6           Becoming & Load
>            sha512_generic                     7           Becoming & Load
>               aesni_intel                     7                  Becoming
>               crypto_simd                     7           Becoming & Load
>                    cryptd                     3           Becoming & Load
>                     evdev                     1                  Becoming
>                 serio_raw                     1                  Becoming
>                      nvme                     3                  Becoming
>                 nvme_core                     3                  Becoming
>                    t10_pi                     3                  Becoming
>                virtio_pci                     3                  Becoming
>              crc32_pclmul                     6           Becoming & Load
>            crc64_rocksoft                     3                  Becoming
>              crc32c_intel                     3                  Becoming
>     virtio_pci_modern_dev                     2                  Becoming
>     virtio_pci_legacy_dev                     1                  Becoming
>                     crc64                     2                  Becoming
>                    virtio                     2                  Becoming
>               virtio_ring                     2                  Becoming
> 
> [0] https://github.com/ColinIanKing/stress-ng.git
> [1] echo 0 > /proc/sys/vm/oom_dump_tasks
>     ./stress-ng --module 100 --module-name xfs
> 
> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
> ---
>  Documentation/core-api/kernel-api.rst |  22 +-
>  kernel/module/Kconfig                 |  37 +++
>  kernel/module/Makefile                |   1 +
>  kernel/module/decompress.c            |   4 +
>  kernel/module/internal.h              |  74 +++++
>  kernel/module/main.c                  |  65 +++-
>  kernel/module/stats.c                 | 432 ++++++++++++++++++++++++++
>  kernel/module/tracking.c              |   7 +-
>  8 files changed, 630 insertions(+), 12 deletions(-)
>  create mode 100644 kernel/module/stats.c
> 
> diff --git a/Documentation/core-api/kernel-api.rst b/Documentation/core-api/kernel-api.rst
> index e27728596008..9b3f3e5f5a95 100644
> --- a/Documentation/core-api/kernel-api.rst
> +++ b/Documentation/core-api/kernel-api.rst
> @@ -220,12 +220,30 @@ relay interface
>  Module Support
>  ==============
>  
> -Module Loading
> ---------------
> +Kernel module auto-loading
> +--------------------------
>  
>  .. kernel-doc:: kernel/module/kmod.c
>     :export:
>  
> +Module debugging
> +----------------
> +
> +.. kernel-doc:: kernel/module/stats.c
> +   :doc: module debugging statistics overview
> +
> +dup_failed_modules - tracks duplicate failed modules
> +****************************************************
> +
> +.. kernel-doc:: kernel/module/stats.c
> +   :doc: dup_failed_modules - tracks duplicate failed modules
> +
> +module statistics debugfs counters
> +**********************************
> +
> +.. kernel-doc:: kernel/module/stats.c
> +   :doc: module statistics debugfs counters
> +
>  Inter Module support
>  --------------------
>  
> diff --git a/kernel/module/Kconfig b/kernel/module/Kconfig
> index 424b3bc58f3f..ca277b945a67 100644
> --- a/kernel/module/Kconfig
> +++ b/kernel/module/Kconfig
> @@ -22,6 +22,43 @@ menuconfig MODULES
>  
>  if MODULES
>  
> +config MODULE_DEBUG
> +	bool "Module debugging"
> +	depends on DEBUG_FS
> +	help
> +	  Allows you to enable / disable features which can help you debug
> +	  modules. You don't need these options in produciton systems. You can
> +	  and probably should enable this prior to making your kernel
> +	  produciton ready though.

2x typo: produciton -> production.

The last sentence could be misinterpreted to mean that you should enable this
to make your kernel production ready. Not sure, maybe I would just drop this
sentence.

Note that there are plenty of other typos in the added comments and
documentation. Please review them with a spell checker.

> +
> +if MODULE_DEBUG
> +
> +config MODULE_STATS
> +	bool "Module statistics"
> +	depends on DEBUG_FS
> +	help
> +	  This option allows you to maintain a record of module statistics.
> +	  For example each all modules size, average size, text size, and
> +	  failed modules and the size for each of those. For failed

This sentence doesn't quite make sense. I guess it should say something like:
For example, size of all modules, average size, text size, a list of failed
modules and the size for each of those.

> +	  modules we keep track of module which failed due to either the
> +	  existing module taking too long to load or that module already
> +	  was loaded.
> +
> +	  You should enable this if you are debugging production loads
> +	  and want to see if userspace or the kernel is doing stupid things
> +	  with loading modules when it shouldn't or if you want to help
> +	  optimize userspace / kernel space module autoloading schemes.
> +	  You might want to do this because failed modules tend to use
> +	  use up significan amount of memory, and so you'd be doing everyone

Word 'use' is repeated twice.

> +	  a favor in avoiding these failure proactively.
> +
> +	  This functionality is also useful for those experimenting with
> +	  module .text ELF section optimization.
> +
> +	  If unsure, say N.
> +
> +endif # MODULE_DEBUG
> +
>  config MODULE_FORCE_LOAD
>  	bool "Forced module loading"
>  	default n
> diff --git a/kernel/module/Makefile b/kernel/module/Makefile
> index 5b1d26b53b8d..52340bce497e 100644
> --- a/kernel/module/Makefile
> +++ b/kernel/module/Makefile
> @@ -21,3 +21,4 @@ obj-$(CONFIG_SYSFS) += sysfs.o
>  obj-$(CONFIG_KGDB_KDB) += kdb.o
>  obj-$(CONFIG_MODVERSIONS) += version.o
>  obj-$(CONFIG_MODULE_UNLOAD_TAINT_TRACKING) += tracking.o
> +obj-$(CONFIG_MODULE_STATS) += stats.o
> diff --git a/kernel/module/decompress.c b/kernel/module/decompress.c
> index 7ddc87bee274..e97232b125eb 100644
> --- a/kernel/module/decompress.c
> +++ b/kernel/module/decompress.c
> @@ -297,6 +297,10 @@ int module_decompress(struct load_info *info, const void *buf, size_t size)
>  	ssize_t data_size;
>  	int error;
>  
> +#if defined(CONFIG_MODULE_STATS)
> +	info->compressed_len = size;
> +#endif
> +
>  	/*
>  	 * Start with number of pages twice as big as needed for
>  	 * compressed data.
> diff --git a/kernel/module/internal.h b/kernel/module/internal.h
> index 6ae29bb8836f..9d97a59a9127 100644
> --- a/kernel/module/internal.h
> +++ b/kernel/module/internal.h
> @@ -59,6 +59,9 @@ struct load_info {
>  	unsigned long mod_kallsyms_init_off;
>  #endif
>  #ifdef CONFIG_MODULE_DECOMPRESS
> +#ifdef CONFIG_MODULE_STATS
> +	unsigned long compressed_len;
> +#endif
>  	struct page **pages;
>  	unsigned int max_pages;
>  	unsigned int used_pages;
> @@ -143,6 +146,77 @@ static inline bool set_livepatch_module(struct module *mod)
>  #endif
>  }
>  
> +/**
> + * enum fail_dup_mod_reason - state at which a duplicate module was detected
> + *
> + * @FAIL_DUP_MOD_BECOMING: the module is read properly, passes all checks but
> + * 	we've determined that another module with the same name is already loaded
> + * 	or being processed on our &modules list. This happens on early_mod_check()
> + * 	right before layout_and_allocate(). The kernel would have already
> + * 	vmalloc()'d space for the entire module through finit_module(). If
> + * 	decompression was used two vmap() spaces were used. These failures can
> + * 	happen when userspace has not seen the module present on the kernel and
> + * 	tries to load the module multiple times at same time.
> + * @FAIL_DUP_MOD_LOAD: the module has been read properly, passes all validation
> + *	checks and the kernel determines that the module was unique and because
> + *	of this allocated yet another private kernel copy of the module space in
> + *	layout_and_allocate() but after this determined in add_unformed_module()
> + *	that another module with the same name is already loaded or being processed.
> + *	These failures should be mitigated as much as possible and are indicative
> + *	of really fast races in loading modules. Without module decompression
> + *	they waste twice as much vmap space. With module decompression three
> + *	times the module's size vmap space is wasted.
> + */
> +enum fail_dup_mod_reason {
> +	FAIL_DUP_MOD_BECOMING = 0,
> +	FAIL_DUP_MOD_LOAD,
> +};
> +
> +#ifdef CONFIG_MODULE_STATS
> +
> +#define mod_stat_add_long(count, var) atomic_long_add(count, var)
> +#define mod_stat_inc(name) atomic_inc(name)
> +
> +extern atomic_long_t total_mod_size;
> +extern atomic_long_t total_text_size;
> +extern atomic_long_t invalid_kread_bytes;
> +extern atomic_long_t invalid_decompress_bytes;
> +
> +extern atomic_t modcount;
> +extern atomic_t failed_kreads;
> +extern atomic_t failed_decompress;
> +struct mod_fail_load {
> +	struct list_head list;
> +	char name[MODULE_NAME_LEN];
> +	atomic_long_t count;
> +	unsigned long dup_fail_mask;
> +};
> +
> +int try_add_failed_module(const char *name, size_t len, enum fail_dup_mod_reason reason);
> +void mod_stat_bump_invalid(struct load_info *info, int flags);
> +void mod_stat_bump_becoming(struct load_info *info, int flags);
> +
> +#else
> +
> +#define mod_stat_add_long(name, var)
> +#define mod_stat_inc(name)
> +
> +static inline int try_add_failed_module(const char *name, size_t len,
> +					enum fail_dup_mod_reason reason)
> +{
> +	return 0;
> +}
> +
> +static inline void mod_stat_bump_invalid(struct load_info *info, int flags)
> +{
> +}
> +
> +static inline void mod_stat_bump_becoming(struct load_info *info, int flags)
> +{
> +}
> +
> +#endif /* CONFIG_MODULE_STATS */
> +
>  #ifdef CONFIG_MODULE_UNLOAD_TAINT_TRACKING
>  struct mod_unload_taint {
>  	struct list_head list;
> diff --git a/kernel/module/main.c b/kernel/module/main.c
> index 75b23257128d..5642d77657a0 100644
> --- a/kernel/module/main.c
> +++ b/kernel/module/main.c
> @@ -56,6 +56,7 @@
>  #include <linux/dynamic_debug.h>
>  #include <linux/audit.h>
>  #include <linux/cfi.h>
> +#include <linux/debugfs.h>
>  #include <uapi/linux/module.h>
>  #include "internal.h"
>  
> @@ -87,6 +88,8 @@ struct symsearch {
>  	enum mod_license license;
>  };
>  
> +struct dentry *mod_debugfs_root;
> +
>  /*
>   * Bounds of module memory, for speeding up __module_address.
>   * Protected by module_mutex.
> @@ -2500,6 +2503,18 @@ static noinline int do_init_module(struct module *mod)
>  {
>  	int ret = 0;
>  	struct mod_initfree *freeinit;
> +#if defined(CONFIG_MODULE_STATS)
> +	unsigned int text_size = 0, total_size = 0;
> +
> +	for_each_mod_mem_type(type) {
> +		const struct module_memory *mod_mem = &mod->mem[type];
> +		if (mod_mem->size) {
> +			total_size += mod_mem->size;
> +			if (type == MOD_TEXT || type == MOD_INIT_TEXT)
> +				text_size += mod->mem[type].size;

'text_size += mod_mem->size;' would be simpler.

> +		}
> +	}
> +#endif
>  
>  	freeinit = kmalloc(sizeof(*freeinit), GFP_KERNEL);
>  	if (!freeinit) {
> @@ -2561,6 +2576,7 @@ static noinline int do_init_module(struct module *mod)
>  		mod->mem[type].base = NULL;
>  		mod->mem[type].size = 0;
>  	}
> +
>  #ifdef CONFIG_DEBUG_INFO_BTF_MODULES
>  	/* .BTF is not SHF_ALLOC and will get removed, so sanitize pointer */
>  	mod->btf_data = NULL;
> @@ -2584,6 +2600,11 @@ static noinline int do_init_module(struct module *mod)
>  	mutex_unlock(&module_mutex);
>  	wake_up_all(&module_wq);
>  
> +	mod_stat_add_long(text_size, &total_text_size);
> +	mod_stat_add_long(total_size, &total_mod_size);
> +
> +	mod_stat_inc(&modcount);
> +
>  	return 0;
>  
>  fail_free_freeinit:
> @@ -2599,6 +2620,7 @@ static noinline int do_init_module(struct module *mod)
>  	ftrace_release_mod(mod);
>  	free_module(mod);
>  	wake_up_all(&module_wq);
> +
>  	return ret;
>  }
>  
> @@ -2632,7 +2654,8 @@ static bool finished_loading(const char *name)
>  }
>  
>  /* Must be called with module_mutex held */
> -static int module_patient_check_exists(const char *name)
> +static int module_patient_check_exists(const char *name,
> +				       enum fail_dup_mod_reason reason)
>  {
>  	struct module *old;
>  	int err = 0;
> @@ -2655,6 +2678,9 @@ static int module_patient_check_exists(const char *name)
>  		old = find_module_all(name, strlen(name), true);
>  	}
>  
> +	if (try_add_failed_module(name, strlen(name), reason))
> +		pr_warn("Could not add fail-tracking for module: %s\n", name);
> +
>  	/*
>  	 * We are here only when the same module was being loaded. Do
>  	 * not try to load it again right now. It prevents long delays
> @@ -2679,7 +2705,7 @@ static int add_unformed_module(struct module *mod)
>  	mod->state = MODULE_STATE_UNFORMED;
>  
>  	mutex_lock(&module_mutex);
> -	err = module_patient_check_exists(mod->name);
> +	err = module_patient_check_exists(mod->name, FAIL_DUP_MOD_LOAD);
>  	if (err)
>  		goto out;
>  
> @@ -2800,6 +2826,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
>  		       int flags)
>  {
>  	struct module *mod;
> +	bool module_allocated = false;
>  	long err = 0;
>  	char *after_dashes;
>  
> @@ -2839,6 +2866,8 @@ static int load_module(struct load_info *info, const char __user *uargs,
>  		goto free_copy;
>  	}
>  
> +	module_allocated = true;
> +
>  	audit_log_kern_module(mod->name);
>  
>  	/* Reserve our place in the list. */
> @@ -2983,6 +3012,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
>  	synchronize_rcu();
>  	mutex_unlock(&module_mutex);
>   free_module:
> +	mod_stat_bump_invalid(info, flags);
>  	/* Free lock-classes; relies on the preceding sync_rcu() */
>  	for_class_mod_mem_type(type, core_data) {
>  		lockdep_free_key_range(mod->mem[type].base,
> @@ -2991,6 +3021,13 @@ static int load_module(struct load_info *info, const char __user *uargs,
>  
>  	module_deallocate(mod, info);
>   free_copy:
> +	/*
> +	 * The info->len is always set. We distinguish between
> +	 * failures once the proper module was allocated and
> +	 * before that.
> +	 */
> +	if (!module_allocated)
> +		mod_stat_bump_becoming(info, flags);
>  	free_copy(info, flags);
>  	return err;
>  }
> @@ -3009,8 +3046,11 @@ SYSCALL_DEFINE3(init_module, void __user *, umod,
>  	       umod, len, uargs);
>  
>  	err = copy_module_from_user(umod, len, &info);
> -	if (err)
> +	if (err) {
> +		mod_stat_inc(&failed_kreads);
> +		mod_stat_add_long(len, &invalid_kread_bytes);
>  		return err;
> +	}
>  
>  	return load_module(&info, uargs, 0);
>  }
> @@ -3035,14 +3075,20 @@ SYSCALL_DEFINE3(finit_module, int, fd, const char __user *, uargs, int, flags)
>  
>  	len = kernel_read_file_from_fd(fd, 0, &buf, INT_MAX, NULL,
>  				       READING_MODULE);
> -	if (len < 0)
> +	if (len < 0) {
> +		mod_stat_inc(&failed_kreads);
> +		mod_stat_add_long(len, &invalid_kread_bytes);
>  		return len;
> +	}
>  
>  	if (flags & MODULE_INIT_COMPRESSED_FILE) {
>  		err = module_decompress(&info, buf, len);
>  		vfree(buf); /* compressed data is no longer needed */
> -		if (err)
> +		if (err) {
> +			mod_stat_inc(&failed_decompress);
> +			mod_stat_add_long(len, &invalid_decompress_bytes);
>  			return err;
> +		}
>  	} else {
>  		info.hdr = buf;
>  		info.len = len;
> @@ -3216,3 +3262,12 @@ void print_modules(void)
>  			last_unloaded_module.taints);
>  	pr_cont("\n");
>  }
> +
> +#ifdef CONFIG_MODULE_DEBUG
> +static int module_debugfs_init(void)
> +{
> +	mod_debugfs_root = debugfs_create_dir("modules", NULL);
> +	return 0;
> +}
> +module_init(module_debugfs_init);
> +#endif
> diff --git a/kernel/module/stats.c b/kernel/module/stats.c
> new file mode 100644
> index 000000000000..d4b5b2b9e6ad
> --- /dev/null
> +++ b/kernel/module/stats.c
> @@ -0,0 +1,432 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * Debugging module statistics.
> + *
> + * Copyright (C) 2023 Luis Chamberlain <mcgrof@kernel.org>
> + */
> +
> +#include <linux/module.h>
> +#include <linux/string.h>
> +#include <linux/printk.h>
> +#include <linux/slab.h>
> +#include <linux/list.h>
> +#include <linux/debugfs.h>
> +#include <linux/rculist.h>
> +#include <linux/math.h>
> +
> +#include "internal.h"
> +
> +/**
> + * DOC: module debugging statistics overview
> + *
> + * Enabling CONFIG_MODULE_STATS enables module debugging statistics which
> + * are useful to monitor and root cause memory pressure issues with module
> + * loading. These statistics are useful to allow us to improve production
> + * workloads.
> + *
> + * The current module debugging statistics supported help keep track of module
> + * loading failures to enable improvements either for kernel module
> + * auto-loading usage (request_module()) or interactions with userspace.
> + * Statistics are provided to track of all possible failures in the
> + * finit_module() path and memory wasted in this process space.  Each of the
> + * failure counters are associated to a type of module loading failure which
> + * is known to incur a certain amount of memory allocation loss. In the worst
> + * case loading a module will fail after a 3 step memory allocation process:
> + *
> + *   a) memory allocated with kernel_read_file_from_fd()
> + *   b) module decompression processes the file read from
> + *      kernel_read_file_from_fd(), and vmap() is used to map
> + *      the decompressed module to a new local buffer which represents
> + *      a copy of the decompressed module passed from userspace. The buffer
> + *      from kernel_read_file_from_fd() is freed right away.
> + *   c) layout_and_allocate() allocates space for the final resting
> + *      place where we would keep the module if it were to be processed
> + *      successfully.
> + *
> + * If a failure occurs after these three different allocations only one
> + * counters will be incremetned with the summation of the lost bytes incurred
> + * during this failure. Likewise, if a module loading failed only after step b)
> + * a separate counter is used and incremented for the bytes lost during both
> + * of those allocations.
> + *
> + * Virtual memory space can be limited, for example on x86 virtual memory size
> + * defaults to 128 MiB. We should strive to limit and avoid wasting virtual
> + * memory allocations when possible. These module dubugging statistics help
> + * to evaluate how much memory is being wasted on bootup due to module loading
> + * failures.
> + *
> + * All counters are designed to be incremental. Atomic counters are used so to
> + * remain simple and avoid delays and deadlocks.
> + */
> +
> +extern struct dentry *mod_debugfs_root;

Files kernel/module/stats.c and kernel/module/tracking.c both add this extern
declaration. Can it be moved to kernel/module/internal.h?

> +
> +/**
> + * DOC: dup_failed_modules - tracks duplicate failed modules
> + *
> + * Linked list of modules which failed to be loaded because an already existing
> + * module with the same name was already being processed or already loaded.
> + * The finit_module() system call incurs heavy virtual memory allocations. In
> + * the worst case an finit_module() system call can end up allocating virtual
> + * memory 3 times:
> + *
> + *   1) kernel_read_file_from_fd() call uses vmalloc()
> + *   2) optional module decompression uses vmap()
> + *   3) layout_and allocate() can use vzalloc() or an arch specific variation of
> + *      vmalloc to deal with ELF sections requiring special permissions
> + *
> + * In practice on a typical boot today most finit_module() calls fail due to
> + * the module with the same name already being loaded or about to be processed.
> + * All virtual memory allocated to these failed modules will be lost with
> + * no functional use.
> + *
> + * To help with this the dup_failed_modules allows us to track modules which
> + * failed to load due to the fact that a module already was loaded or being
> + * processed already.  There are only two points at which we can fail such
> + * calls, we list them below along with the number of virtual memory allocation
> + * calls:
> + *
> + *   a) FAIL_DUP_MOD_BECOMING: at the end of early_mod_check() before
> + *	layout_and_allocate(). This does not yet happen.
> + *	- with module decompression: 2 virtual memory allocation calls
> + *	- without module decompression: 1 virtual memory allocation calls
> + *   b) FAIL_DUP_MOD_LOAD: after layout_and_allocate() on add_unformed_module()
> + *   	- with module decompression 3 virtual memory allocation calls
> + *   	- without module decompression 2 virtual memory allocation calls
> + *
> + * We should strive to get this list to be as small as possible. If this list
> + * is not empty it is a reflection of possible work or optimizations possible
> + * either in-kernel or in userspace.
> + */
> +static LIST_HEAD(dup_failed_modules);
> +
> +/**
> + * DOC: module statistics debugfs counters
> + *
> + * The total amount of wasted virtual memory allocation space during module
> + * loading can be computed by adding the total from the summation:
> + *
> + *   * @invalid_kread_bytes +
> + *     @invalid_decompress_bytes +
> + *     @invalid_becoming_bytes +
> + *     @invalid_mod_bytes
> + *
> + * The following debugfs counters are available to inspect module loading
> + * failures:
> + *
> + *   * total_mod_size: total bytes ever used by all modules we've dealt with on
> + *     this system
> + *   * total_text_size: total bytes of the .text and .init.text ELF section
> + *     sizes we've dealt with on this system
> + *   * invalid_kread_bytes: bytes wasted in failures which happen due to
> + *     memory allocations with the initial kernel_read_file_from_fd().
> + *     kernel_read_file_from_fd() uses vmalloc() and so these are wasted
> + *     vmalloc() memory allocations. These should typically not happen unless
> + *     your system is under memory pressure.
> + *   * invalid_decompress_bytes: number of bytes wasted due to
> + *     memory allocations in the module decompression path that use vmap().
> + *     These typically should not happen unless your system is under memory
> + *     presssure.
> + *   * invalid_becoming_bytes: total number of bytes wasted due to
> + *     allocations used to read the kernel module userspace wants us to read
> + *     before we promote it to be processed to be added to our @modules linked
> + *     list. These failures could in theory happen in if we had a check in between
> + *     between a successful kernel_read_file_from_fd() call and right before
> + *     we allocate the our private memory for the module which would be kept if
> + *     the module is successfully loaded. The most common reason for this failure
> + *     is when userspace is racing to load a module which it does not yet see
> + *     loaded. The first module to succeed in add_unformed_module() will add a
> + *     module to our &modules list and subsequent loads of modules with the
> + *     same name will error out at the end of early_mod_check(). A check
> + *     for module_patient_check_exists() at the end of early_mod_check() could be
> + *     added to prevent duplicate allocations on layout_and_allocate() for
> + *     modules already being processed. These duplicate failed modules are
> + *     non-fatal, however they typically are indicative of userspace not seeing
> + *     a module in userspace loaded yet and unecessarily trying to load a
> + *     module before the kernel even has a chance to begin to process prior
> + *     requests. Although duplicate failures can be non-fatal, we should try to
> + *     reduce vmalloc() pressure proactively, so ideally after boot this will
> + *     be close to as 0 as possible.  If module decompression was used we also
> + *     add to this counter the cost of the initial kernel_read_file_from_fd()
> + *     of the compressed module. If module decompression was not used the
> + *     value represents the total wasted allocations in kernel_read_file_from_fd()
> + *     calls for these type of failures. These failures can occur because:
> + *
> + *    * module_sig_check() - module signature checks
> + *    * elf_validity_cache_copy() - some ELF validation issue
> + *    * early_mod_check():
> + *
> + *      * blacklisting
> + *      * failed to rewrite section headers
> + *      * version magic
> + *      * live patch requirements didn't check out
> + *      * the module was detected as being already present
> + *
> + *   * invalid_mod_bytes: these are the total number of bytes lost due to
> + *     failures after we did all the sanity checks of the module which userspace
> + *     passed to us and after our first check that the module is unique.  A
> + *     module can still fail to load if we detect the module is loaded after we
> + *     allocate space for it with layout_and_allocate(), we do this check right
> + *     before processing the module as live and run its initialiation routines.
> + *     Note that you have a failure of this type it also means the respective
> + *     kernel_read_file_from_fd() memory space was also wasted, and so we
> + *     increment this counter with twice the size of the module. Additionally
> + *     if you used module decompression the size of the compressed module is
> + *     also added to this counter.
> + *
> + *  * modcount: how many modules we've loaded in our kernel life time
> + *  * failed_kreads: how many modules failed due to failed kernel_read_file_from_fd()
> + *  * failed_decompress: how many failed module decompression attempts we've had.
> + *    These really should not happen unless your compression / decompression
> + *    might be broken.
> + *  * failed_becoming: how many modules failed after we kernel_read_file_from_fd()
> + *    it and before we allocate memory for it with layout_and_allocate(). This
> + *    counter is never incremented if you manage to validate the module and
> + *    call layout_and_allocate() for it.
> + *  * failed_load_modules: how many modules failed once we've allocated our
> + *    private space for our module using layout_and_allocate(). These failures
> + *    should hopefully mostly be dealt with already. Races in theory could
> + *    still exist here, but it would just mean the kernel had started processing
> + *    two threads concurrently up to early_mod_check() and then one just one
> + *    thread won. These failures are good signs the kernel or userspace is
> + *    doing something seriously stupid or that could be improved. We should
> + *    strive to fix these, but it is perhaps not easy to fix them.
> + *    A recent example are the modules requests incurred for frequency modules,
> + *    a separate module request was being issued for each CPU on a system.
> + */
> +
> +atomic_long_t total_mod_size;
> +atomic_long_t total_text_size;
> +atomic_long_t invalid_kread_bytes;
> +atomic_long_t invalid_decompress_bytes;
> +static atomic_long_t invalid_becoming_bytes;
> +static atomic_long_t invalid_mod_bytes;
> +atomic_t modcount;
> +atomic_t failed_kreads;
> +atomic_t failed_decompress;
> +static atomic_t failed_becoming;
> +static atomic_t failed_load_modules;
> +
> +static const char *mod_fail_to_str(struct mod_fail_load *mod_fail)
> +{
> +	if (test_bit(FAIL_DUP_MOD_BECOMING, &mod_fail->dup_fail_mask) &&
> +	    test_bit(FAIL_DUP_MOD_LOAD, &mod_fail->dup_fail_mask))
> +		return "Becoming & Load";
> +	if (test_bit(FAIL_DUP_MOD_BECOMING, &mod_fail->dup_fail_mask))
> +		return "Becoming";
> +	if (test_bit(FAIL_DUP_MOD_LOAD, &mod_fail->dup_fail_mask))
> +		return "Load";
> +	return "Bug-on-stats";
> +}
> +
> +void mod_stat_bump_invalid(struct load_info *info, int flags)
> +{
> +	atomic_long_add(info->len * 2, &invalid_mod_bytes);
> +	atomic_inc(&failed_load_modules);
> +#if defined(CONFIG_MODULE_DECOMPRESS)
> +	if (flags & MODULE_INIT_COMPRESSED_FILE)
> +		atomic_long_add(info->compressed_len, &invalid_mod_byte);

Variable invalid_mod_byte is not declared, should be invalid_mod_bytes.

> +#endif
> +}
> +
> +void mod_stat_bump_becoming(struct load_info *info, int flags)
> +{
> +	atomic_inc(&failed_becoming);
> +	atomic_long_add(info->len, &invalid_becoming_bytes);
> +#if defined(CONFIG_MODULE_DECOMPRESS)
> +	if (flags & MODULE_INIT_COMPRESSED_FILE)
> +		atomic_long_add(info->compressed_len, &invalid_becoming_bytes);
> +#endif
> +}
> +
> +int try_add_failed_module(const char *name, size_t len, enum fail_dup_mod_reason reason)

Function try_add_failed_module() is only called from
module_patient_check_exists() which always passes in a NUL-terminated string.
The len parameter could then be dropped and the comparison in
try_add_failed_module() could simply use strcmp().
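
I.e., something like this (untested):

	list_for_each_entry_rcu(mod_fail, &dup_failed_modules, list,
				lockdep_is_held(&module_mutex)) {
		if (!strcmp(mod_fail->name, name)) {
			atomic_long_inc(&mod_fail->count);
			__set_bit(reason, &mod_fail->dup_fail_mask);
			goto out;
		}
	}
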

> +{
> +	struct mod_fail_load *mod_fail;
> +
> +	list_for_each_entry_rcu(mod_fail, &dup_failed_modules, list,
> +				lockdep_is_held(&module_mutex)) {
> +		if (strlen(mod_fail->name) == len && !memcmp(mod_fail->name, name, len)) {
> +                        atomic_long_inc(&mod_fail->count);
> +			__set_bit(reason, &mod_fail->dup_fail_mask);
> +                        goto out;
> +                }
> +        }
> +
> +	mod_fail = kzalloc(sizeof(*mod_fail), GFP_KERNEL);
> +	if (!mod_fail)
> +		return -ENOMEM;
> +	memcpy(mod_fail->name, name, len);
> +	__set_bit(reason, &mod_fail->dup_fail_mask);
> +        atomic_long_inc(&mod_fail->count);
> +        list_add_rcu(&mod_fail->list, &dup_failed_modules);
> +out:
> +	return 0;
> +}

Indentation in try_add_failed_module() uses spaces instead of tabs in a few
places.

> +
> +/*
> + * At 64 bytes per module and assuming a 1024 bytes preamble we can fit the
> + * 112 module prints within 8k.
> + *
> + * 1024 + (64*112) = 8k
> + */
> +#define MAX_PREAMBLE 1024
> +#define MAX_FAILED_MOD_PRINT 112
> +#define MAX_BYTES_PER_MOD 64
> +static ssize_t read_file_mod_stats(struct file *file, char __user *user_buf,
> +				   size_t count, loff_t *ppos)
> +{
> +	struct mod_fail_load *mod_fail;
> +	unsigned int len, size, count_failed = 0;
> +	char *buf;
> +	u32 live_mod_count, fkreads, fdecompress, fbecoming, floads;
> +	u64 total_size, text_size, ikread_bytes, ibecoming_bytes, idecompress_bytes, imod_bytes,
> +	    total_virtual_lost;
> +
> +	live_mod_count = atomic_read(&modcount);
> +	fkreads = atomic_read(&failed_kreads);
> +	fdecompress = atomic_read(&failed_decompress);
> +	fbecoming = atomic_read(&failed_becoming);
> +	floads = atomic_read(&failed_load_modules);
> +
> +	total_size = atomic64_read(&total_mod_size);
> +	text_size = atomic64_read(&total_text_size);
> +	ikread_bytes = atomic64_read(&invalid_kread_bytes);
> +	idecompress_bytes = atomic64_read(&invalid_decompress_bytes);
> +	ibecoming_bytes = atomic64_read(&invalid_becoming_bytes);
> +	imod_bytes = atomic64_read(&invalid_mod_bytes);
> +
> +	total_virtual_lost = ikread_bytes + idecompress_bytes + ibecoming_bytes + imod_bytes;
> +
> +	size = MAX_PREAMBLE + min((unsigned int)(floads + fbecoming) * MAX_BYTES_PER_MOD,
> +			  (unsigned int) MAX_FAILED_MOD_PRINT * MAX_BYTES_PER_MOD);

Using
'size = MAX_PREAMBLE + min((unsigned int)(floads + fbecoming), (unsigned int)MAX_FAILED_MOD_PRINT) * MAX_BYTES_PER_MOD;'
is a bit simpler and avoids any theoretical overflow of
'(floads + fbecoming) * MAX_BYTES_PER_MOD'.

> +	buf = kzalloc(size, GFP_KERNEL);
> +	if (buf == NULL)
> +		return -ENOMEM;
> +
> +	/* The beginning of our debug preamble */
> +	len = scnprintf(buf + 0, size - len, "%25s\t%u\n", "Mods ever loaded", live_mod_count);
> +
> +	len += scnprintf(buf + len, size - len, "%25s\t%u\n", "Mods failed on kread", fkreads);
> +
> +	len += scnprintf(buf + len, size - len, "%25s\t%u\n", "Mods failed on decompress",
> +			 fdecompress);
> +	len += scnprintf(buf + len, size - len, "%25s\t%u\n", "Mods failed on becoming", fbecoming);
> +
> +	len += scnprintf(buf + len, size - len, "%25s\t%u\n", "Mods failed on load", floads);
> +
> +	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Total module size", total_size);
> +	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Total mod text size", text_size);
> +
> +	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Failed kread bytes", ikread_bytes);
> +
> +	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Failed decompress bytes",
> +			 idecompress_bytes);
> +
> +	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Failed becoming bytes", ibecoming_bytes);
> +
> +	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Failed kmod bytes", imod_bytes);
> +
> +	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Virtual mem wasted bytes", total_virtual_lost);
> +
> +	if (live_mod_count && total_size) {
> +		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Average mod size",
> +				 DIV_ROUND_UP(total_size, live_mod_count));
> +	}
> +
> +	if (live_mod_count && text_size) {
> +		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Average mod text size",
> +				 DIV_ROUND_UP(text_size, live_mod_count));
> +	}
> +
> +	/*
> +	 * We use WARN_ON_ONCE() for the counters to ensure we always have parity
> +	 * for keeping tabs on a type of failure with one type of byte counter.
> +	 * The counters for imod_bytes does not increase for fkreads failures
> +	 * for example, and so on.
> +	 */
> +
> +	WARN_ON_ONCE(ikread_bytes && !fkreads);
> +	if (fkreads && ikread_bytes) {
> +		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Avg fail kread bytes",
> +				 DIV_ROUND_UP(ikread_bytes, fkreads));
> +	}
> +
> +	WARN_ON_ONCE(ibecoming_bytes && !fbecoming);
> +	if (fbecoming && ibecoming_bytes) {
> +		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Avg fail becoming bytes",
> +				 DIV_ROUND_UP(ibecoming_bytes, fbecoming));
> +	}
> +
> +	WARN_ON_ONCE(idecompress_bytes && !fdecompress);
> +	if (fdecompress && idecompress_bytes) {
> +		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Avg fail decomp bytes",
> +				 DIV_ROUND_UP(idecompress_bytes, fdecompress));
> +	}
> +
> +	WARN_ON_ONCE(imod_bytes && !floads);
> +	if (floads && imod_bytes) {
> +		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Average fail load bytes",
> +				 DIV_ROUND_UP(imod_bytes, floads));
> +	}
> +
> +	/* End of our debug preamble header. */
> +
> +	/* Catch when we've gone beyond our expected preamble */
> +	WARN_ON_ONCE(len >= MAX_PREAMBLE);
> +
> +	if (list_empty(&dup_failed_modules))
> +		goto out;
> +
> +	len += scnprintf(buf + len, size - len, "Duplicate failed modules:\n");
> +	len += scnprintf(buf + len, size - len, "%25s\t%15s\t%25s\n",
> +			 "module-name", "How-many-times", "Reason");

"module-name" -> "Module-name"

> +	mutex_lock(&module_mutex);
> +
> +
> +	list_for_each_entry_rcu(mod_fail, &dup_failed_modules, list) {
> +		if (WARN_ON_ONCE(++count_failed >= MAX_FAILED_MOD_PRINT))
> +			goto out_unlock;
> +		len += scnprintf(buf + len, size - len, "%25s\t%15llu\t%25s\n", mod_fail->name,
> +				 atomic64_read(&mod_fail->count), mod_fail_to_str(mod_fail));
> +	}
> +out_unlock:
> +	mutex_unlock(&module_mutex);
> +out:
> +	kfree(buf);
> +        return simple_read_from_buffer(user_buf, count, ppos, buf, len);
> +}
> +#undef MAX_PREAMBLE
> +#undef MAX_FAILED_MOD_PRINT
> +#undef MAX_BYTES_PER_MOD
> +
> +static const struct file_operations fops_mod_stats = {
> +	.read = read_file_mod_stats,
> +	.open = simple_open,
> +	.owner = THIS_MODULE,
> +	.llseek = default_llseek,
> +};
> +
> +#define mod_debug_add_ulong(name) debugfs_create_ulong(#name, 0400, mod_debugfs_root, (unsigned long *) &name.counter)
> +#define mod_debug_add_atomic(name) debugfs_create_atomic_t(#name, 0400, mod_debugfs_root, &name)
> +static int __init module_stats_init(void)
> +{
> +	mod_debug_add_ulong(total_mod_size);
> +	mod_debug_add_ulong(total_text_size);
> +	mod_debug_add_ulong(invalid_kread_bytes);
> +	mod_debug_add_ulong(invalid_decompress_bytes);
> +	mod_debug_add_ulong(invalid_becoming_bytes);
> +	mod_debug_add_ulong(invalid_mod_bytes);
> +
> +	mod_debug_add_atomic(modcount);
> +	mod_debug_add_atomic(failed_kreads);
> +	mod_debug_add_atomic(failed_decompress);
> +	mod_debug_add_atomic(failed_becoming);
> +	mod_debug_add_atomic(failed_load_modules);
> +
> +	debugfs_create_file("stats", 0400, mod_debugfs_root, mod_debugfs_root, &fops_mod_stats);
> +
> +	return 0;
> +}
> +#undef mod_debug_add_ulong
> +#undef mod_debug_add_atomic
> +module_init(module_stats_init);

Function module_stats_init() requires mod_debugfs_root being initialized which
is done in module_debugfs_init(). Both functions are recorded to be called via
module_init(). Just to make sure, is their ordering guaranteed in some way?

> diff --git a/kernel/module/tracking.c b/kernel/module/tracking.c
> index 26d812e07615..16742d1c630c 100644
> --- a/kernel/module/tracking.c
> +++ b/kernel/module/tracking.c
> @@ -15,6 +15,7 @@
>  #include "internal.h"
>  
>  static LIST_HEAD(unloaded_tainted_modules);
> +extern struct dentry *mod_debugfs_root;
>  
>  int try_add_tainted_module(struct module *mod)
>  {
> @@ -120,12 +121,8 @@ static const struct file_operations unloaded_tainted_modules_fops = {
>  
>  static int __init unloaded_tainted_modules_init(void)
>  {
> -	struct dentry *dir;
> -
> -	dir = debugfs_create_dir("modules", NULL);
> -	debugfs_create_file("unloaded_tainted", 0444, dir, NULL,
> +	debugfs_create_file("unloaded_tainted", 0444, mod_debugfs_root, NULL,
>  			    &unloaded_tainted_modules_fops);

mod_debugfs_root is initialized in module_debugfs_init() only if
CONFIG_MODULE_DEBUG is set. However, my reading is that feature
CONFIG_MODULE_UNLOAD_TAINT_TRACKING is orthogonal to it and doesn't require
CONFIG_MODULE_DEBUG, so it looks like this change breaks this tracking?

> -
>  	return 0;
>  }
>  module_init(unloaded_tainted_modules_init);

Cheers,
Petr



^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH v3 3/4] module: add debug stats to help identify memory pressure
  2023-04-17 11:18   ` Petr Pavlu
@ 2023-04-18 18:30     ` Luis Chamberlain
  0 siblings, 0 replies; 9+ messages in thread
From: Luis Chamberlain @ 2023-04-18 18:30 UTC (permalink / raw)
  To: Petr Pavlu
  Cc: christophe.leroy, tglx, peterz, song, rppt, dave, willy, vbabka,
	mhocko, dave.hansen, colin.i.king, jim.cromie, catalin.marinas,
	jbaron, rick.p.edgecombe, david, patches, linux-modules,
	linux-mm, linux-kernel, pmladek, prarit, torvalds, gregkh,
	rafael

On Mon, Apr 17, 2023 at 01:18:14PM +0200, Petr Pavlu wrote:
> On 4/14/23 07:08, Luis Chamberlain wrote:

<-- Petr's spell checking -->

> Note that there are plenty of other typos in the added comments and
> documentation. Please review them with a spell checker.

Yes, I am terrible at that; I've now integrated a spell checker into
my workflow. Fixed all these, thanks.

> > @@ -2500,6 +2503,18 @@ static noinline int do_init_module(struct module *mod)
> >  {
> >  	int ret = 0;
> >  	struct mod_initfree *freeinit;
> > +#if defined(CONFIG_MODULE_STATS)
> > +	unsigned int text_size = 0, total_size = 0;
> > +
> > +	for_each_mod_mem_type(type) {
> > +		const struct module_memory *mod_mem = &mod->mem[type];
> > +		if (mod_mem->size) {
> > +			total_size += mod_mem->size;
> > +			if (type == MOD_TEXT || type == MOD_INIT_TEXT)
> > +				text_size += mod->mem[type].size;
> 
> 'text_size += mod_mem->size;' would be simpler.

Sure.

> > +extern struct dentry *mod_debugfs_root;
> 
> Files kernel/module/stats.c and kernel/module/tracking.c both add this extern
> declaration. Can it be moved to kernel/module/internal.h?

Sure.

> > +#if defined(CONFIG_MODULE_DECOMPRESS)
> > +	if (flags & MODULE_INIT_COMPRESSED_FILE)
> > +		atomic_long_add(info->compressed_len, &invalid_mod_byte);
> 
> Variable invalid_mod_byte is not declared, should be invalid_mod_bytes.

Arnd already sent a fix for that, thanks.

> > +int try_add_failed_module(const char *name, size_t len, enum fail_dup_mod_reason reason)
> 
> Function try_add_failed_module() is only called from
> module_patient_check_exists() which always passes in a NUL-terminated string.
> The len parameter could be then dropped and the comparison in
> try_add_failed_module() could simply use strcmp().

Sure, did that.

> Indentation in try_add_failed_module() uses spaces instead of tabs in a few
> places.

Fixed.

> > +	size = MAX_PREAMBLE + min((unsigned int)(floads + fbecoming) * MAX_BYTES_PER_MOD,
> > +			  (unsigned int) MAX_FAILED_MOD_PRINT * MAX_BYTES_PER_MOD);
> 
> Using
> 'size = MAX_PREAMBLE + min((unsigned int)(floads + fbecoming), (unsigned int)MAX_FAILED_MOD_PRINT) * MAX_BYTES_PER_MOD;'
> is a bit simpler and avoids any theoretical overflow of
> '(floads + fbecoming) * MAX_BYTES_PER_MOD'.

Sure.

> > +	len += scnprintf(buf + len, size - len, "%25s\t%15s\t%25s\n",
> > +			 "module-name", "How-many-times", "Reason");
> 
> "module-name" -> "Module-name"

OK sure.

> Function module_stats_init() requires mod_debugfs_root being initialized which
> is done in module_debugfs_init(). Both functions are recorded to be called via
> module_init(). Just to make sure, is their ordering guaranteed in some way?

Link order takes care of that and main.o goes first.
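
For reference, a rough sketch of the ordering this relies on -- not the
full Makefile, and assuming main.o stays listed ahead of stats.o, as it
is today:

  # kernel/module/Makefile (illustrative excerpt)
  obj-y += main.o                                    # module_debugfs_init()
  ...
  obj-$(CONFIG_MODULE_UNLOAD_TAINT_TRACKING) += tracking.o
  obj-$(CONFIG_MODULE_STATS) += stats.o              # module_stats_init()

Both are built-in, so module_init() here falls back to device_initcall(),
and initcalls at the same level run in link order, which puts
module_debugfs_init() ahead of module_stats_init().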

> mod_debugfs_root is initialized in module_debugfs_init() only if
> CONFIG_MODULE_DEBUG is set. However, my reading is that feature
> CONFIG_MODULE_UNLOAD_TAINT_TRACKING is orthogonal to it and doesn't require
> CONFIG_MODULE_DEBUG, so it looks like this change breaks this tracking?

Ah yes, we need a bool CONFIG_MODULE_DEBUGFS which is selected by those
that need it. Added.
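
Roughly, trimmed down to the relevant bits (the full hunks are in the v4
below):

  config MODULE_DEBUGFS
  	bool

  config MODULE_STATS
  	bool "Module statistics"
  	depends on DEBUG_FS
  	select MODULE_DEBUGFS
  	...

  config MODULE_UNLOAD_TAINT_TRACKING
  	bool "Tainted module unload tracking"
  	depends on MODULE_UNLOAD
  	select MODULE_DEBUGFS
  	...

with mod_debugfs_root and module_debugfs_init() in main.c now guarded by
CONFIG_MODULE_DEBUGFS instead of CONFIG_MODULE_DEBUG, so taint tracking
keeps its "modules" debugfs directory without MODULE_DEBUG.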

  Luis


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH v4] module: add debug stats to help identify memory pressure
  2023-04-14  5:08 ` [PATCH v3 3/4] module: add debug stats to help identify memory pressure Luis Chamberlain
  2023-04-17 11:18   ` Petr Pavlu
@ 2023-04-18 18:37   ` Luis Chamberlain
  1 sibling, 0 replies; 9+ messages in thread
From: Luis Chamberlain @ 2023-04-18 18:37 UTC (permalink / raw)
  To: david, patches, linux-modules, linux-mm, linux-kernel, pmladek,
	petr.pavlu, prarit, torvalds, gregkh, rafael
  Cc: christophe.leroy, tglx, peterz, song, rppt, dave, willy, vbabka,
	mhocko, dave.hansen, colin.i.king, jim.cromie, catalin.marinas,
	jbaron, rick.p.edgecombe

Loading modules with finit_module() can end up using vmalloc(), vmap()
and vmalloc() again, for a total of up to 3 separate allocations in the
worst case for a single module. We always kernel_read*() the module, and
that uses vmalloc(). Then vmap() is used for module decompression, and
if decompression was used the read buffer is freed as we use the now
decompressed module buffer to stuff data into our copy of the module.
The last allocation is architecture specific, but it is generally a
series of vmalloc() calls or a variation of vmalloc() to handle ELF
sections with special permissions.

Evaluation with the new stress-ng module support [1] with just 100 ops
shows that you can easily end up using GiBs of data even with all the
care we have in the kernel and userspace today in trying to not load
modules which are already loaded. 100 ops seems to resemble the sort of
pressure a system with about 400 CPUs can create on module loading.
Although the duplicate module requests caused by each CPU incurring a
new module request are silly, and some of these are being fixed, we
currently lack proper tooling to easily diagnose what happened, when it
happened and who is likely to blame -- userspace or kernel module
autoloading.

Provide an initial set of stats which use debugfs to let us easily
scrape post-boot information about failed loads. This sort of
information can be used on production workloads to try to optimize
*avoiding* redundant memory pressure from finit_module().

There are a few examples that can be provided:

A 255 vCPU system without the next patch in this series applied:

Startup finished in 19.143s (kernel) + 7.078s (userspace) = 26.221s
graphical.target reached after 6.988s in userspace

And 13.58 GiB of virtual memory space lost due to failed module loading:

root@big ~ # cat /sys/kernel/debug/modules/stats
         Mods ever loaded       67
     Mods failed on kread       0
Mods failed on decompress       0
  Mods failed on becoming       0
      Mods failed on load       1411
        Total module size       11464704
      Total mod text size       4194304
       Failed kread bytes       0
  Failed decompress bytes       0
    Failed becoming bytes       0
        Failed kmod bytes       14588526272
 Virtual mem wasted bytes       14588526272
         Average mod size       171115
    Average mod text size       62602
  Average fail load bytes       10339140
Duplicate failed modules:
              module-name        How-many-times                    Reason
                kvm_intel                   249                      Load
                      kvm                   249                      Load
                irqbypass                     8                      Load
         crct10dif_pclmul                   128                      Load
      ghash_clmulni_intel                    27                      Load
             sha512_ssse3                    50                      Load
           sha512_generic                   200                      Load
              aesni_intel                   249                      Load
              crypto_simd                    41                      Load
                   cryptd                   131                      Load
                    evdev                     2                      Load
                serio_raw                     1                      Load
               virtio_pci                     3                      Load
                     nvme                     3                      Load
                nvme_core                     3                      Load
    virtio_pci_legacy_dev                     3                      Load
    virtio_pci_modern_dev                     3                      Load
                   t10_pi                     3                      Load
                   virtio                     3                      Load
             crc32_pclmul                     6                      Load
           crc64_rocksoft                     3                      Load
             crc32c_intel                    40                      Load
              virtio_ring                     3                      Load
                    crc64                     3                      Load

The following output, from a simple 8 vCPU 8 GiB KVM guest with the
next patch in this series applied, shows 226.53 MiB wasted in virtual
memory allocations due to duplicate module requests during boot.
It also shows an average module memory size of 167.10 KiB and an
average module .text + .init.text size of 61.13 KiB. The end shows all
modules which were detected as duplicate requests and whether they
failed early, after just the first kernel_read*() call, or late, after
we've already allocated the private space for the module in
layout_and_allocate(). A system with module decompression would reveal
more wasted virtual memory space.

We should put effort now into identifying the source of these duplicate
module requests and trimming them down as much as possible. Larger
systems will obviously show much more wasted virtual memory allocations.

root@kmod ~ # cat /sys/kernel/debug/modules/stats
         Mods ever loaded       67
     Mods failed on kread       0
Mods failed on decompress       0
  Mods failed on becoming       83
      Mods failed on load       16
        Total module size       11464704
      Total mod text size       4194304
       Failed kread bytes       0
  Failed decompress bytes       0
    Failed becoming bytes       228959096
        Failed kmod bytes       8578080
 Virtual mem wasted bytes       237537176
         Average mod size       171115
    Average mod text size       62602
  Avg fail becoming bytes       2758544
  Average fail load bytes       536130
Duplicate failed modules:
              module-name        How-many-times                    Reason
                kvm_intel                     7                  Becoming
                      kvm                     7                  Becoming
                irqbypass                     6           Becoming & Load
         crct10dif_pclmul                     7           Becoming & Load
      ghash_clmulni_intel                     7           Becoming & Load
             sha512_ssse3                     6           Becoming & Load
           sha512_generic                     7           Becoming & Load
              aesni_intel                     7                  Becoming
              crypto_simd                     7           Becoming & Load
                   cryptd                     3           Becoming & Load
                    evdev                     1                  Becoming
                serio_raw                     1                  Becoming
                     nvme                     3                  Becoming
                nvme_core                     3                  Becoming
                   t10_pi                     3                  Becoming
               virtio_pci                     3                  Becoming
             crc32_pclmul                     6           Becoming & Load
           crc64_rocksoft                     3                  Becoming
             crc32c_intel                     3                  Becoming
    virtio_pci_modern_dev                     2                  Becoming
    virtio_pci_legacy_dev                     1                  Becoming
                    crc64                     2                  Becoming
                   virtio                     2                  Becoming
              virtio_ring                     2                  Becoming

[0] https://github.com/ColinIanKing/stress-ng.git
[1] echo 0 > /proc/sys/vm/oom_dump_tasks
    ./stress-ng --module 100 --module-name xfs

Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
---

Only sending a v4 for this patch. All fixes were suggested by
Petr. Changes in this v4 for this patch:

  o simplify try_add_failed_module() with strcmp
  o fix indentation on try_add_failed_module()
  o simplify max bytes for debugfs
  o Module-name for debug print
  o add MODULE_DEBUGFS and make stats / taint tracking select it 
  o use extern for debugfs root

I'd hope we can move on with fixes using modules-next now.
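
For quick checks, each counter is also exposed as its own debugfs file
next to the aggregate "stats" file, e.g. (as root, assuming debugfs is
mounted at /sys/kernel/debug):

  grep . /sys/kernel/debug/modules/failed_load_modules \
         /sys/kernel/debug/modules/invalid_mod_bytes \
         /sys/kernel/debug/modules/total_mod_size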

 Documentation/core-api/kernel-api.rst |  22 +-
 kernel/module/Kconfig                 |  41 ++-
 kernel/module/Makefile                |   1 +
 kernel/module/decompress.c            |   4 +
 kernel/module/internal.h              |  78 +++++
 kernel/module/main.c                  |  65 +++-
 kernel/module/stats.c                 | 430 ++++++++++++++++++++++++++
 kernel/module/tracking.c              |   7 +-
 8 files changed, 635 insertions(+), 13 deletions(-)
 create mode 100644 kernel/module/stats.c

diff --git a/Documentation/core-api/kernel-api.rst b/Documentation/core-api/kernel-api.rst
index e27728596008..9b3f3e5f5a95 100644
--- a/Documentation/core-api/kernel-api.rst
+++ b/Documentation/core-api/kernel-api.rst
@@ -220,12 +220,30 @@ relay interface
 Module Support
 ==============
 
-Module Loading
---------------
+Kernel module auto-loading
+--------------------------
 
 .. kernel-doc:: kernel/module/kmod.c
    :export:
 
+Module debugging
+----------------
+
+.. kernel-doc:: kernel/module/stats.c
+   :doc: module debugging statistics overview
+
+dup_failed_modules - tracks duplicate failed modules
+****************************************************
+
+.. kernel-doc:: kernel/module/stats.c
+   :doc: dup_failed_modules - tracks duplicate failed modules
+
+module statistics debugfs counters
+**********************************
+
+.. kernel-doc:: kernel/module/stats.c
+   :doc: module statistics debugfs counters
+
 Inter Module support
 --------------------
 
diff --git a/kernel/module/Kconfig b/kernel/module/Kconfig
index 424b3bc58f3f..e6df183e2c80 100644
--- a/kernel/module/Kconfig
+++ b/kernel/module/Kconfig
@@ -22,6 +22,45 @@ menuconfig MODULES
 
 if MODULES
 
+config MODULE_DEBUGFS
+	bool
+
+config MODULE_DEBUG
+	bool "Module debugging"
+	depends on DEBUG_FS
+	help
+	  Allows you to enable / disable features which can help you debug
+	  modules. You don't need these options on production systems.
+
+if MODULE_DEBUG
+
+config MODULE_STATS
+	bool "Module statistics"
+	depends on DEBUG_FS
+	select MODULE_DEBUGFS
+	help
+	  This option allows you to maintain a record of module statistics.
+	  For example, size of all modules, average size, text size, a list
+	  of failed modules and the size for each of those. For failed
+	  modules we keep track of modules which failed due to either an
+	  existing module taking too long to load or the module already
+	  being loaded.
+
+	  You should enable this if you are debugging production loads
+	  and want to see if userspace or the kernel is doing stupid things
+	  with loading modules when it shouldn't or if you want to help
+	  optimize userspace / kernel space module autoloading schemes.
+	  You might want to do this because failed modules tend to use
+	  up a significant amount of memory, and so you'd be doing everyone a
+	  favor in avoiding these failures proactively.
+
+	  This functionality is also useful for those experimenting with
+	  module .text ELF section optimization.
+
+	  If unsure, say N.
+
+endif # MODULE_DEBUG
+
 config MODULE_FORCE_LOAD
 	bool "Forced module loading"
 	default n
@@ -51,7 +90,7 @@ config MODULE_FORCE_UNLOAD
 config MODULE_UNLOAD_TAINT_TRACKING
 	bool "Tainted module unload tracking"
 	depends on MODULE_UNLOAD
-	default n
+	select MODULE_DEBUGFS
 	help
 	  This option allows you to maintain a record of each unloaded
 	  module that tainted the kernel. In addition to displaying a
diff --git a/kernel/module/Makefile b/kernel/module/Makefile
index 5b1d26b53b8d..52340bce497e 100644
--- a/kernel/module/Makefile
+++ b/kernel/module/Makefile
@@ -21,3 +21,4 @@ obj-$(CONFIG_SYSFS) += sysfs.o
 obj-$(CONFIG_KGDB_KDB) += kdb.o
 obj-$(CONFIG_MODVERSIONS) += version.o
 obj-$(CONFIG_MODULE_UNLOAD_TAINT_TRACKING) += tracking.o
+obj-$(CONFIG_MODULE_STATS) += stats.o
diff --git a/kernel/module/decompress.c b/kernel/module/decompress.c
index 7ddc87bee274..e97232b125eb 100644
--- a/kernel/module/decompress.c
+++ b/kernel/module/decompress.c
@@ -297,6 +297,10 @@ int module_decompress(struct load_info *info, const void *buf, size_t size)
 	ssize_t data_size;
 	int error;
 
+#if defined(CONFIG_MODULE_STATS)
+	info->compressed_len = size;
+#endif
+
 	/*
 	 * Start with number of pages twice as big as needed for
 	 * compressed data.
diff --git a/kernel/module/internal.h b/kernel/module/internal.h
index 6ae29bb8836f..1fd75dd346dc 100644
--- a/kernel/module/internal.h
+++ b/kernel/module/internal.h
@@ -59,6 +59,9 @@ struct load_info {
 	unsigned long mod_kallsyms_init_off;
 #endif
 #ifdef CONFIG_MODULE_DECOMPRESS
+#ifdef CONFIG_MODULE_STATS
+	unsigned long compressed_len;
+#endif
 	struct page **pages;
 	unsigned int max_pages;
 	unsigned int used_pages;
@@ -143,6 +146,81 @@ static inline bool set_livepatch_module(struct module *mod)
 #endif
 }
 
+/**
+ * enum fail_dup_mod_reason - state at which a duplicate module was detected
+ *
+ * @FAIL_DUP_MOD_BECOMING: the module is read properly, passes all checks but
+ * 	we've determined that another module with the same name is already loaded
+ * 	or being processed on our &modules list. This happens on early_mod_check()
+ * 	right before layout_and_allocate(). The kernel would have already
+ * 	vmalloc()'d space for the entire module through finit_module(). If
+ * 	decompression was used two vmap() spaces were used. These failures can
+ * 	happen when userspace has not seen the module present on the kernel and
+ * 	tries to load the module multiple times at same time.
+ * @FAIL_DUP_MOD_LOAD: the module has been read properly, passes all validation
+ *	checks and the kernel determines that the module was unique and because
+ *	of this allocated yet another private kernel copy of the module space in
+ *	layout_and_allocate() but after this determined in add_unformed_module()
+ *	that another module with the same name is already loaded or being processed.
+ *	These failures should be mitigated as much as possible and are indicative
+ *	of really fast races in loading modules. Without module decompression
+ *	they waste twice as much vmap space. With module decompression three
+ *	times the module's size vmap space is wasted.
+ */
+enum fail_dup_mod_reason {
+	FAIL_DUP_MOD_BECOMING = 0,
+	FAIL_DUP_MOD_LOAD,
+};
+
+#ifdef CONFIG_MODULE_DEBUGFS
+extern struct dentry *mod_debugfs_root;
+#endif
+
+#ifdef CONFIG_MODULE_STATS
+
+#define mod_stat_add_long(count, var) atomic_long_add(count, var)
+#define mod_stat_inc(name) atomic_inc(name)
+
+extern atomic_long_t total_mod_size;
+extern atomic_long_t total_text_size;
+extern atomic_long_t invalid_kread_bytes;
+extern atomic_long_t invalid_decompress_bytes;
+
+extern atomic_t modcount;
+extern atomic_t failed_kreads;
+extern atomic_t failed_decompress;
+struct mod_fail_load {
+	struct list_head list;
+	char name[MODULE_NAME_LEN];
+	atomic_long_t count;
+	unsigned long dup_fail_mask;
+};
+
+int try_add_failed_module(const char *name, enum fail_dup_mod_reason reason);
+void mod_stat_bump_invalid(struct load_info *info, int flags);
+void mod_stat_bump_becoming(struct load_info *info, int flags);
+
+#else
+
+#define mod_stat_add_long(name, var)
+#define mod_stat_inc(name)
+
+static inline int try_add_failed_module(const char *name,
+					enum fail_dup_mod_reason reason)
+{
+	return 0;
+}
+
+static inline void mod_stat_bump_invalid(struct load_info *info, int flags)
+{
+}
+
+static inline void mod_stat_bump_becoming(struct load_info *info, int flags)
+{
+}
+
+#endif /* CONFIG_MODULE_STATS */
+
 #ifdef CONFIG_MODULE_UNLOAD_TAINT_TRACKING
 struct mod_unload_taint {
 	struct list_head list;
diff --git a/kernel/module/main.c b/kernel/module/main.c
index 75b23257128d..01fffa8afef2 100644
--- a/kernel/module/main.c
+++ b/kernel/module/main.c
@@ -56,6 +56,7 @@
 #include <linux/dynamic_debug.h>
 #include <linux/audit.h>
 #include <linux/cfi.h>
+#include <linux/debugfs.h>
 #include <uapi/linux/module.h>
 #include "internal.h"
 
@@ -2500,6 +2501,18 @@ static noinline int do_init_module(struct module *mod)
 {
 	int ret = 0;
 	struct mod_initfree *freeinit;
+#if defined(CONFIG_MODULE_STATS)
+	unsigned int text_size = 0, total_size = 0;
+
+	for_each_mod_mem_type(type) {
+		const struct module_memory *mod_mem = &mod->mem[type];
+		if (mod_mem->size) {
+			total_size += mod_mem->size;
+			if (type == MOD_TEXT || type == MOD_INIT_TEXT)
+				text_size += mod_mem->size;
+		}
+	}
+#endif
 
 	freeinit = kmalloc(sizeof(*freeinit), GFP_KERNEL);
 	if (!freeinit) {
@@ -2561,6 +2574,7 @@ static noinline int do_init_module(struct module *mod)
 		mod->mem[type].base = NULL;
 		mod->mem[type].size = 0;
 	}
+
 #ifdef CONFIG_DEBUG_INFO_BTF_MODULES
 	/* .BTF is not SHF_ALLOC and will get removed, so sanitize pointer */
 	mod->btf_data = NULL;
@@ -2584,6 +2598,11 @@ static noinline int do_init_module(struct module *mod)
 	mutex_unlock(&module_mutex);
 	wake_up_all(&module_wq);
 
+	mod_stat_add_long(text_size, &total_text_size);
+	mod_stat_add_long(total_size, &total_mod_size);
+
+	mod_stat_inc(&modcount);
+
 	return 0;
 
 fail_free_freeinit:
@@ -2599,6 +2618,7 @@ static noinline int do_init_module(struct module *mod)
 	ftrace_release_mod(mod);
 	free_module(mod);
 	wake_up_all(&module_wq);
+
 	return ret;
 }
 
@@ -2632,7 +2652,8 @@ static bool finished_loading(const char *name)
 }
 
 /* Must be called with module_mutex held */
-static int module_patient_check_exists(const char *name)
+static int module_patient_check_exists(const char *name,
+				       enum fail_dup_mod_reason reason)
 {
 	struct module *old;
 	int err = 0;
@@ -2655,6 +2676,9 @@ static int module_patient_check_exists(const char *name)
 		old = find_module_all(name, strlen(name), true);
 	}
 
+	if (try_add_failed_module(name, reason))
+		pr_warn("Could not add fail-tracking for module: %s\n", name);
+
 	/*
 	 * We are here only when the same module was being loaded. Do
 	 * not try to load it again right now. It prevents long delays
@@ -2679,7 +2703,7 @@ static int add_unformed_module(struct module *mod)
 	mod->state = MODULE_STATE_UNFORMED;
 
 	mutex_lock(&module_mutex);
-	err = module_patient_check_exists(mod->name);
+	err = module_patient_check_exists(mod->name, FAIL_DUP_MOD_LOAD);
 	if (err)
 		goto out;
 
@@ -2800,6 +2824,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
 		       int flags)
 {
 	struct module *mod;
+	bool module_allocated = false;
 	long err = 0;
 	char *after_dashes;
 
@@ -2839,6 +2864,8 @@ static int load_module(struct load_info *info, const char __user *uargs,
 		goto free_copy;
 	}
 
+	module_allocated = true;
+
 	audit_log_kern_module(mod->name);
 
 	/* Reserve our place in the list. */
@@ -2983,6 +3010,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
 	synchronize_rcu();
 	mutex_unlock(&module_mutex);
  free_module:
+	mod_stat_bump_invalid(info, flags);
 	/* Free lock-classes; relies on the preceding sync_rcu() */
 	for_class_mod_mem_type(type, core_data) {
 		lockdep_free_key_range(mod->mem[type].base,
@@ -2991,6 +3019,13 @@ static int load_module(struct load_info *info, const char __user *uargs,
 
 	module_deallocate(mod, info);
  free_copy:
+	/*
+	 * The info->len is always set. We distinguish between
+	 * failures once the proper module was allocated and
+	 * before that.
+	 */
+	if (!module_allocated)
+		mod_stat_bump_becoming(info, flags);
 	free_copy(info, flags);
 	return err;
 }
@@ -3009,8 +3044,11 @@ SYSCALL_DEFINE3(init_module, void __user *, umod,
 	       umod, len, uargs);
 
 	err = copy_module_from_user(umod, len, &info);
-	if (err)
+	if (err) {
+		mod_stat_inc(&failed_kreads);
+		mod_stat_add_long(len, &invalid_kread_bytes);
 		return err;
+	}
 
 	return load_module(&info, uargs, 0);
 }
@@ -3035,14 +3073,20 @@ SYSCALL_DEFINE3(finit_module, int, fd, const char __user *, uargs, int, flags)
 
 	len = kernel_read_file_from_fd(fd, 0, &buf, INT_MAX, NULL,
 				       READING_MODULE);
-	if (len < 0)
+	if (len < 0) {
+		mod_stat_inc(&failed_kreads);
+		mod_stat_add_long(len, &invalid_kread_bytes);
 		return len;
+	}
 
 	if (flags & MODULE_INIT_COMPRESSED_FILE) {
 		err = module_decompress(&info, buf, len);
 		vfree(buf); /* compressed data is no longer needed */
-		if (err)
+		if (err) {
+			mod_stat_inc(&failed_decompress);
+			mod_stat_add_long(len, &invalid_decompress_bytes);
 			return err;
+		}
 	} else {
 		info.hdr = buf;
 		info.len = len;
@@ -3216,3 +3260,14 @@ void print_modules(void)
 			last_unloaded_module.taints);
 	pr_cont("\n");
 }
+
+#ifdef CONFIG_MODULE_DEBUGFS
+struct dentry *mod_debugfs_root;
+
+static int module_debugfs_init(void)
+{
+	mod_debugfs_root = debugfs_create_dir("modules", NULL);
+	return 0;
+}
+module_init(module_debugfs_init);
+#endif
diff --git a/kernel/module/stats.c b/kernel/module/stats.c
new file mode 100644
index 000000000000..3d45744b3920
--- /dev/null
+++ b/kernel/module/stats.c
@@ -0,0 +1,430 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Debugging module statistics.
+ *
+ * Copyright (C) 2023 Luis Chamberlain <mcgrof@kernel.org>
+ */
+
+#include <linux/module.h>
+#include <linux/string.h>
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/debugfs.h>
+#include <linux/rculist.h>
+#include <linux/math.h>
+
+#include "internal.h"
+
+/**
+ * DOC: module debugging statistics overview
+ *
+ * Enabling CONFIG_MODULE_STATS enables module debugging statistics which
+ * are useful to monitor and root cause memory pressure issues with module
+ * loading. These statistics are useful to allow us to improve production
+ * workloads.
+ *
+ * The current module debugging statistics supported help keep track of module
+ * loading failures to enable improvements either for kernel module auto-loading
+ * usage (request_module()) or interactions with userspace. Statistics are
+ * provided to track all possible failures in the finit_module() path and memory
+ * wasted in the process.  Each of the failure counters is associated
+ * with a type of module loading failure which is known to incur a certain amount
+ * of memory allocation loss. In the worst case loading a module will fail after
+ * a 3 step memory allocation process:
+ *
+ *   a) memory allocated with kernel_read_file_from_fd()
+ *   b) module decompression processes the file read from
+ *      kernel_read_file_from_fd(), and vmap() is used to map
+ *      the decompressed module to a new local buffer which represents
+ *      a copy of the decompressed module passed from userspace. The buffer
+ *      from kernel_read_file_from_fd() is freed right away.
+ *   c) layout_and_allocate() allocates space for the final resting
+ *      place where we would keep the module if it were to be processed
+ *      successfully.
+ *
+ * If a failure occurs after these three different allocations, only one
+ * counter will be incremented with the sum of the allocated bytes freed
+ * during this failure. Likewise, if module loading failed only after
+ * step b) a separate counter is used and incremented for the bytes freed and
+ * not used during both of those allocations.
+ *
+ * Virtual memory space can be limited, for example on x86 virtual memory size
+ * defaults to 128 MiB. We should strive to limit and avoid wasting virtual
+ * memory allocations when possible. These module debugging statistics help
+ * to evaluate how much memory is being wasted on bootup due to module loading
+ * failures.
+ *
+ * All counters are designed to be incremental. Atomic counters are used to
+ * remain simple and avoid delays and deadlocks.
+ */
+
+/**
+ * DOC: dup_failed_modules - tracks duplicate failed modules
+ *
+ * Linked list of modules which failed to be loaded because an already existing
+ * module with the same name was already being processed or already loaded.
+ * The finit_module() system call incurs heavy virtual memory allocations. In
+ * the worst case an finit_module() system call can end up allocating virtual
+ * memory 3 times:
+ *
+ *   1) kernel_read_file_from_fd() call uses vmalloc()
+ *   2) optional module decompression uses vmap()
+ *   3) layout_and_allocate() can use vzalloc() or an arch specific variation of
+ *      vmalloc() to deal with ELF sections requiring special permissions
+ *
+ * In practice on a typical boot today most finit_module() calls fail due to
+ * the module with the same name already being loaded or about to be processed.
+ * All virtual memory allocated to these failed modules will be freed with
+ * no functional use.
+ *
+ * To help with this, the dup_failed_modules list allows us to track modules which
+ * failed to load due to the fact that a module was already loaded or being
+ * processed.  There are only two points at which we can fail such calls,
+ * we list them below along with the number of virtual memory allocation
+ * calls:
+ *
+ *   a) FAIL_DUP_MOD_BECOMING: at the end of early_mod_check() before
+ *	layout_and_allocate(). This does not yet happen.
+ *	- with module decompression: 2 virtual memory allocation calls
+ *	- without module decompression: 1 virtual memory allocation call
+ *   b) FAIL_DUP_MOD_LOAD: after layout_and_allocate() on add_unformed_module()
+ *   	- with module decompression 3 virtual memory allocation calls
+ *   	- without module decompression 2 virtual memory allocation calls
+ *
+ * We should strive to get this list to be as small as possible. If this list
+ * is not empty it is a reflection of possible work or optimizations possible
+ * either in-kernel or in userspace.
+ */
+static LIST_HEAD(dup_failed_modules);
+
+/**
+ * DOC: module statistics debugfs counters
+ *
+ * The total amount of wasted virtual memory allocation space during module
+ * loading can be computed by adding the total from the summation:
+ *
+ *   * @invalid_kread_bytes +
+ *     @invalid_decompress_bytes +
+ *     @invalid_becoming_bytes +
+ *     @invalid_mod_bytes
+ *
+ * The following debugfs counters are available to inspect module loading
+ * failures:
+ *
+ *   * total_mod_size: total bytes ever used by all modules we've dealt with on
+ *     this system
+ *   * total_text_size: total bytes of the .text and .init.text ELF section
+ *     sizes we've dealt with on this system
+ *   * invalid_kread_bytes: bytes allocated and then freed on failures which
+ *     happen due to the initial kernel_read_file_from_fd(). kernel_read_file_from_fd()
+ *     uses vmalloc(). These should typically not happen unless your system is
+ *     under memory pressure.
+ *   * invalid_decompress_bytes: number of bytes allocated and freed due to
+ *     memory allocations in the module decompression path that use vmap().
+ *     These typically should not happen unless your system is under memory
+ *     pressure.
+ *   * invalid_becoming_bytes: total number of bytes allocated and freed,
+ *     used to read the kernel module userspace wants us to read before we
+ *     promote it to be processed and added to our @modules linked list.
+ *     These failures could in theory happen if we had a check in
+ *     between a successful kernel_read_file_from_fd()
+ *     call and right before we allocate our private memory for the module
+ *     which would be kept if the module is successfully loaded. The most common
+ *     reason for this failure is when userspace is racing to load a module
+ *     which it does not yet see loaded. The first module to succeed in
+ *     add_unformed_module() will add a module to our &modules list and
+ *     subsequent loads of modules with the same name will error out at the
+ *     end of early_mod_check(). A check for module_patient_check_exists()
+ *     at the end of early_mod_check() could be added to prevent duplicate allocations
+ *     on layout_and_allocate() for modules already being processed. These
+ *     duplicate failed modules are non-fatal, however they typically are
+ *     indicative of userspace not seeing a module in userspace loaded yet and
+ *     unnecessarily trying to load a module before the kernel even has a chance
+ *     to begin to process prior requests. Although duplicate failures can be
+ *     non-fatal, we should try to reduce vmalloc() pressure proactively, so
+ *     ideally after boot this will be as close to 0 as possible.  If module
+ *     decompression was used we also add to this counter the cost of the
+ *     initial kernel_read_file_from_fd() of the compressed module. If module
+ *     decompression was not used the value represents the total allocated and
+ *     freed bytes in kernel_read_file_from_fd() calls for these type of
+ *     failures. These failures can occur because:
+ *
+ *    * module_sig_check() - module signature checks
+ *    * elf_validity_cache_copy() - some ELF validation issue
+ *    * early_mod_check():
+ *
+ *      * blacklisting
+ *      * failed to rewrite section headers
+ *      * version magic
+ *      * live patch requirements didn't check out
+ *      * the module was detected as being already present
+ *
+ *   * invalid_mod_bytes: these are the total number of bytes allocated and
+ *     freed due to failures after we did all the sanity checks of the module
+ *     which userspace passed to us and after our first check that the module
+ *     is unique.  A module can still fail to load if we detect the module is
+ *     loaded after we allocate space for it with layout_and_allocate(); we do
+ *     this check right before processing the module as live and running its
+ *     initialization routines. Note that if you have a failure of this type it
+ *     also means the respective kernel_read_file_from_fd() memory space was
+ *     also freed and not used, and so we increment this counter with twice
+ *     the size of the module. Additionally if you used module decompression
+ *     the size of the compressed module is also added to this counter.
+ *
+ *  * modcount: how many modules we've loaded in our kernel life time
+ *  * failed_kreads: how many modules failed due to failed kernel_read_file_from_fd()
+ *  * failed_decompress: how many failed module decompression attempts we've had.
+ *    These really should not happen unless your compression / decompression
+ *    is broken.
+ *  * failed_becoming: how many modules failed after we kernel_read_file_from_fd()
+ *    it and before we allocate memory for it with layout_and_allocate(). This
+ *    counter is never incremented if you manage to validate the module and
+ *    call layout_and_allocate() for it.
+ *  * failed_load_modules: how many modules failed once we've allocated our
+ *    private space for our module using layout_and_allocate(). These failures
+ *    should hopefully mostly be dealt with already. Races in theory could
+ *    still exist here, but it would just mean the kernel had started processing
+ *    two threads concurrently up to early_mod_check() and one thread won.
+ *    These failures are good signs the kernel or userspace is doing something
+ *    seriously stupid or that could be improved. We should strive to fix these,
+ *    but it is perhaps not easy to fix them. A recent example is the module
+ *    requests incurred for CPU frequency modules, where a separate module
+ *    request was being issued for each CPU on a system.
+ */
+
+atomic_long_t total_mod_size;
+atomic_long_t total_text_size;
+atomic_long_t invalid_kread_bytes;
+atomic_long_t invalid_decompress_bytes;
+static atomic_long_t invalid_becoming_bytes;
+static atomic_long_t invalid_mod_bytes;
+atomic_t modcount;
+atomic_t failed_kreads;
+atomic_t failed_decompress;
+static atomic_t failed_becoming;
+static atomic_t failed_load_modules;
+
+static const char *mod_fail_to_str(struct mod_fail_load *mod_fail)
+{
+	if (test_bit(FAIL_DUP_MOD_BECOMING, &mod_fail->dup_fail_mask) &&
+	    test_bit(FAIL_DUP_MOD_LOAD, &mod_fail->dup_fail_mask))
+		return "Becoming & Load";
+	if (test_bit(FAIL_DUP_MOD_BECOMING, &mod_fail->dup_fail_mask))
+		return "Becoming";
+	if (test_bit(FAIL_DUP_MOD_LOAD, &mod_fail->dup_fail_mask))
+		return "Load";
+	return "Bug-on-stats";
+}
+
+void mod_stat_bump_invalid(struct load_info *info, int flags)
+{
+	atomic_long_add(info->len * 2, &invalid_mod_bytes);
+	atomic_inc(&failed_load_modules);
+#if defined(CONFIG_MODULE_DECOMPRESS)
+	if (flags & MODULE_INIT_COMPRESSED_FILE)
+		atomic_long_add(info->compressed_len, &invalid_mod_bytes);
+#endif
+}
+
+void mod_stat_bump_becoming(struct load_info *info, int flags)
+{
+	atomic_inc(&failed_becoming);
+	atomic_long_add(info->len, &invalid_becoming_bytes);
+#if defined(CONFIG_MODULE_DECOMPRESS)
+	if (flags & MODULE_INIT_COMPRESSED_FILE)
+		atomic_long_add(info->compressed_len, &invalid_becoming_bytes);
+#endif
+}
+
+int try_add_failed_module(const char *name, enum fail_dup_mod_reason reason)
+{
+	struct mod_fail_load *mod_fail;
+
+	list_for_each_entry_rcu(mod_fail, &dup_failed_modules, list,
+				lockdep_is_held(&module_mutex)) {
+		if (!strcmp(mod_fail->name, name)) {
+			atomic_long_inc(&mod_fail->count);
+			__set_bit(reason, &mod_fail->dup_fail_mask);
+			goto out;
+		}
+	}
+
+	mod_fail = kzalloc(sizeof(*mod_fail), GFP_KERNEL);
+	if (!mod_fail)
+		return -ENOMEM;
+	memcpy(mod_fail->name, name, strlen(name));
+	__set_bit(reason, &mod_fail->dup_fail_mask);
+	atomic_long_inc(&mod_fail->count);
+	list_add_rcu(&mod_fail->list, &dup_failed_modules);
+out:
+	return 0;
+}
+
+/*
+ * At 64 bytes per module and assuming a 1024 bytes preamble we can fit the
+ * 112 module prints within 8k.
+ *
+ * 1024 + (64*112) = 8k
+ */
+#define MAX_PREAMBLE 1024
+#define MAX_FAILED_MOD_PRINT 112
+#define MAX_BYTES_PER_MOD 64
+static ssize_t read_file_mod_stats(struct file *file, char __user *user_buf,
+				   size_t count, loff_t *ppos)
+{
+	struct mod_fail_load *mod_fail;
+	unsigned int len, size, count_failed = 0;
+	char *buf;
+	u32 live_mod_count, fkreads, fdecompress, fbecoming, floads;
+	u64 total_size, text_size, ikread_bytes, ibecoming_bytes, idecompress_bytes, imod_bytes,
+	    total_virtual_lost;
+
+	live_mod_count = atomic_read(&modcount);
+	fkreads = atomic_read(&failed_kreads);
+	fdecompress = atomic_read(&failed_decompress);
+	fbecoming = atomic_read(&failed_becoming);
+	floads = atomic_read(&failed_load_modules);
+
+	total_size = atomic64_read(&total_mod_size);
+	text_size = atomic64_read(&total_text_size);
+	ikread_bytes = atomic64_read(&invalid_kread_bytes);
+	idecompress_bytes = atomic64_read(&invalid_decompress_bytes);
+	ibecoming_bytes = atomic64_read(&invalid_becoming_bytes);
+	imod_bytes = atomic64_read(&invalid_mod_bytes);
+
+	total_virtual_lost = ikread_bytes + idecompress_bytes + ibecoming_bytes + imod_bytes;
+
+	size = MAX_PREAMBLE + min((unsigned int)(floads + fbecoming),
+				  (unsigned int)MAX_FAILED_MOD_PRINT) * MAX_BYTES_PER_MOD;
+	buf = kzalloc(size, GFP_KERNEL);
+	if (buf == NULL)
+		return -ENOMEM;
+
+	/* The beginning of our debug preamble */
+	len = scnprintf(buf + 0, size, "%25s\t%u\n", "Mods ever loaded", live_mod_count);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%u\n", "Mods failed on kread", fkreads);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%u\n", "Mods failed on decompress",
+			 fdecompress);
+	len += scnprintf(buf + len, size - len, "%25s\t%u\n", "Mods failed on becoming", fbecoming);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%u\n", "Mods failed on load", floads);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Total module size", total_size);
+	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Total mod text size", text_size);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Failed kread bytes", ikread_bytes);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Failed decompress bytes",
+			 idecompress_bytes);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Failed becoming bytes", ibecoming_bytes);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Failed kmod bytes", imod_bytes);
+
+	len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Virtual mem wasted bytes", total_virtual_lost);
+
+	if (live_mod_count && total_size) {
+		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Average mod size",
+				 DIV_ROUND_UP(total_size, live_mod_count));
+	}
+
+	if (live_mod_count && text_size) {
+		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Average mod text size",
+				 DIV_ROUND_UP(text_size, live_mod_count));
+	}
+
+	/*
+	 * We use WARN_ON_ONCE() for the counters to ensure we always have parity
+	 * for keeping tabs on a type of failure with one type of byte counter.
+	 * The counters for imod_bytes does not increase for fkreads failures
+	 * for example, and so on.
+	 */
+
+	WARN_ON_ONCE(ikread_bytes && !fkreads);
+	if (fkreads && ikread_bytes) {
+		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Avg fail kread bytes",
+				 DIV_ROUND_UP(ikread_bytes, fkreads));
+	}
+
+	WARN_ON_ONCE(ibecoming_bytes && !fbecoming);
+	if (fbecoming && ibecoming_bytes) {
+		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Avg fail becoming bytes",
+				 DIV_ROUND_UP(ibecoming_bytes, fbecoming));
+	}
+
+	WARN_ON_ONCE(idecompress_bytes && !fdecompress);
+	if (fdecompress && idecompress_bytes) {
+		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Avg fail decomp bytes",
+				 DIV_ROUND_UP(idecompress_bytes, fdecompress));
+	}
+
+	WARN_ON_ONCE(imod_bytes && !floads);
+	if (floads && imod_bytes) {
+		len += scnprintf(buf + len, size - len, "%25s\t%llu\n", "Average fail load bytes",
+				 DIV_ROUND_UP(imod_bytes, floads));
+	}
+
+	/* End of our debug preamble header. */
+
+	/* Catch when we've gone beyond our expected preamble */
+	WARN_ON_ONCE(len >= MAX_PREAMBLE);
+
+	if (list_empty(&dup_failed_modules))
+		goto out;
+
+	len += scnprintf(buf + len, size - len, "Duplicate failed modules:\n");
+	len += scnprintf(buf + len, size - len, "%25s\t%15s\t%25s\n",
+			 "Module-name", "How-many-times", "Reason");
+	mutex_lock(&module_mutex);
+
+
+	list_for_each_entry_rcu(mod_fail, &dup_failed_modules, list) {
+		if (WARN_ON_ONCE(++count_failed >= MAX_FAILED_MOD_PRINT))
+			goto out_unlock;
+		len += scnprintf(buf + len, size - len, "%25s\t%15llu\t%25s\n", mod_fail->name,
+				 atomic64_read(&mod_fail->count), mod_fail_to_str(mod_fail));
+	}
+out_unlock:
+	mutex_unlock(&module_mutex);
+out:
+	kfree(buf);
+        return simple_read_from_buffer(user_buf, count, ppos, buf, len);
+}
+#undef MAX_PREAMBLE
+#undef MAX_FAILED_MOD_PRINT
+#undef MAX_BYTES_PER_MOD
+
+static const struct file_operations fops_mod_stats = {
+	.read = read_file_mod_stats,
+	.open = simple_open,
+	.owner = THIS_MODULE,
+	.llseek = default_llseek,
+};
+
+#define mod_debug_add_ulong(name) debugfs_create_ulong(#name, 0400, mod_debugfs_root, (unsigned long *) &name.counter)
+#define mod_debug_add_atomic(name) debugfs_create_atomic_t(#name, 0400, mod_debugfs_root, &name)
+static int __init module_stats_init(void)
+{
+	mod_debug_add_ulong(total_mod_size);
+	mod_debug_add_ulong(total_text_size);
+	mod_debug_add_ulong(invalid_kread_bytes);
+	mod_debug_add_ulong(invalid_decompress_bytes);
+	mod_debug_add_ulong(invalid_becoming_bytes);
+	mod_debug_add_ulong(invalid_mod_bytes);
+
+	mod_debug_add_atomic(modcount);
+	mod_debug_add_atomic(failed_kreads);
+	mod_debug_add_atomic(failed_decompress);
+	mod_debug_add_atomic(failed_becoming);
+	mod_debug_add_atomic(failed_load_modules);
+
+	debugfs_create_file("stats", 0400, mod_debugfs_root, mod_debugfs_root, &fops_mod_stats);
+
+	return 0;
+}
+#undef mod_debug_add_ulong
+#undef mod_debug_add_atomic
+module_init(module_stats_init);
diff --git a/kernel/module/tracking.c b/kernel/module/tracking.c
index 26d812e07615..16742d1c630c 100644
--- a/kernel/module/tracking.c
+++ b/kernel/module/tracking.c
@@ -15,6 +15,7 @@
 #include "internal.h"
 
 static LIST_HEAD(unloaded_tainted_modules);
+extern struct dentry *mod_debugfs_root;
 
 int try_add_tainted_module(struct module *mod)
 {
@@ -120,12 +121,8 @@ static const struct file_operations unloaded_tainted_modules_fops = {
 
 static int __init unloaded_tainted_modules_init(void)
 {
-	struct dentry *dir;
-
-	dir = debugfs_create_dir("modules", NULL);
-	debugfs_create_file("unloaded_tainted", 0444, dir, NULL,
+	debugfs_create_file("unloaded_tainted", 0444, mod_debugfs_root, NULL,
 			    &unloaded_tainted_modules_fops);
-
 	return 0;
 }
 module_init(unloaded_tainted_modules_init);
-- 
2.39.2



^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2023-04-18 18:37 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-14  5:08 [PATCH v3 0/4] module: avoid userspace pressure on unwanted allocations Luis Chamberlain
2023-04-14  5:08 ` [PATCH v3 1/4] module: fix kmemleak annotations for non init ELF sections Luis Chamberlain
2023-04-14 10:18   ` Catalin Marinas
2023-04-14  5:08 ` [PATCH v3 2/4] module: extract patient module check into helper Luis Chamberlain
2023-04-14  5:08 ` [PATCH v3 3/4] module: add debug stats to help identify memory pressure Luis Chamberlain
2023-04-17 11:18   ` Petr Pavlu
2023-04-18 18:30     ` Luis Chamberlain
2023-04-18 18:37   ` [PATCH v4] " Luis Chamberlain
2023-04-14  5:08 ` [PATCH v3 4/4] module: avoid allocation if module is already present and ready Luis Chamberlain

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox