Date: Tue, 4 Apr 2023 15:41:19 +0200
From: Carlos Maiolino <cem@kernel.org>
To: "Darrick J. Wong"
Cc: hughd@google.com, jack@suse.cz, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 5/6] shmem: quota support
Message-ID: <20230404134119.egxr4ypiwlbhwm7v@andromeda>
References: <20230403084759.884681-1-cem@kernel.org>
 <20230403084759.884681-6-cem@kernel.org>
 <20230403184625.GA379281@frogsfrogsfrogs>
In-Reply-To: <20230403184625.GA379281@frogsfrogsfrogs>

Hi.

> >         atomic_t stop_eviction; /* hold when working on inode */
> >         struct timespec64 i_crtime; /* file creation time */
> >         unsigned int fsflags; /* flags for FS_IOC_[SG]ETFLAGS */
> > +#ifdef CONFIG_TMPFS_QUOTA
> > +        struct dquot *i_dquot[MAXQUOTAS];
> 
> Why allocate three dquot pointers here...
> 
> > +#endif
> >         struct inode vfs_inode;
> > };
> > 
> > @@ -171,4 +174,10 @@ extern int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
> > #define SHMEM_QUOTA_MAX_SPC_LIMIT 0x7fffffffffffffffLL /* 2^63-1 */
> > #define SHMEM_QUOTA_MAX_INO_LIMIT 0x7fffffffffffffffLL
> > 
> > +#ifdef CONFIG_TMPFS_QUOTA
> > +#define SHMEM_MAXQUOTAS 2
> 
> ...when you're only allowing user and group quotas?

My bad, I should have used SHMEM_MAXQUOTAS to define the i_dquot array.

> (Or: Why not allow project quotas?  But that's outside the scope you
> defined.)

This is indeed in my plan, but for later: first I want to deal with the
'prevent users from consuming all memory' issue, and then add project
quota support here. I want to limit the scope of this series for now to
avoid it snowballing with more and more features.
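Regarding the i_dquot sizing above, a minimal, untested sketch of what I
have in mind for the next revision (it assumes SHMEM_MAXQUOTAS, currently
defined further down in shmem_fs.h, is made visible before the struct
definition):

	/* Untested sketch: size the array by the quota types tmpfs actually
	 * supports (user and group) instead of the generic MAXQUOTAS.
	 * Assumes SHMEM_MAXQUOTAS is already defined at this point. */
	#ifdef CONFIG_TMPFS_QUOTA
		struct dquot *i_dquot[SHMEM_MAXQUOTAS];
	#endif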
> 
> --D
> 
> > +extern const struct dquot_operations shmem_quota_operations;
> > +extern struct quota_format_type shmem_quota_format;
> > +#endif /* CONFIG_TMPFS_QUOTA */
> > +
> > #endif
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 88e13930fc013..d7529c883eaf5 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -79,6 +79,7 @@ static struct vfsmount *shm_mnt;
> > #include
> > #include
> > #include
> > +#include
> > 
> > #include
> > 
> > @@ -116,10 +117,12 @@ struct shmem_options {
> >         bool full_inums;
> >         int huge;
> >         int seen;
> > +        unsigned short quota_types;
> > #define SHMEM_SEEN_BLOCKS 1
> > #define SHMEM_SEEN_INODES 2
> > #define SHMEM_SEEN_HUGE 4
> > #define SHMEM_SEEN_INUMS 8
> > +#define SHMEM_SEEN_QUOTA 16
> > };
> > 
> > #ifdef CONFIG_TMPFS
> > @@ -211,8 +214,11 @@ static inline int shmem_inode_acct_block(struct inode *inode, long pages)
> >                 if (percpu_counter_compare(&sbinfo->used_blocks,
> >                                            sbinfo->max_blocks - pages) > 0)
> >                         goto unacct;
> > +                if ((err = dquot_alloc_block_nodirty(inode, pages)) != 0)
> > +                        goto unacct;
> >                 percpu_counter_add(&sbinfo->used_blocks, pages);
> > -        }
> > +        } else if ((err = dquot_alloc_block_nodirty(inode, pages)) != 0)
> > +                goto unacct;
> > 
> >         return 0;
> > 
> > @@ -226,6 +232,8 @@ static inline void shmem_inode_unacct_blocks(struct inode *inode, long pages)
> >         struct shmem_inode_info *info = SHMEM_I(inode);
> >         struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
> > 
> > +        dquot_free_block_nodirty(inode, pages);
> > +
> >         if (sbinfo->max_blocks)
> >                 percpu_counter_sub(&sbinfo->used_blocks, pages);
> >         shmem_unacct_blocks(info->flags, pages);
> > @@ -254,6 +262,47 @@ bool vma_is_shmem(struct vm_area_struct *vma)
> > static LIST_HEAD(shmem_swaplist);
> > static DEFINE_MUTEX(shmem_swaplist_mutex);
> > 
> > +#ifdef CONFIG_TMPFS_QUOTA
> > +
> > +static int shmem_enable_quotas(struct super_block *sb,
> > +                               unsigned short quota_types)
> > +{
> > +        int type, err = 0;
> > +
> > +        sb_dqopt(sb)->flags |= DQUOT_QUOTA_SYS_FILE | DQUOT_NOLIST_DIRTY;
> > +        for (type = 0; type < SHMEM_MAXQUOTAS; type++) {
> > +                if (!(quota_types & (1 << type)))
> > +                        continue;
> > +                err = dquot_load_quota_sb(sb, type, QFMT_SHMEM,
> > +                                          DQUOT_USAGE_ENABLED |
> > +                                          DQUOT_LIMITS_ENABLED);
> > +                if (err)
> > +                        goto out_err;
> > +        }
> > +        return 0;
> > +
> > +out_err:
> > +        pr_warn("tmpfs: failed to enable quota tracking (type=%d, err=%d)\n",
> > +                type, err);
> > +        for (type--; type >= 0; type--)
> > +                dquot_quota_off(sb, type);
> > +        return err;
> > +}
> > +
> > +static void shmem_disable_quotas(struct super_block *sb)
> > +{
> > +        int type;
> > +
> > +        for (type = 0; type < SHMEM_MAXQUOTAS; type++)
> > +                dquot_quota_off(sb, type);
> > +}
> > +
> > +static struct dquot **shmem_get_dquots(struct inode *inode)
> > +{
> > +        return SHMEM_I(inode)->i_dquot;
> > +}
> > +#endif /* CONFIG_TMPFS_QUOTA */
> > +
> > /*
> >  * shmem_reserve_inode() performs bookkeeping to reserve a shmem inode, and
> >  * produces a novel ino for the newly allocated inode.
> > @@ -360,7 +409,6 @@ static void shmem_recalc_inode(struct inode *inode)
> >         freed = info->alloced - info->swapped - inode->i_mapping->nrpages;
> >         if (freed > 0) {
> >                 info->alloced -= freed;
> > -                inode->i_blocks -= freed * BLOCKS_PER_PAGE;
> >                 shmem_inode_unacct_blocks(inode, freed);
> >         }
> > }
> > @@ -378,7 +426,6 @@ bool shmem_charge(struct inode *inode, long pages)
> > 
> >         spin_lock_irqsave(&info->lock, flags);
> >         info->alloced += pages;
> > -        inode->i_blocks += pages * BLOCKS_PER_PAGE;
> >         shmem_recalc_inode(inode);
> >         spin_unlock_irqrestore(&info->lock, flags);
> > 
> > @@ -394,7 +441,6 @@ void shmem_uncharge(struct inode *inode, long pages)
> > 
> >         spin_lock_irqsave(&info->lock, flags);
> >         info->alloced -= pages;
> > -        inode->i_blocks -= pages * BLOCKS_PER_PAGE;
> >         shmem_recalc_inode(inode);
> >         spin_unlock_irqrestore(&info->lock, flags);
> > 
> > @@ -1133,6 +1179,15 @@ static int shmem_setattr(struct mnt_idmap *idmap,
> >                 }
> >         }
> > 
> > +        /* Transfer quota accounting */
> > +        if (i_uid_needs_update(idmap, attr, inode) ||
> > +            i_gid_needs_update(idmap, attr,inode)) {
> > +                error = dquot_transfer(idmap, inode, attr);
> > +
> > +                if (error)
> > +                        return error;
> > +        }
> > +
> >         setattr_copy(idmap, inode, attr);
> >         if (attr->ia_valid & ATTR_MODE)
> >                 error = posix_acl_chmod(idmap, dentry, inode->i_mode);
> > @@ -1178,7 +1233,9 @@ static void shmem_evict_inode(struct inode *inode)
> >         simple_xattrs_free(&info->xattrs);
> >         WARN_ON(inode->i_blocks);
> >         shmem_free_inode(inode->i_sb);
> > +        dquot_free_inode(inode);
> >         clear_inode(inode);
> > +        dquot_drop(inode);
> > }
> > 
> > static int shmem_find_swap_entries(struct address_space *mapping,
> > @@ -1975,7 +2032,6 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
> > 
> >         spin_lock_irq(&info->lock);
> >         info->alloced += folio_nr_pages(folio);
> > -        inode->i_blocks += (blkcnt_t)BLOCKS_PER_PAGE << folio_order(folio);
> >         shmem_recalc_inode(inode);
> >         spin_unlock_irq(&info->lock);
> >         alloced = true;
> > @@ -2346,9 +2402,10 @@ static void shmem_set_inode_flags(struct inode *inode, unsigned int fsflags)
> > #define shmem_initxattrs NULL
> > #endif
> > 
> > -static struct inode *shmem_get_inode(struct mnt_idmap *idmap, struct super_block *sb,
> > -                                     struct inode *dir, umode_t mode, dev_t dev,
> > -                                     unsigned long flags)
> > +static struct inode *shmem_get_inode_noquota(struct mnt_idmap *idmap,
> > +                                             struct super_block *sb,
> > +                                             struct inode *dir, umode_t mode,
> > +                                             dev_t dev, unsigned long flags)
> > {
> >         struct inode *inode;
> >         struct shmem_inode_info *info;
> > @@ -2422,6 +2479,37 @@ static struct inode *shmem_get_inode(struct mnt_idmap *idmap, struct super_block
> >         return inode;
> > }
> > 
> > +static struct inode *shmem_get_inode(struct mnt_idmap *idmap,
> > +                                     struct super_block *sb, struct inode *dir,
> > +                                     umode_t mode, dev_t dev, unsigned long flags)
> > +{
> > +        int err;
> > +        struct inode *inode;
> > +
> > +        inode = shmem_get_inode_noquota(idmap, sb, dir, mode, dev, flags);
> > +        if (IS_ERR(inode))
> > +                return inode;
> > +
> > +        err = dquot_initialize(inode);
> > +        if (err)
> > +                goto errout;
> > +
> > +        err = dquot_alloc_inode(inode);
> > +        if (err) {
> > +                dquot_drop(inode);
> > +                goto errout;
> > +        }
> > +        return inode;
> > +
> > +errout:
> > +        inode->i_flags |= S_NOQUOTA;
> > +        iput(inode);
> > +        shmem_free_inode(sb);
> > +        if (err)
> > +                return ERR_PTR(err);
> > +        return NULL;
> > +}
> > +
> > #ifdef CONFIG_USERFAULTFD
> > int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
> >                            pmd_t *dst_pmd,
> > @@ -2525,7 +2613,6 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
> > 
> >         spin_lock_irq(&info->lock);
> >         info->alloced++;
> > -        inode->i_blocks += BLOCKS_PER_PAGE;
> >         shmem_recalc_inode(inode);
> >         spin_unlock_irq(&info->lock);
> > 
> > @@ -3372,6 +3459,7 @@ static ssize_t shmem_listxattr(struct dentry *dentry, char *buffer, size_t size)
> > 
> > static const struct inode_operations shmem_short_symlink_operations = {
> >         .getattr = shmem_getattr,
> > +        .setattr = shmem_setattr,
> >         .get_link = simple_get_link,
> > #ifdef CONFIG_TMPFS_XATTR
> >         .listxattr = shmem_listxattr,
> > @@ -3380,6 +3468,7 @@ static const struct inode_operations shmem_short_symlink_operations = {
> > 
> > static const struct inode_operations shmem_symlink_inode_operations = {
> >         .getattr = shmem_getattr,
> > +        .setattr = shmem_setattr,
> >         .get_link = shmem_get_link,
> > #ifdef CONFIG_TMPFS_XATTR
> >         .listxattr = shmem_listxattr,
> > @@ -3478,6 +3567,9 @@ enum shmem_param {
> >         Opt_uid,
> >         Opt_inode32,
> >         Opt_inode64,
> > +        Opt_quota,
> > +        Opt_usrquota,
> > +        Opt_grpquota,
> > };
> > 
> > static const struct constant_table shmem_param_enums_huge[] = {
> > @@ -3499,6 +3591,11 @@ const struct fs_parameter_spec shmem_fs_parameters[] = {
> >         fsparam_u32 ("uid", Opt_uid),
> >         fsparam_flag ("inode32", Opt_inode32),
> >         fsparam_flag ("inode64", Opt_inode64),
> > +#ifdef CONFIG_TMPFS_QUOTA
> > +        fsparam_flag ("quota", Opt_quota),
> > +        fsparam_flag ("usrquota", Opt_usrquota),
> > +        fsparam_flag ("grpquota", Opt_grpquota),
> > +#endif
> >         {}
> > };
> > 
> > @@ -3582,6 +3679,18 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
> >                 ctx->full_inums = true;
> >                 ctx->seen |= SHMEM_SEEN_INUMS;
> >                 break;
> > +        case Opt_quota:
> > +                ctx->seen |= SHMEM_SEEN_QUOTA;
> > +                ctx->quota_types |= (QTYPE_MASK_USR | QTYPE_MASK_GRP);
> > +                break;
> > +        case Opt_usrquota:
> > +                ctx->seen |= SHMEM_SEEN_QUOTA;
> > +                ctx->quota_types |= QTYPE_MASK_USR;
> > +                break;
> > +        case Opt_grpquota:
> > +                ctx->seen |= SHMEM_SEEN_QUOTA;
> > +                ctx->quota_types |= QTYPE_MASK_GRP;
> > +                break;
> >         }
> >         return 0;
> > 
> > @@ -3681,6 +3790,12 @@ static int shmem_reconfigure(struct fs_context *fc)
> >                 goto out;
> >         }
> > 
> > +        if (ctx->seen & SHMEM_SEEN_QUOTA &&
> > +            !sb_any_quota_loaded(fc->root->d_sb)) {
> > +                err = "Cannot enable quota on remount";
> > +                goto out;
> > +        }
> > +
> >         if (ctx->seen & SHMEM_SEEN_HUGE)
> >                 sbinfo->huge = ctx->huge;
> >         if (ctx->seen & SHMEM_SEEN_INUMS)
> > @@ -3763,6 +3878,9 @@ static void shmem_put_super(struct super_block *sb)
> > {
> >         struct shmem_sb_info *sbinfo = SHMEM_SB(sb);
> > 
> > +#ifdef CONFIG_TMPFS_QUOTA
> > +        shmem_disable_quotas(sb);
> > +#endif
> >         free_percpu(sbinfo->ino_batch);
> >         percpu_counter_destroy(&sbinfo->used_blocks);
> >         mpol_put(sbinfo->mpol);
> > @@ -3841,6 +3959,17 @@ static int shmem_fill_super(struct super_block *sb, struct fs_context *fc)
> > #endif
> >         uuid_gen(&sb->s_uuid);
> > 
> > +#ifdef CONFIG_TMPFS_QUOTA
> > +        if (ctx->seen & SHMEM_SEEN_QUOTA) {
> > +                sb->dq_op = &shmem_quota_operations;
> > +                sb->s_qcop = &dquot_quotactl_sysfile_ops;
> > +                sb->s_quota_types = QTYPE_MASK_USR | QTYPE_MASK_GRP;
> > +
> > +                if (shmem_enable_quotas(sb, ctx->quota_types))
> > +                        goto failed;
> > +        }
> > +#endif /* CONFIG_TMPFS_QUOTA */
> > +
> >         inode = shmem_get_inode(&nop_mnt_idmap, sb, NULL, S_IFDIR | sbinfo->mode, 0,
> >                                 VM_NORESERVE);
> >         if (IS_ERR(inode)) {
> > @@ -4016,6 +4145,9 @@ static const struct super_operations shmem_ops = {
> > #ifdef CONFIG_TMPFS
> >         .statfs = shmem_statfs,
> >         .show_options = shmem_show_options,
> > +#endif
> > +#ifdef CONFIG_TMPFS_QUOTA
> > +        .get_dquots = shmem_get_dquots,
> > #endif
> >         .evict_inode = shmem_evict_inode,
> >         .drop_inode = generic_delete_inode,
> > @@ -4082,6 +4214,14 @@ void __init shmem_init(void)
> > 
> >         shmem_init_inodecache();
> > 
> > +#ifdef CONFIG_TMPFS_QUOTA
> > +        error = register_quota_format(&shmem_quota_format);
> > +        if (error < 0) {
> > +                pr_err("Could not register quota format\n");
> > +                goto out3;
> > +        }
> > +#endif
> > +
> >         error = register_filesystem(&shmem_fs_type);
> >         if (error) {
> >                 pr_err("Could not register tmpfs\n");
> > @@ -4106,6 +4246,10 @@ void __init shmem_init(void)
> > out1:
> >         unregister_filesystem(&shmem_fs_type);
> > out2:
> > +#ifdef CONFIG_TMPFS_QUOTA
> > +        unregister_quota_format(&shmem_quota_format);
> > +#endif
> > +out3:
> >         shmem_destroy_inodecache();
> >         shm_mnt = ERR_PTR(error);
> > }
> > --
> > 2.30.2
> > 

-- 
Carlos Maiolino