From: Kent Overstreet <kent.overstreet@linux.dev>
To: Eric Biggers
Cc: Lorenzo Stoakes, Christoph Hellwig, linux-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-bcachefs@vger.kernel.org,
    Kent Overstreet, Andrew Morton, Uladzislau Rezki, linux-mm@kvack.org
Subject: Re: [PATCH 07/32] mm: Bring back vmalloc_exec
Date: Sun, 14 May 2023 01:45:29 -0400
References: <20230509165657.1735798-1-kent.overstreet@linux.dev>
 <20230509165657.1735798-8-kent.overstreet@linux.dev>
 <20230510064849.GC1851@quark.localdomain>
 <20230513015752.GC3033@quark.localdomain>
In-Reply-To: <20230513015752.GC3033@quark.localdomain>

On Fri, May 12, 2023 at 06:57:52PM -0700, Eric Biggers wrote:
> First, I wanted to mention that decoding of variable-length fields has been
> extensively studied for decompression algorithms, e.g. for Huffman decoding.
> And it turns out that it can be done branchlessly. The basic idea is that you
> have a branchless refill step that looks like the following:
>
> #define REFILL_BITS_BRANCHLESS()                    \
>         bitbuf |= get_unaligned_u64(p) << bitsleft; \
>         p += 7 - ((bitsleft >> 3) & 0x7);           \
>         bitsleft |= 56;
>
> That branchlessly ensures that 'bitbuf' contains '56 <= bitsleft <= 63' bits.
> Then, the needed number of bits can be removed and returned:
>
> #define READ_BITS(n)                            \
>         REFILL_BITS_BRANCHLESS();               \
>         tmp = bitbuf & (((u64)1 << (n)) - 1);   \
>         bitbuf >>= (n);                         \
>         bitsleft -= (n);                        \
>         tmp
>
> If you're interested, I can give you some references about the above method.

I might be interested in those references - new bit tricks and integer
encodings are always fun :)
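To make sure I follow, the two macros compose into a reader along these
lines (a standalone sketch, not code from any actual tree;
get_unaligned_u64() here is just a memcpy-based little-endian load):

	#include <stdint.h>
	#include <string.h>

	static uint64_t get_unaligned_u64(const uint8_t *p)
	{
		uint64_t v;

		memcpy(&v, p, sizeof(v));	/* assumes little endian */
		return v;
	}

	/* Return the next n bits of the stream, n <= 56 */
	static uint64_t read_bits(const uint8_t **pp, uint64_t *bitbuf,
				  unsigned *bitsleft, unsigned n)
	{
		uint64_t tmp;

		/* branchless refill: afterwards 56 <= bitsleft <= 63 */
		*bitbuf   |= get_unaligned_u64(*pp) << *bitsleft;
		*pp       += 7 - ((*bitsleft >> 3) & 0x7);
		*bitsleft |= 56;

		/* consume n bits */
		tmp        = *bitbuf & (((uint64_t)1 << n) - 1);
		*bitbuf  >>= n;
		*bitsleft -= n;

		return tmp;
	}

Note the refill always loads a full 8 bytes, so it can read past the last
byte the stream actually uses - the input buffer needs padding, which is
the same issue the btree_data_end() change at the bottom of this mail
deals with.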
> But, I really just wanted to mention it for completeness, since I think you'd
> actually want to go in a slightly different direction, since (a) you have all
> the field widths available from the beginning, as opposed to being interleaved
> into the bitstream itself (as is the case in Huffman decoding for example), so
> you're not limited to serialized decoding of each field, (b) your fields are up
> to 96 bits, and (c) you've selected a bitstream convention that seems to make it
> such that your stream *must* be read in aligned units of u64, so I don't think
> something like REFILL_BITS_BRANCHLESS() could work for you anyway.
>
> What I would suggest instead is preprocessing the list of 6 field lengths to
> create some information that can be used to extract all 6 fields branchlessly
> with no dependencies between different fields. (And you clearly *can* add a
> preprocessing step, as you already have one -- the dynamic code generator.)
>
> So, something like the following:
>
>	const struct field_info *info = &format->fields[0];
>
>	field0 = (in->u64s[info->word_idx] >> info->shift1) & info->mask;
>	field0 |= in->u64s[info->word_idx - 1] >> info->shift2;
>
> ... but with the code for all 6 fields interleaved.
>
> On modern CPUs, I think that would be faster than your current C code.
>
> You could do better by creating variants that are specialized for specific
> common sets of parameters. During "preprocessing", you would select a variant
> and set an enum accordingly. During decoding, you would switch on that enum and
> call the appropriate variant. (This could also be done with a function pointer,
> of course, but indirect calls are slow these days...)
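If I'm reading that right, each extraction would look something like this
(a sketch with made-up names - this field_info isn't your field_info, it
caps fields at 64 bits where mine go up to 96, and like the refill above
it can read one u64 past the field's own word, so the padding caveat
applies):

	#include <stdint.h>

	struct field_info {
		uint8_t		word_idx;	/* u64 holding the field's low bits */
		uint8_t		shift;		/* bit offset within that word */
		uint64_t	mask;		/* selects the field's width */
	};

	static uint64_t extract_field(const uint64_t *words,
				      const struct field_info *info)
	{
		/* low bits from the first word */
		uint64_t v = words[info->word_idx] >> info->shift;

		/*
		 * High bits from the next word when the field straddles a
		 * word boundary: the double shift totals 64 - shift, and
		 * sidesteps the undefined 64-bit shift when shift == 0
		 * (where the contribution correctly becomes zero).
		 */
		v |= (words[info->word_idx + 1] << (63 - info->shift)) << 1;

		return v & info->mask;
	}

All six of those are independent, so the compiler and the CPU are free to
overlap them.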
testing random btree updates:

dynamically generated unpack:
rand_insert: 20.0 MiB with 1 threads in 33 sec, 1609 nsec per iter, 607 KiB per sec

old C unpack:
rand_insert: 20.0 MiB with 1 threads in 35 sec, 1672 nsec per iter, 584 KiB per sec

the Eric Biggers special:
rand_insert: 20.0 MiB with 1 threads in 35 sec, 1676 nsec per iter, 583 KiB per sec

Tested two versions of your approach: one without a shift value, and one
where we use a shift value to try to avoid unaligned access - the second
was perhaps 1% faster, so it's not looking good. This benchmark doesn't
even hit unpack_key() quite as much as I thought, so the real difference
is significant.

diff --git a/fs/bcachefs/bkey.c b/fs/bcachefs/bkey.c
index 6d3a1c096f..128d96766c 100644
--- a/fs/bcachefs/bkey.c
+++ b/fs/bcachefs/bkey.c
@@ -7,6 +7,8 @@
 #include "bset.h"
 #include "util.h"
 
+#include <asm/unaligned.h>
+
 #undef EBUG_ON
 
 #ifdef DEBUG_BKEYS
@@ -19,9 +21,35 @@ const struct bkey_format bch2_bkey_format_current = BKEY_FORMAT_CURRENT;
 
 struct bkey_format_processed bch2_bkey_format_postprocess(const struct bkey_format f)
 {
-	return (struct bkey_format_processed) {
-		.f = f,
-	};
+	struct bkey_format_processed ret = { .f = f, .aligned = true };
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+	unsigned offset = f.key_u64s * 64;
+#else
+	unsigned offset = KEY_PACKED_BITS_START;
+#endif
+
+	for (unsigned i = 0; i < BKEY_NR_FIELDS; i++) {
+		unsigned bits = f.bits_per_field[i];
+
+		if (bits & 7) {
+			ret.aligned = false;
+			break;
+		}
+
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+		offset -= bits;
+#endif
+
+		ret.shift[i]	= min(offset & 63, 64 - bits);
+		ret.offset[i]	= (offset - ret.shift[i]) / 8;
+		ret.mask[i]	= bits ? ~0ULL >> (64 - bits) : 0;
+
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+		offset += bits;
+#endif
+	}
+
+	return ret;
 }
 
 void bch2_bkey_packed_to_binary_text(struct printbuf *out,
@@ -191,6 +219,19 @@ static u64 get_inc_field(struct unpack_state *state, unsigned field)
 	return v + offset;
 }
 
+__always_inline
+static u64 get_aligned_field(const struct bkey_format_processed *f,
+			     const struct bkey_packed *in,
+			     unsigned field_idx)
+{
+	u64 v = get_unaligned((u64 *) (((u8 *) in->_data) + f->offset[field_idx]));
+
+	v >>= f->shift[field_idx];
+	v &= f->mask[field_idx];
+
+	return v + le64_to_cpu(f->f.field_offset[field_idx]);
+}
+
 __always_inline
 static bool set_inc_field(struct pack_state *state, unsigned field, u64 v)
 {
@@ -269,45 +310,57 @@ bool bch2_bkey_transform(const struct bkey_format *out_f,
 	return true;
 }
 
-struct bkey __bch2_bkey_unpack_key(const struct bkey_format_processed *format_p,
+struct bkey __bch2_bkey_unpack_key(const struct bkey_format_processed *format,
 				   const struct bkey_packed *in)
 {
-	const struct bkey_format *format = &format_p->f;
-	struct unpack_state state = unpack_state_init(format, in);
 	struct bkey out;
 
-	EBUG_ON(format->nr_fields != BKEY_NR_FIELDS);
-	EBUG_ON(in->u64s < format->key_u64s);
+	EBUG_ON(format->f.nr_fields != BKEY_NR_FIELDS);
+	EBUG_ON(in->u64s < format->f.key_u64s);
 	EBUG_ON(in->format != KEY_FORMAT_LOCAL_BTREE);
-	EBUG_ON(in->u64s - format->key_u64s + BKEY_U64s > U8_MAX);
+	EBUG_ON(in->u64s - format->f.key_u64s + BKEY_U64s > U8_MAX);
 
-	out.u64s	= BKEY_U64s + in->u64s - format->key_u64s;
+	out.u64s	= BKEY_U64s + in->u64s - format->f.key_u64s;
 	out.format	= KEY_FORMAT_CURRENT;
 	out.needs_whiteout = in->needs_whiteout;
 	out.type	= in->type;
 	out.pad[0]	= 0;
 
+	if (likely(format->aligned)) {
+#define x(id, field)	out.field = get_aligned_field(format, in, id);
+		bkey_fields()
+#undef x
+	} else {
+		struct unpack_state state = unpack_state_init(&format->f, in);
+
 #define x(id, field)	out.field = get_inc_field(&state, id);
-	bkey_fields()
+		bkey_fields()
 #undef x
+	}
 
 	return out;
 }
 
-struct bpos __bkey_unpack_pos(const struct bkey_format_processed *format_p,
+struct bpos __bkey_unpack_pos(const struct bkey_format_processed *format,
 			      const struct bkey_packed *in)
 {
-	const struct bkey_format *format = &format_p->f;
-	struct unpack_state state = unpack_state_init(format, in);
 	struct bpos out;
 
-	EBUG_ON(format->nr_fields != BKEY_NR_FIELDS);
-	EBUG_ON(in->u64s < format->key_u64s);
+	EBUG_ON(format->f.nr_fields != BKEY_NR_FIELDS);
+	EBUG_ON(in->u64s < format->f.key_u64s);
 	EBUG_ON(in->format != KEY_FORMAT_LOCAL_BTREE);
 
-	out.inode	= get_inc_field(&state, BKEY_FIELD_INODE);
-	out.offset	= get_inc_field(&state, BKEY_FIELD_OFFSET);
-	out.snapshot	= get_inc_field(&state, BKEY_FIELD_SNAPSHOT);
+	if (likely(format->aligned)) {
+		out.inode	= get_aligned_field(format, in, BKEY_FIELD_INODE);
+		out.offset	= get_aligned_field(format, in, BKEY_FIELD_OFFSET);
+		out.snapshot	= get_aligned_field(format, in, BKEY_FIELD_SNAPSHOT);
+	} else {
+		struct unpack_state state = unpack_state_init(&format->f, in);
+
+		out.inode	= get_inc_field(&state, BKEY_FIELD_INODE);
+		out.offset	= get_inc_field(&state, BKEY_FIELD_OFFSET);
+		out.snapshot	= get_inc_field(&state, BKEY_FIELD_SNAPSHOT);
+	}
 
 	return out;
 }
diff --git a/fs/bcachefs/btree_types.h b/fs/bcachefs/btree_types.h
index 58ce60c37e..38c3ec6852 100644
--- a/fs/bcachefs/btree_types.h
+++ b/fs/bcachefs/btree_types.h
@@ -70,6 +70,10 @@ struct btree_bkey_cached_common {
 
 struct bkey_format_processed {
 	struct bkey_format	f;
+	bool			aligned;
+	u8			offset[6];
+	u8			shift[6];
+	u64			mask[6];
 };
 
 struct btree {
diff --git a/fs/bcachefs/btree_update_interior.h b/fs/bcachefs/btree_update_interior.h
index dcfd7ceacc..72aedc1e34 100644
--- a/fs/bcachefs/btree_update_interior.h
+++ b/fs/bcachefs/btree_update_interior.h
@@ -181,7 +181,11 @@ static inline void btree_node_reset_sib_u64s(struct btree *b)
 
 static inline void *btree_data_end(struct bch_fs *c, struct btree *b)
 {
-	return (void *) b->data + btree_bytes(c);
+	/*
+	 * __bch2_bkey_unpack_key() may read up to 8 bytes past the end of the
+	 * input buffer:
+	 */
+	return (void *) b->data + btree_bytes(c) - 8;
 }
 
 static inline struct bkey_packed *unwritten_whiteouts_start(struct bch_fs *c,
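(A worked example of what bch2_bkey_format_postprocess() computes, for a
hypothetical 2-u64 little-endian format with bits_per_field =
{40, 40, 16, 16, 8, 8}, fields packed from the high bits down:

  field  bits  starts at bit  offset[]  shift[]  mask[]
    0     40        88            8       24     (1ULL << 40) - 1
    1     40        48            3       24     (1ULL << 40) - 1
    2     16        32            0       32     0xffff
    3     16        16            0       16     0xffff
    4      8         8            0        8     0xff
    5      8         0            0        0     0xff

For field 1, offset & 63 would be 48, but a 40-bit field can't be shifted
down by more than 24 and still fit in one u64 load - that's what the
min(offset & 63, 64 - bits) clamp is for: it slides the load window down
to byte 3 so the whole field lands inside a single, possibly unaligned,
8-byte read.)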