workflows.vger.kernel.org archive mirror
From: Willy Tarreau <w@1wt.eu>
To: Konstantin Ryabitsev <konstantin@linuxfoundation.org>
Cc: Mark Brown <broonie@kernel.org>,
	users@kernel.org, tools@kernel.org, workflows@vger.kernel.org
Subject: Re: Toy/demo: using ChatGPT to summarize lengthy LKML threads (b4 integration)
Date: Wed, 28 Feb 2024 18:58:05 +0100	[thread overview]
Message-ID: <Zd90LW6jZvBBP7X1@1wt.eu> (raw)
In-Reply-To: <20240228-urban-petrel-of-serenity-037e7d@lemur>

On Wed, Feb 28, 2024 at 12:52:43PM -0500, Konstantin Ryabitsev wrote:
> On Wed, Feb 28, 2024 at 04:29:53PM +0100, Willy Tarreau wrote:
> > > Another use for this that I could think is a way to summarize digests.
> > > Currently, if you choose a digest subscription, you will receive a single
> > > email with message subjects and all the new messages as individual
> > > attachments. It would be interesting to see if we can send out a "here's
> > > what's new" summary with links to threads instead.
> > 
> > Indeed!
> > 
> > > The challenge would be to do it in a way that doesn't bankrupt LFIT in the
> > > process. :)
> > 
> > That's exactly why it would make sense to invest in one large machine
> > and let it operate locally while "only" paying the power bill.
> 
> I'm not sure how realistic this is, if it takes 10 minutes to process a single
> 4000-word thread. :)

I know. People are getting much better performance with GPUs, and on Macs
in particular. I have not investigated those options at all; I'm only
relying on commodity hardware. I shared the commands so that anyone
interested who has the hardware can try it as well. I don't know how far
we can shrink that time.
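For scale, the 10-minute figure for a 4000-word thread quoted above implies a rough token throughput. A small sketch (the tokens-per-word ratio is an assumed average for English text, not a measured value):

```python
# Implied throughput of the CPU run discussed above: a 4000-word thread
# processed in 10 minutes. TOKENS_PER_WORD is an assumed average for
# English text, not a measured value.
TOKENS_PER_WORD = 1.3

def tokens_per_second(words: int, minutes: float) -> float:
    """Approximate tokens processed per second for a given run."""
    return words * TOKENS_PER_WORD / (minutes * 60)

if __name__ == "__main__":
    # Roughly 8.7 tokens/s, which is in the ballpark of CPU-only
    # inference; GPU or Apple-silicon backends are commonly an order
    # of magnitude faster.
    print(f"{tokens_per_second(4000, 10):.1f} tokens/s")
```

This is only a back-of-envelope figure, but it shows why faster backends shrink the wall-clock time roughly linearly.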

> With ChatGPT it would probably cost thousands of dollars
> daily if we did this for large lists (and it doesn't really make sense to do
> this on small lists anyway, as the whole purpose behind the idea is to
> summarize lists with lots of traffic).

Sure.
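The cost concern above is easy to bound with a back-of-envelope estimate. A sketch, where every figure (per-token price, average message length, daily traffic, tokens-per-word ratio) is an illustrative assumption rather than a quoted price or measured traffic number:

```python
# Back-of-envelope input-token cost of feeding one day of a busy list's
# traffic to a hosted LLM API. All constants below are illustrative
# assumptions, not real prices or real traffic figures.
PRICE_PER_1K_INPUT_TOKENS = 0.01   # assumed USD rate for a large hosted model
TOKENS_PER_WORD = 1.3              # assumed English tokenization ratio
WORDS_PER_MESSAGE = 500            # assumed average message length
MESSAGES_PER_DAY = 1000            # assumed traffic on a high-volume list

def daily_cost_usd(messages: int = MESSAGES_PER_DAY,
                   words: float = WORDS_PER_MESSAGE) -> float:
    """Input-token cost (USD) of one day of list traffic."""
    tokens = messages * words * TOKENS_PER_WORD
    return tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS

if __name__ == "__main__":
    print(f"~${daily_cost_usd():.2f}/day for one list")
```

The point is that cost scales linearly with traffic, message length, and per-token price, so across many large lists (and counting output tokens too) the bill grows quickly.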

> For the moment, I will document how I got this working and maybe look into
> further shrinking the amount of data that would be needed to be sent to the
> LLM. I will definitely need to make it easy to use a local model, since
> relying on a proprietary service (of questionable repute in the eyes of many)
> would not be in the true spirit of what we are all trying to do here.

I tend to think that these solutions, both hosted and local, will evolve
very quickly, so it's prudent not to stick to a single approach anyway.
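One way to avoid sticking to a single approach is to keep the summarizer backend-agnostic. A minimal sketch; the `SummaryBackend` protocol and `DummyBackend` are hypothetical illustrations, not anything b4 actually provides:

```python
# Sketch of a backend-agnostic summarizer, so a hosted API or a local
# model can be swapped without touching the rest of the pipeline.
# SummaryBackend and DummyBackend are hypothetical; b4 exposes no such
# interface today.
from typing import Protocol

class SummaryBackend(Protocol):
    def summarize(self, thread_text: str) -> str: ...

class DummyBackend:
    """Stand-in backend: truncates instead of calling any model."""
    def summarize(self, thread_text: str) -> str:
        return thread_text[:80] + ("..." if len(thread_text) > 80 else "")

def summarize_thread(backend: SummaryBackend, messages: list[str]) -> str:
    """Join a thread's messages and hand them to the configured backend."""
    return backend.summarize("\n---\n".join(messages))

if __name__ == "__main__":
    print(summarize_thread(DummyBackend(), ["first message", "second message"]))
```

With this shape, switching between OpenAI, a local llama.cpp server, or whatever comes next is a one-line configuration change rather than a rewrite.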

> As I
> said, I was mostly toying around with $25 worth credits that I had with
> OpenAI.

And that was a great experiment that showed really interesting results!

Cheers,
Willy


Thread overview: 23+ messages
2024-02-27 22:32 Konstantin Ryabitsev
2024-02-27 23:35 ` Junio C Hamano
2024-02-28  0:43 ` Linus Torvalds
2024-02-28 20:46   ` Shuah Khan
2024-02-29  0:33   ` James Bottomley
2024-02-28  5:00 ` Willy Tarreau
2024-02-28 14:03   ` Mark Brown
2024-02-28 14:39     ` Willy Tarreau
2024-02-28 15:22     ` Konstantin Ryabitsev
2024-02-28 15:29       ` Willy Tarreau
2024-02-28 17:52         ` Konstantin Ryabitsev
2024-02-28 17:58           ` Willy Tarreau [this message]
2024-02-28 19:16             ` Konstantin Ryabitsev
2024-02-28 15:04   ` Hannes Reinecke
2024-02-28 15:15     ` Willy Tarreau
2024-02-28 17:43     ` Jonathan Corbet
2024-02-28 18:52       ` Alex Elder
2024-02-28 18:55 ` Bart Van Assche
2024-02-29  7:18   ` Hannes Reinecke
2024-02-29  8:37     ` Theodore Ts'o
2024-03-01  1:13     ` Bart Van Assche
2024-02-29  9:30   ` James Bottomley
2024-02-28 19:32 ` Luis Chamberlain
