| From: | Nathan Bossart <nathandbossart(at)gmail(dot)com> |
|---|---|
| To: | Nazir Bilal Yavuz <byavuz81(at)gmail(dot)com> |
| Cc: | Andrew Dunstan <andrew(at)dunslane(dot)net>, Shinya Kato <shinya11(dot)kato(at)gmail(dot)com>, Manni Wood <manni(dot)wood(at)enterprisedb(dot)com>, KAZAR Ayoub <ma_kazar(at)esi(dot)dz>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: Speed up COPY FROM text/CSV parsing using SIMD |
| Date: | 2025-11-24 21:59:21 |
| Message-ID: | aSTVOe6BIe5f1l3i@nathan |
| Lists: | pgsql-hackers |
On Thu, Nov 20, 2025 at 03:55:43PM +0300, Nazir Bilal Yavuz wrote:
> On Thu, 20 Nov 2025 at 00:01, Nathan Bossart <nathandbossart(at)gmail(dot)com> wrote:
>> + /* Load a chunk of data into a vector register */
>> + vector8_load(&chunk, (const uint8 *) &copy_input_buf[input_buf_ptr]);
>>
>> In other places, processing 2 or 4 vectors of data at a time has proven
>> faster. Have you tried that here?
>
> Sorry, I could not find the related code piece. I only saw the
> vector8_load() inside of hex_decode_safe() function and its comment
> says:
>
> /*
> * We must process 2 vectors at a time since the output will be half the
> * length of the input.
> */
>
> But this does not mention any speedup from using 2 vectors at a time.
> Could you please show the related code?
See pg_lfind32().
--
nathan