| From: | Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com> |
|---|---|
| To: | pramod gupta <mail2sony2010(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org |
| Subject: | Re: How did VACUUM ANALYZE reclaim large TOAST bloat at disk level in PostgreSQL 16? |
| Date: | 2026-01-15 16:51:19 |
| Message-ID: | fbd82423-5553-4c84-9ceb-c3032b597bc4@aklaver.com |
| Lists: | pgsql-general |
On 1/15/26 08:42, pramod gupta wrote:
> Hello Everyone,
>
> I am encountering the following error while using the *pg_ai_query*
> extension.
>
> /test=# SELECT generate_query('show recent orders', null, 'gemini');
> ERROR: Query generation failed: No API key available for gemini
> provider. Please provide API key as parameter or configure it in
> ~/.pg_ai.config.
> test=#
> test=# SELECT generate_query('show recent orders', null,
> 'google_ai_studio');
> ERROR: Query generation failed: API key required. Pass as parameter or
> set OpenAI, Anthropic, or Gemini API key in ~/.pg_ai.config.
> test=#/
>
> I have verified all required permissions and confirmed that the
> configuration file is correctly placed; however, I continue to receive
> the above error when using the *pg_ai.config* file.
>
> If I execute the query by explicitly passing the API key, it works as
> expected:
>
> /select generate_query('I want to count the rows in orders
> tables','MYAPIKEY','gemini');/
>
>
> This works perfectly. I would appreciate guidance on how to configure
> this so that the API key does not need to be passed explicitly in the
> query. Please let me know if there is any alternative configuration or
> recommended approach to achieve this.
It is AI, it should know the answer.
That being said:
1) In the config file, did you follow the format shown here:
https://benodiwal.github.io/pg_ai_query/configuration.html
2) Is the OS user you run the client as the one whose home directory
contains ~/.pg_ai.config?
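
If it helps, a minimal sketch of how to check point 2 from the same psql
session (this assumes the extension looks the file up in the home directory
of the OS user the client runs as; adjust if it is actually read on the
server side):

    test=# \! whoami
    test=# \! ls -l ~/.pg_ai.config

If ls reports "No such file or directory", the file is not where it is
expected for that user.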
>
> Thanks in advance.
> Pramod Gupta
> ---------------------------------------------------------------------------------------------------------------------------------------
>
> On Mon, Dec 29, 2025 at 9:23 PM pramod gupta <mail2sony2010(at)gmail(dot)com
> <mailto:mail2sony2010(at)gmail(dot)com>> wrote:
>
> Hello Everyone,
>
> We have a table with a total size of ~628 GB, out of which ~601 GB
> was TOAST data.
> After running VACUUM ANALYZE on a weekly basis, the table size
> reduced significantly to ~109 GB, indicating a large amount of bloat
> removal.
>
> I would like to understand:
>
> How was VACUUM ANALYZE able to reclaim such a large amount of space,
> especially for TOAST data?
>
> Under what conditions does PostgreSQL reclaim disk space without
> requiring VACUUM FULL or CLUSTER?
>
> Is this behavior expected in PostgreSQL 16, particularly for heavily
> updated or deleted TOASTed columns?
>
> Any insights or documentation references would be greatly appreciated.
>
> PostgreSQL version: 16
>
> Thanks in advance.
> Pramod Gupta
>
--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com