Re: Reasoning behind process instead of thread based

From: Doug McNaught <doug(at)mcnaught(dot)org>
To: nd02tsk(at)student(dot)hig(dot)se
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Reasoning behind process instead of thread based
Date: 2004-10-27 16:12:06
Message-ID: 87hdoggmy1.fsf@asmodeus.mcnaught.org
Lists: pgsql-general

nd02tsk(at)student(dot)hig(dot)se writes:

> "The developers agree that multiple processes provide
> more benefits (mostly in stability and robustness) than costs (more
> connection startup costs). The startup costs are easily overcome by
> using connection pooling.
> "
>
> Please explain why it is more stable and robust?

Because threads share a single address space, a runaway thread can
corrupt the entire server by writing to memory it should not touch.
With separate processes, the only data shared between backends is what
is explicitly meant to be shared, which limits how much damage a
misbehaving process can do.
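
To make that concrete, here is a quick standalone sketch (plain POSIX
C, not anything from the PostgreSQL source): after fork(), the child
gets its own copy of the parent's memory, so even if it scribbles on a
variable, the parent never sees the corruption. With threads, the same
stray write would be visible, and potentially fatal, to everyone.

/* Sketch only: shows that a child process cannot corrupt the parent's
 * copy of a variable by writing to its own. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

static int important_state = 42;   /* each process gets its own copy after fork() */

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        important_state = -1;       /* "runaway" child scribbles on its copy */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent still sees %d\n", important_state);  /* prints 42 */
    return 0;
}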

> "Also, each query can only use one processor; a single query can't be
> executed in parallel across many CPUs. However, several queries running
> concurrently will be spread across the available CPUs."
>
> And it is because of the PostgreSQL process architecture that a query
> can't be executed by many CPUs, right?

There's no theoretical reason that a query couldn't be split across
multiple helper processes, but no one's implemented that feature--it
would be a pretty major job.
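
You can see the one-backend-per-connection behaviour the quoted
paragraph describes with a small libpq client. This is just a sketch
(it assumes libpq, POSIX threads, and a server reachable through the
placeholder conninfo string): each connection is served by its own
server process, which pg_backend_pid() reports, and the OS is free to
schedule those backends on different CPUs.

/* Sketch only: build along the lines of
 *   cc concurrent.c -lpq -lpthread
 */
#include <stdio.h>
#include <stdint.h>
#include <pthread.h>
#include <libpq-fe.h>

static void *run_query(void *arg)
{
    long id = (long) (intptr_t) arg;
    PGconn *conn = PQconnectdb("dbname=postgres");   /* placeholder conninfo */

    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return NULL;
    }

    /* pg_backend_pid() reports the dedicated server process for this session */
    PGresult *res = PQexec(conn, "SELECT pg_backend_pid()");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("client thread %ld served by backend pid %s\n",
               id, PQgetvalue(res, 0, 0));

    PQclear(res);
    PQfinish(conn);
    return NULL;
}

int main(void)
{
    pthread_t threads[4];

    for (long i = 0; i < 4; i++)
        pthread_create(&threads[i], NULL, run_query, (void *) (intptr_t) i);
    for (int i = 0; i < 4; i++)
        pthread_join(threads[i], NULL);
    return 0;
}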

> Also, MySQL has a library for embedded applications, they say:
>
> "We also provide MySQL Server as an embedded multi-threaded library that
> you can link into your application to get a smaller, faster,
> easier-to-manage product."
>
> Does PostgreSQL offer anything similar?

No. See the archives for extensive discussion of why PG doesn't do
this.

-Doug
