On Thu, Mar 29, 2012 at 6:37 PM, Dobes Vandermeer <dobesv(at)gmail(dot)com> wrote:
> On Fri, Mar 30, 2012 at 3:59 AM, Daniel Farina <daniel(at)heroku(dot)com> wrote:
>> On Thu, Mar 29, 2012 at 12:57 PM, Daniel Farina <daniel(at)heroku(dot)com> wrote:
>> More technical concerns:
>> > * Protocol compression -- but a bit of sand in the gears is *which*
>> > compression -- for database workloads, the performance of zlib can be
>> > a meaningful bottleneck.
> I think if performance is the issue, people should use the native protocol.
> This HTTP thing should be more of a RAD / prototyping thing, I think. So
> people can be in their comfort zone when talking to the server.
No, I do not think so. I think a reasonable solution (part of what MS
is actually proposing to the IETF) is to make compression optional, or
to have clients support an LZ77-family format like Snappy. I get the
impression that SPDY is waffling a little here: it mandates
compression, and specifically zlib, but is less heavy-handed about
pushing, say, Snappy. Although I can understand why a
Google-originated technology probably doesn't want to push another
Google-originated implementation too hard, it would have been
convenient for me had Snappy become a more common format.
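To make the "optional compression" idea concrete, here is a minimal
sketch in Python of server-side content-coding negotiation; the codec
names and the preference rule are assumptions of mine, not anything
Postgres or SPDY actually specifies:

```python
# Hypothetical sketch: a server that treats compression as optional and
# picks the first client-offered coding it supports. The "snappy" entry
# is illustrative; a real server would consult its installed codecs.

def pick_encoding(accept_encoding, supported=("snappy", "gzip", "identity")):
    """Pick the first supported coding the client offers, in client order.

    `accept_encoding` is a comma-separated header value such as
    "snappy, gzip;q=0.8". q-values are ignored in this sketch; the
    client's listing order is taken as its preference.
    """
    offered = [tok.split(";")[0].strip().lower()
               for tok in accept_encoding.split(",") if tok.strip()]
    for coding in offered:
        if coding in supported:
            return coding
    # Identity (no compression) is the fallback when nothing matches.
    return "identity"
```

The point is only that the framing need not hard-wire zlib: the peers
can agree per-connection, and a client that wants no compression at all
simply offers nothing.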
> Isn't the URL good enough (/databases/<dbname>) or are you talking about
> having some sort of "virtual host" setup where you have multiple sets of
> databases available on the same port?
Virtual hosts. Same port.
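For illustration, routing on the Host header alone might look like this
sketch, the way an HTTP reverse proxy does virtual hosting (the
host-to-backend map is invented):

```python
# Sketch: many database clusters behind one port, selected by Host
# header. Hostnames and addresses below are made up for illustration.

CLUSTERS = {
    "tenant-a.db.example.com": ("10.0.0.11", 5432),
    "tenant-b.db.example.com": ("10.0.0.12", 5432),
}

def route(host_header):
    """Return the (address, port) backend for a Host header, or None."""
    host = host_header.split(":")[0].strip().lower()  # drop any :port suffix
    return CLUSTERS.get(host)
```

With FEBE, a router has to speak enough of the startup packet to learn
where a connection belongs; with Host-style routing, any off-the-shelf
HTTP-aware proxy already knows how.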
>> > * A standard frame extension format. For example, last I checked
>> > Postgres-XC, it required snapshot information to be passed, and it'd
>> > be nice that instead of having to hack the protocol that they could
>> > attach an X-Snapshot-Info header, or whatever. This could also be a
>> > nice way to pass out-of-band hint information of all sorts.
> I am sorry to admit I don't understand the terms "frame extension format" or
> "Postgres-XC" in this paragraph ... help?
I'm being vague. Postgres-XC is a project to provide a shared-nothing,
synchronous-replication cluster for Postgres. My last understanding is
that it needed to pass snapshot information between nodes, and FEBE was
expanded to make room for this, breaking compatibility, as well as
probably being at least a small chore. It'd be nice if that weren't
necessary and they had a much easier path to multiplex additional
information into the connection.
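As a sketch of what "attach an X-Snapshot-Info header" could mean in
practice -- the header name and the snapshot fields here are purely
illustrative, not Postgres-XC's actual representation:

```python
# Sketch: multiplexing out-of-band data (e.g. snapshot state between
# cluster nodes) as an extension header instead of changing the wire
# framing. A node that doesn't understand the header just ignores it.

def build_headers(sql, snapshot=None):
    headers = {
        "Content-Type": "text/x-sql",
        "Content-Length": str(len(sql.encode())),
    }
    if snapshot is not None:
        # Illustrative encoding of xmin/xmax/in-progress xids.
        headers["X-Snapshot-Info"] = "xmin={xmin};xmax={xmax};xip={xip}".format(**snapshot)
    return headers
```

The win is that the extension rides alongside the request without
touching the framing, so unmodified peers stay compatible.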
For my own purposes, I'm intensely interested in lacing the connection with:
* EXPLAIN ANALYZE returns when the query has already run, getting both
the actual timings *and* the results to the client.
* Partition IDs, whereby you can find the right database and
(potentially!) even influence how the queries are scoped to a tenant
* Read-only vs. write workload: as is well established, it's hard to
know a priori whether a query is going to do a write. Fine: let the
client tag it, and signal an error if something is wrong.
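The read-only tagging idea in the last bullet could be sketched like
this; the X-Workload header name and the crude keyword check are
assumptions of mine, standing in for the server's real executor-level
knowledge of whether a write occurred:

```python
# Sketch: the client declares its intent in a header, and the server
# (which genuinely knows when a write happens) raises if a request
# tagged read-only attempts one. The keyword scan below is only a
# stand-in for that server-side check.

WRITE_KEYWORDS = ("insert", "update", "delete", "create", "drop", "alter")

def check_intent(headers, sql):
    intent = headers.get("X-Workload", "write").lower()
    is_write = sql.strip().lower().startswith(WRITE_KEYWORDS)
    if intent == "read-only" and is_write:
        raise PermissionError("query tagged read-only attempted a write")
```

This flips the burden: a router can dispatch read-tagged traffic to
replicas without parsing SQL, and the tag is enforced where the truth
is known.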
Yes, theoretically all these features -- or just a general
multiplexing scheme -- can be added to FEBE, but if we're even going
to consider such an upheaval, maybe we can get a *lot* more bang for our
buck by trying to avoid being unnecessarily different from the most
common application-level protocol in existence, causing extraneous
work for router and proxy authors. Notably, a vanilla Postgres
database knows nothing about these extension headers.
>> > * HTTP -- and *probably* its hypothetical progeny -- are more common
>> > than FEBE packets, and a lot of incidental complexity of writing
>> > routers is reduced by the commonality of routing HTTP traffic.
> This is an interesting comment. I'm not sure how to test whether an HTTP
> based protocol will be better supported than a proprietary one, but I think
> we have enough other reasons that we can move forward. Well we have the
> reason that there's some kind of love affair with HTTP based protocols going
> on out there ... might as well ride the wave while it's still rising (I
At its core, what may be growing unnecessary is FEBE's own mechanism
for delimiting messages. All the other protocol actions -- shipping
Binds, Executes, Describes, et al. -- are not going to be obsoleted or
even changed by laying web-originated technologies underneath them.
Consider the wealth of projects, products, and services that filter
HTTP as opposed to FEBE, many of them quite old by now. In my mind,
"wave" might be better rendered "tsunami". The very real problem, as I
see it, is that classic, stateless HTTP would be just too slow to be
practical.
> As for SPDY I can see how it may be helpful but as it is basically a
> different way to send HTTP requests (from what I understand) the migration
> to SPDY is mainly a matter of adding support for it to whatever HTTP library
> is used.
I think SPDY or like protocols (there's only one other I can think of:
the very recently introduced and hilariously branded "S&M" from
Microsoft -- memorable, at least) are the only things that appear
to give a crisp treatment to interactive, stateful workloads involving
back-and-forth between client and server with multiplexing, fixing
some problems with the hacks in HTTP-land from before.
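The multiplexing that SPDY-family protocols provide boils down to
frames tagged with a stream id, so several logical sessions interleave
on one connection. A toy framing in Python -- emphatically not SPDY's
actual wire format:

```python
# Sketch: length-prefixed frames carrying a stream id, so independent
# request/response exchanges share one TCP connection. Layout (4-byte
# stream id + 4-byte length + payload) is invented for illustration.

import struct

def pack_frame(stream_id, payload):
    return struct.pack("!II", stream_id, len(payload)) + payload

def unpack_frames(buf):
    frames, off = [], 0
    while off < len(buf):
        stream_id, length = struct.unpack_from("!II", buf, off)
        off += 8
        frames.append((stream_id, buf[off:off + length]))
        off += length
    return frames
```

Stateful back-and-forth then costs one connection setup total, which is
exactly what classic one-request-per-connection HTTP lacks.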
> Anyone have a thought on whether, for the HTTP server itself, it should be
> integrated right into the PostgreSQL server itself? Or would it be better
> to have a separate process that proxies requests to PostgreSQL using the
> existing protocol? Is there an API that can be used in both cases
> semi-transparently (i.e. the functions have the same name when linked right
> in, or when calling via a socket)?
If SPDY/HTTP 2.0 were more common (or, indeed, existent), I'd advocate
in the latter case for swallowing it and making it just part of the
monolithic system. But it is still a time of transition, and jumping
the gun on it would be expensive.