
Re: [SQL] faster output from php and postgres

From: Chadwick Rolfs <cmr(at)shell(dot)gis(dot)net>
To: Jean-Luc Lachance <jllachan(at)nsd(dot)ca>
Cc: Frank Bax <fbax(at)sympatico(dot)ca>, pgsql-php(at)postgresql(dot)org, pgsql-sql(at)postgresql(dot)org
Subject: Re: [SQL] faster output from php and postgres
Date: 2003-05-27 20:29:16
Message-ID: Pine.GSO.4.05.10305271613130.10146-100000@shell.gis.net
Lists: pgsql-php, pgsql-sql
Yes, this may be better than foreach()ing through each returned publication.

But a plpgsql function may suit these needs much better.
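For reference, a sketch of what a single-query version could look like, using the author, publication, and pub_auth tables from the schema quoted below. Note that string_agg() only exists on newer PostgreSQL releases; on the servers current at the time of this thread you would need a custom aggregate or a plpgsql function to do the concatenation, so treat this as an illustration of the shape of the query rather than a drop-in answer:

```sql
-- One row per publication, with all of its authors folded into a
-- single "all_authors" column.  pub_auth is the join table linking
-- publications to authors.
SELECT p.pub_id,
       string_agg(a."last" || ', ' || a."first", '; ') AS all_authors,
       p.title, p.source, p.year, p.type, p.length, p.keywords
FROM publication p
JOIN pub_auth pa ON pa.pub_id  = p.pub_id
JOIN author   a  ON a.auth_id  = pa.auth_id
GROUP BY p.pub_id, p.title, p.source, p.year,
         p.type, p.length, p.keywords
ORDER BY p.pub_id;
```

PHP then loops over the result once and prints one table row per publication, with no inner query per pub_id.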

On Tue, 27 May 2003, Jean-Luc Lachance wrote:

> KISS
> 
> why not use PHP to concatenate the authors while pub_id is the same?
> If you insist on having each author in its own column, 
> put them at the end and concatenate with <TD>.
> 
> jll
> 
> 
> Chadwick Rolfs wrote:
> > 
> > So, I have the same problem, but I need all authors for each publication
> > to show up in its own column.  I tried the full join query from a
> > suggestion off pgsql-sql, but it only returns ONE author id TWICE instead
> > of ALL authors at once.
> > 
> > I'll do some RTFMing of the joins... and post any results I get.
> > 
> > BUT, right now, looping over each publication with php isn't taking that
> > long.  I would like to know how to make this query, though!
> > 
> > Please let me know how to get a result like:
> > ____________________________________________________
> > |All Authors|Title|Source|Year|Type|Length|Keywords|
> > ----------------------------------------------------
> > 
> > If there is a way on the SQL side to do this, that is ;>
> > 
> > Here we go:
> > 
> > CREATE TABLE "author" (
> >         "auth_id" integer DEFAULT
> > nextval('"author_temp_auth_id_seq"'::text) NOT NULL,
> >         "first" text,
> >         "last" text,
> >         "auth_last_updated" timestamp with time zone,
> >         Constraint "author_temp_pkey" Primary Key ("auth_id")
> > );
> > 
> > CREATE UNIQUE INDEX auth_id_author_key ON author USING btree (auth_id);
> > 
> > CREATE UNIQUE INDEX auth_last_updated_author_key ON author USING btree
> > (auth_last_updated);
> > 
> > CREATE TABLE "publication" (
> >         "copyis" text,
> >         "pub_id" integer DEFAULT nextval('publication_pub_id_seq'::text)
> > NOT NULL,
> >         "title" text,
> >         "source" text,
> >         "year" text,
> >         "month" text,
> >         "length" text,
> >         "type" text,
> >         "keywords" text,
> >         "copyright" text,
> >         "abstract" text,
> >         "pdformat" text,
> >         "pub_last_updated" timestamp with time zone
> > );
> > 
> > CREATE UNIQUE INDEX publication_pub_id_key ON publication USING btree
> > (pub_id);
> > 
> > CREATE INDEX keywords_publication_key ON publication USING btree
> > (keywords);
> > 
> > CREATE UNIQUE INDEX pub_last_updated_publication_ke ON publication USING
> > btree (pub_last_updated);
> > 
> > CREATE UNIQUE INDEX pub_id_publication_key ON publication USING btree
> > (pub_id);
> > 
> > CREATE TABLE "pub_auth" (
> >         "pub_auth_id" integer DEFAULT
> > nextval('"pub_auth_temp_pub_auth_id_seq"'::text) NOT NULL,
> >         "pub_id" integer,
> >         "auth_id" integer,
> >         Constraint "pub_auth_temp_pkey" Primary Key ("pub_auth_id")
> > );
> > 
> > CREATE INDEX pub_id_pub_auth_key ON pub_auth USING btree (pub_id);
> > 
> > CREATE INDEX auth_id_pub_auth_key ON pub_auth USING btree (auth_id);
> > 
> > On Sat, 24 May 2003, Frank Bax wrote:
> > 
> > > Previous examples of complex joins are not an easy thing to find in the
> > > archives.
> > >
> > > pg_dump -s -t author -t publication -t pub_auth [database] | grep -v ^--
> > >
> > > Change [database] to the name of your database - this command will dump
> > > out the schema relevant to your request.  Post the results to this list.
> > > Then ask us the question "how do I write a SELECT that produces...[you
> > > finish this sentence]".  The question is probably more appropriate to
> > > the list you mentioned, but I expect there are people here who are just
> > > as capable of answering it.  I've even seen examples where the process
> > > goes through several emails before the SQL produces exactly the desired
> > > results.
> > >
> > >  >How would a join make this easier?
> > >
> > > I have always found that one properly constructed complex query is
> > > "cheaper" in runtime than numerous queries inside a foreach loop.  Your
> > > final query will likely include joining a table to itself (this can
> > > sometimes be a difficult concept to grasp).
> > >
> > > Frank
> > >
> > >
> > > At 11:50 AM 5/24/03, Chadwick Rolfs wrote:
> > >
> > >
> > > >I'm glad this came up, because I have the same type of problem.  Except,
> > > >I don't see how a join can work... of course, I'm not really schooled in
> > > >this stuff.
> > > >
> > > >I also have three tables: author, publication, and pub_auth.
> > > >
> > > >There are multiple authors for some publications, so it is necessary to
> > > >check each publication selected for ALL authors.  I'm doing this with a
> > > >foreach loop on the result of each publication key returned.
> > > >
> > > >How would a join make this easier?
> > > >
> > > >I'm browsing the pgsql-sql archives now, but that may take a week.  I'm
> > > >not sure what to search for...
> > > >
> > 
> > -Chadwick
> > 
> 

-Chadwick
