I have a CTE that takes top-left and bottom-right latitude/longitude values
along with a start and end date, and it then counts the user requests that
came from those coordinates in hourly intervals between the given start and
end date. However, I want to execute this query for about 2600 separate
4-tuples of lat/lon corner values instead of typing them in one by one.
How would I do that? The query is below:
WITH cal AS (
    SELECT generate_series('2011-02-02 00:00:00'::timestamp,
                           '2012-04-01 05:00:00'::timestamp,
                           '1 hour'::interval) AS stamp
),
qqq AS (
    SELECT date_trunc('hour', calltime) AS stamp, count(*) AS zcount
    FROM calls  -- placeholder: the source table name was clipped in the archive
    WHERE calltime >= '2011-02-13 11:59:11'
      AND calltime <= '2012-02-13 22:02:21'
      AND (calltime::time >= '11:59:11'
           AND calltime::time <= '22:02:21')
      AND ((extract(DOW FROM calltime) = 3) /* OR (extract(DOW FROM calltime) = ...) */)
      AND lat BETWEEN '40' AND '42'
      AND lon BETWEEN '28' AND '30'
    GROUP BY date_trunc('hour', calltime)
)
SELECT cal.stamp, COALESCE(qqq.zcount, 0) AS zcount
FROM cal
LEFT JOIN qqq ON cal.stamp = qqq.stamp
WHERE cal.stamp >= '2011-02-13 11:00:00'
  AND cal.stamp <= '2012-02-13 22:02:21'
  AND ((extract(DOW FROM cal.stamp) = 3) /* OR (extract(DOW FROM cal.stamp) = ...) */)
  AND extract(hour FROM cal.stamp) >= extract(hour FROM '2011-02-13 11:59:11'::timestamp)
  AND extract(hour FROM cal.stamp) <= extract(hour FROM '2012-02-13 22:02:21'::timestamp)
ORDER BY stamp ASC;
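One way this could be done (a sketch, not from the original message; the table name `boxes` and its columns `box_id`, `lat_min`, `lat_max`, `lon_min`, `lon_max` are assumptions) is to load the ~2600 corner tuples into a table once, e.g. via COPY, then join that table into the aggregation and group by box id, so a single query covers every box:

```sql
-- Hypothetical table holding the ~2600 bounding boxes:
-- CREATE TABLE boxes (box_id  int PRIMARY KEY,
--                     lat_min numeric, lat_max numeric,
--                     lon_min numeric, lon_max numeric);

WITH cal AS (
    SELECT generate_series('2011-02-02 00:00:00'::timestamp,
                           '2012-04-01 05:00:00'::timestamp,
                           '1 hour'::interval) AS stamp
),
qqq AS (
    -- Count requests per box per hour; the date/time/DOW filters from
    -- the original query would be added to this WHERE clause.
    SELECT b.box_id,
           date_trunc('hour', c.calltime) AS stamp,
           count(*) AS zcount
    FROM calls c               -- "calls" stands in for the real table
    JOIN boxes b
      ON c.lat BETWEEN b.lat_min AND b.lat_max
     AND c.lon BETWEEN b.lon_min AND b.lon_max
    GROUP BY b.box_id, date_trunc('hour', c.calltime)
)
SELECT b.box_id, cal.stamp, COALESCE(qqq.zcount, 0) AS zcount
FROM boxes b
CROSS JOIN cal
LEFT JOIN qqq ON qqq.box_id = b.box_id
             AND qqq.stamp  = cal.stamp
ORDER BY b.box_id, cal.stamp;
```

The CROSS JOIN against the calendar series keeps the zero-count hours for every box, matching the behavior of the single-box version.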
Sample output from the query above:
"2011-02-16 11:00:00" 0
"2011-02-16 12:00:00" 70
"2011-02-16 13:00:00" 175
"2011-02-16 14:00:00" 97
"2011-02-16 15:00:00" 167
View this message in context: http://postgresql.1045698.n5.nabble.com/Calling-the-CTE-for-multiple-inputs-tp5726661.html
Sent from the PostgreSQL - sql mailing list archive at Nabble.com.