large database

From: "Mihai Popa" <mihai(at)lattica(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: large database
Date: 2012-12-10 20:26:02
Message-ID: 35591.199.243.102.42.1355171162.squirrel@lattica.com
Lists: pgsql-general

Hi,

I've recently inherited a project that involves importing a large set of
Access mdb files into a Postgres or MySQL database.
The process is to export the mdb's to comma-separated files, then import
those into the final database.
We are now at the point where the csv files are all created and amount
to some 300 GB of data.
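The per-file import itself should be straightforward; this is roughly what I have in mind (table, column, and file names below are made up for illustration, not our actual schema):

```sql
-- Illustrative placeholder schema; the real tables come from the mdb exports.
CREATE TABLE orders (
    order_id   integer,
    customer   text,
    amount     numeric(12,2),
    created_at timestamp
);

-- COPY reads the file on the server; from psql, \copy does the same
-- client-side, which matters if the database is remote (e.g. RDS).
COPY orders FROM '/data/csv/orders.csv' WITH (FORMAT csv, HEADER true);
```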

I would like to get some advice on the best deployment option.

First, the project has been started using MySQL. Is it worth switching
to Postgres and if so, which version should I use?

Second, where should I deploy it? The cloud or a dedicated box?

Amazon seems like the sensible choice; you can scale it up and down as
needed and backup is handled automatically.
I was thinking of an x-large RDS instance with 10000 IOPS and 1 TB of
storage. Would this do, or will I end up with a larger/ more expensive
instance?

Alternatively, I looked at a Dell server with 32 GB of RAM and some
really good hard drives. But such a box does not come cheap, and I don't
want to be stuck with the hardware if it doesn't cut it.

thank you,

--
Mihai Popa <mihai(at)lattica(dot)com>
Lattica, Inc.
