Thanks for insights into internal design

From: Typing80wpm(at)aol(dot)com
To: pgsql-general(at)postgresql(dot)org
Subject: Thanks for insights into internal design
Date: 2005-04-28 06:28:25
Message-ID: 1fc.7e9d1b.2fa214c9@aol.com
Lists: pgsql-general

You've given me valuable insight into the inner workings of such software. I
am a firm believer in testing everything with very large files. One might
spend months developing something, and have it in production for a year,
without ever realizing what will happen when the files (tables) grow to
several million records (rows). And it takes so little effort to create
large test files.
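
For what it's worth, here is a minimal sketch of one way to do that,
assuming Python with the psycopg2 driver and a throwaway table name of my
own choosing (big_test); PostgreSQL's generate_series() does the heavy
lifting server-side:

    import psycopg2  # assumed driver; any PostgreSQL client library works

    # Connection parameters are placeholders -- adjust for your setup.
    conn = psycopg2.connect("dbname=test user=postgres")
    cur = conn.cursor()

    # A throwaway table is enough to see how a front end copes with
    # hundreds of thousands of rows.
    cur.execute("CREATE TABLE big_test (id integer PRIMARY KEY, payload text)")

    # generate_series() builds the rows inside the server, so a quarter
    # of a million rows takes seconds, with no hand-written loop.
    cur.execute("""
        INSERT INTO big_test (id, payload)
        SELECT g, md5(g::text)
        FROM generate_series(1, 250000) AS g
    """)
    conn.commit()
    cur.close()
    conn.close()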

========================

A long time ago, in a galaxy far, far away, Typing80wpm(at)aol(dot)com wrote:
> I must say one interesting thing. When I downloaded the trial version
> from TheKompany, and asked it to browse a test file in PGSql which I
> loaded with 250,000 rows, it started to read them, and read for a
> long long time (as MSAccess does), but then seemed to get stuck,
> whereas MSAccess is able to browse the entire file. I must
> experiment more with the demo version from theKompany, and also with
> this free version from the site you gave me.

This sort of problem is characteristic of the use of "array" objects
in graphical toolkits.

Suppose you're populating something with 250K rows, perhaps with a
dozen fields per row. In such a case, the toolkit is slinging around
3-4 million objects, and having to evaluate which of them are visible
on screen at any given time.
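
Purely as an illustration of the pattern (not a claim about how Rekall or
MSAccess are actually built), here is the naive "one widget per cell"
approach sketched in Python/Tkinter; the counts are kept tiny on purpose,
since 250,000 rows times a dozen columns would mean roughly three million
widget objects:

    import tkinter as tk  # illustrative only; the same issue bites any toolkit

    ROWS, COLS = 50, 12   # keep this tiny -- at 250,000 x 12 you'd be asking
                          # the toolkit to manage ~3 million widget objects

    root = tk.Tk()
    for r in range(ROWS):
        for c in range(COLS):
            # One Label object per cell: memory and layout work grow with
            # rows x columns, whether or not the cell is ever on screen.
            tk.Label(root, text="r%d c%d" % (r, c)).grid(row=r, column=c)
    root.mainloop()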

_Any_ kind of inefficiency in the library, or in the use of the
library, can easily make rendering perform really, really badly.

The X Window System has gotten heavily criticized for speed problems,
commonly with respect to how Mozilla used to work when rendering large
web pages. The reality was that Mozilla was implemented (this is no
longer true, by the way) atop a platform-independent library called
Rogue Wave, which then had a mapping to Motif (which is noted as Not
Everyone's Favorite Graphics Library ;-)), which then rendered things
using X. The True Problem lay somewhere in that set of layers and,
since several of the layers were pretty inscrutable, it was
essentially impractical to address the performance problem.

Much the same thing took place with the Tcl/Tk application "cbb"
(Check Book Balancer); the Tk 'array' object behaved increasingly
badly as the row count grew into the thousands. And changing one
transaction near the top of an account would lead to cascading
balance updates, walking linearly through the rest of the
transactions to update every single balance (more than likely
leading to superlinear resource consumption overall :-()...
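
To make the cost concrete, here is a toy sketch (plain Python, nothing to
do with cbb's actual Tcl code) of why a register that stores running
balances pays O(n) for an edit near the top, and O(n^2) if every edit
triggers a recompute:

    # Toy check-register: each entry is (description, amount).
    entries = [("opening deposit", 1000.00)] + [("coffee", -2.50)] * 10000

    def recompute_from(entries, balances, index):
        """Recompute running balances from `index` to the end.

        Editing entry 0 means touching every one of the len(entries)
        balances; do that for n edits and the total work is quadratic.
        """
        running = balances[index - 1] if index > 0 else 0.0
        for i in range(index, len(entries)):
            running += entries[i][1]
            balances[i] = running
        return balances

    balances = recompute_from(entries, [0.0] * len(entries), 0)

    # Change the very first transaction: every later balance must be redone.
    entries[0] = ("opening deposit", 500.00)
    recompute_from(entries, balances, 0)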

Gigahertz, Gigabytes, and upgrades may overcome that, to some degree,
but it wouldn't be overly surprising if you were hitting some such
unfortunate case. It might represent something fixed in a later
release of Rekall; it could represent something thorny to resolve.

I would really hate the notion of depending on a GUI to manage
millions of objects in this manner; it is just so easy for it to go
badly.
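
The usual way out, if the front end cooperates, is to never hand the
widget more than a screenful or so at a time. A rough sketch using
psycopg2's server-side ("named") cursors, against the same made-up
big_test table as above:

    import psycopg2

    conn = psycopg2.connect("dbname=test user=postgres")

    # A named cursor keeps the result set on the server; rows come over
    # the wire in batches of `itersize` instead of all 250,000 at once.
    cur = conn.cursor(name="browse_big_test")
    cur.itersize = 500
    cur.execute("SELECT id, payload FROM big_test ORDER BY id")

    for row in cur:
        pass  # hand only the currently visible rows to the grid widget

    cur.close()
    conn.close()

A plain LIMIT/OFFSET query per page works too; the point is simply that
the GUI never holds millions of cell objects at once.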
--
"cbbrowne","@","gmail.com"
http://linuxdatabases.info/info/nonrdbms.html
Rules of the Evil Overlord #10. "I will not interrogate my enemies in
the inner sanctum -- a small hotel well outside my borders will work
just as well." <http://www.eviloverlord.com/>

