> If you don't need that level of consistency for your 8MB blobs, write them
> to plain files named with some kind of id, and put the id in the database
> instead of the blob.
The problem here is that I later need to give an external client access to the database via TCP. That client will then read out and *wipe* those masses of data asynchronously, while I continue writing into the database.
Separating the data into an ID value (in the database) and ordinary binary files (on disk elsewhere) means that I would need to implement a separate TCP protocol and talk to the client whenever it needs to read or delete the data. I'm trying to avoid that extra task. So Postgres should also serve here as the communication channel, not just as a way of saving data to disk.
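For what it's worth, keeping the blobs in the database also lets the client read and wipe a batch in a single atomic statement, so no side channel between writer and reader is needed. A minimal sketch, assuming a hypothetical table `blobs(id bigserial primary key, data bytea)`:

```sql
-- Client side: fetch a batch of blobs and delete them atomically.
-- FOR UPDATE SKIP LOCKED (PostgreSQL 9.5+) keeps the reader from
-- blocking on rows the writer currently has locked.
DELETE FROM blobs
WHERE id IN (
    SELECT id FROM blobs
    ORDER BY id
    LIMIT 100
    FOR UPDATE SKIP LOCKED
)
RETURNING id, data;
```

If the DELETE transaction commits, the rows are gone; if the client crashes mid-read, it rolls back and the data stays put, which is exactly the consistency the file-on-disk approach would have to reimplement by hand.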