Re: autocommit problem
This WebDNA talk-list message is from 1998
It keeps the original formatting.
At 17:26 Uhr 24.07.1998, Sandra L. Pitner wrote:

>It's a huge file (100 Megs) and the last thing I want is for it to
>be written to disk often.
>...Is there a way to prevent this writing to disk...?

What you want to do is (in fact) excessive caching of large blocks of data. I do not recommend that; it is quite dangerous to hold a huge amount of changing live data just in memory for a long time...

Above a certain database size you cannot handle everything with software; you have to lay out a proper system including the hardware. WebCat is amazingly fast, but it is impossible to turn a PC into a Cray II simply by forbidding WebCat to store its data ;-)

Large databases are best handled with a RAID 5 system and effective caching. And you need a really fast machine with really fast transfer rates.

But let us concentrate on software. There are different approaches to handling big databases; it depends on the actions you want to perform. Analyze your needs!

Use record numbers (call it SKU, if you want ;-) in ANY database. Before you perform a search, think about [lookup...], which is much faster: if you know there is just one record that matches the criterion, [search...] means wasting CPU and time.

Example 1: You have many fields, but you search in only 3 fields and display lists with a maximum of 4 fields plus the search fields? Use numbered records, and make a separate database with only the record numbers and the 3+4 fields in question. If someone clicks on a list item, you identify it by the record number and show the detail with a super-fast [lookup...].

Example 2: You discover that you have a lot of duplicate keywords to search for: copy the unique keywords into a new database. A simple trick, but with [lookup...] you instantly know whether it makes sense to do a real search on the original data.

Example 3: If someone clicks on a list item and you have to make a complicated search in a huge database to find that record, it was your fault - use unique numbers to identify the records, and do not perform similar searches twice.

Example 4: Everything is slow because of so many records and ugly searches: think about splitting the db's - maybe you find some criterion to search just one of the databases. You can also split them by some common identifier like categories, and rethink the search system you present to the user.

Example 5: Nothing helps and you can't find a way to speed the thing up: you are probably tired, or you use the wrong program or the wrong machine; maybe you would be better off combining your database with an indexing search engine like UltraSeek - never tried it, but it could work...

Hope that helps; at least it does not hurt ;-)

Peter
__________________________________________
Peter Ostry - po@ostry.com - www.ostry.com
Ostry & Partner - Ostry Internet Solutions
Auhofstrasse 29 A-1130 Vienna Austria
fon ++43-1-8777454 fax ++43-1-8777454-21
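[Archive note: the record-number trick from example 1 can be sketched in modern terms. The sketch below uses Python rather than WebDNA, and all database and field names are invented for illustration; the point is the pattern, not the syntax. A slim side table is scanned for list results, and the full record is then fetched by record number in a single keyed lookup - the role [lookup...] plays in WebCat - instead of searching the big table again.]

```python
# Sketch of example 1: search a slim "list" table, then fetch the
# full record by its record number with a direct keyed lookup
# (the dict plays the role of WebDNA's [lookup...]).

# Full records with many fields (names invented for illustration)
full_db = {
    1001: {"sku": 1001, "title": "Widget", "price": 9.90,
           "vendor": "Acme", "stock": 12, "notes": "bulky"},
    1002: {"sku": 1002, "title": "Gadget", "price": 4.50,
           "vendor": "Acme", "stock": 0, "notes": ""},
}

# Slim side database: only the record number plus the few
# fields that are actually searched and listed
list_db = [
    {"sku": 1001, "title": "Widget", "price": 9.90},
    {"sku": 1002, "title": "Gadget", "price": 4.50},
]

def search_list(term):
    """Linear scan over the slim table only (cheap: few fields, small rows)."""
    return [row for row in list_db if term.lower() in row["title"].lower()]

def lookup_detail(sku):
    """Direct lookup by record number -- no scan of the big table."""
    return full_db[sku]

hits = search_list("wid")            # list page: scan the small table
detail = lookup_detail(hits[0]["sku"])  # detail page: keyed lookup
print(detail["vendor"])              # prints "Acme"
```

The same shape covers example 2: a side table of unique keywords lets you check with one lookup whether a full search of the original data is worth running at all.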
Associated Messages, from the most recent to the oldest:
Peter Ostry
Related Readings:
No luck with taxes (1997)
Uniqueness of [cart] - revisited (2004)
Cancel Subscription (1996)
GuestBook example (1997)
[LOOKUP] (1997)
WebCat2b13MacPlugIn - [include] (1997)
emailer setup (1997)
[OT] Checkboxes! Javascript? (2005)
WCS Newbie question (1997)
WebCatalog and WebMerchant reviewed by InfoWorld (1997)
FW: Username and password in tcp connect/send (2001)
CommandSecurity? (1997)
Showing unopened cart (1997)
Share cost of training videos! (1998)
# fields limited? (1997)
PIXO with cometsite ... and/or other plugins (1998)
Database Options (1997)
For those of you not on the WebCatalog Beta... (1997)
cart info (1998)
Blowback and budgets. (2000)