Re: autocommit problem

This WebDNA talk-list message is from 1998. It keeps the original formatting.
At 17:26 on 24.07.1998, Sandra L. Pitner wrote:
>It's a huge file (100 Megs) and the last thing I want is for it to
>be written to disk often.
>...Is there a way to prevent this writing to disk...?

What you want to do is, in fact, excessive caching of large blocks of data. I do not recommend that; it is quite dangerous to hold a huge amount of changing live data only in memory for a long time. Above a certain database size you cannot handle everything with software, you have to lay out a proper system including the hardware. WebCat is amazingly fast, but it is impossible to turn a PC into a Cray II simply by forbidding WebCat to store its data ;-) Large databases are best handled with a RAID 5 system and effective caching. And you need a really fast machine with really fast transfer rates.

But let us concentrate on software: there are different approaches to handling big databases, depending on the actions you want to perform. Analyze your needs!

Use record numbers (call it SKU, if you want ;-) in ANY database. Before you perform a search, consider [lookup...], which is much faster: if you know there is just one record that matches the criterion, [search...] means wasting CPU and time.

Example 1: You have many fields, but you search in only 3 of them and display lists with a maximum of 4 fields plus the search fields? Use numbered records and make a separate database with only the record numbers and the 3+4 fields in question. If someone clicks on a list item, you identify it by the record number and show the detail with a super-fast [lookup...].

Example 2: You discover that you have a lot of duplicate keywords to search for: copy the unique keywords into a new database. A simple trick, but with [lookup...] you instantly know whether it makes sense to do a real search on the original data.

Example 3: If someone clicks on a list item and you have to make a complicated search in a huge database to find that record, it was your fault - use unique numbers to identify the records, and do not perform similar searches twice.

Example 4: Everything is slow because of so many records and ugly searches: think about splitting the databases - maybe you find some criterion so that only one of them has to be searched. You can also split them by some common identifier like categories and rethink the search system you present to the user.

Example 5: Nothing helps and you can't find a way to speed the thing up: you are probably tired, or you use the wrong program or the wrong machine; maybe you would be better off combining your database with an indexing search engine like UltraSeek - never tried it, but it could work...

Hope that helps, at least it does not hurt ;-)

Peter
__________________________________________
Peter Ostry - po@ostry.com - www.ostry.com
Ostry & Partner - Ostry Internet Solutions
Auhofstrasse 29 A-1130 Vienna Austria
fon ++43-1-8777454 fax ++43-1-8777454-21
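To make the record-number idea concrete, here is a minimal WebDNA sketch of Example 1, under assumed names: a slim index database index.db (fields SKU, TITLE, KEYWORDS), the full 100 MB file as bigfile.db (with SKU, TITLE and PRICE among its fields), a keywords form field, and a detail page detail.tpl. None of these names come from the original thread, and the exact parameter spelling should be checked against your WebCatalog version.

  [!] List page: search only the slim index database, never the 100 MB file. [/!]
  [search db=index.db&wsKEYWORDSdatarq=[keywords]&max=25]
    [founditems]
      <a href="detail.tpl?sku=[SKU]">[TITLE]</a><br>
    [/founditems]
    Found [numfound] matching items.
  [/search]

  [!] Detail page: the SKU identifies exactly one record, so a [lookup] per field replaces a second search of the big file. [/!]
  Title: [lookup db=bigfile.db&lookInField=SKU&value=[sku]&returnField=TITLE&notFound=unknown]
  Price: [lookup db=bigfile.db&lookInField=SKU&value=[sku]&returnField=PRICE&notFound=0]

The point is the same as in the message above: the list page only ever touches the small index, and the detail page identifies its record by the unique number, so the huge database is read through a fast [lookup...] instead of a repeated [search...].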

Associated Messages, from the most recent to the oldest:
  1. Re: autocommit problem (Angel Bennett 1998)
  2. Re: autocommit problem (Angel Bennett 1998)
  3. Re: autocommit problem (Angel Bennett 1998)
  4. Re: autocommit problem (Angel Bennett 1998)
  5. Re: autocommit problem (Kenneth Grome 1998)
  6. Re: autocommit problem (PCS Technical Support 1998)
  7. Re: autocommit problem (Kenneth Grome 1998)
  8. Re: autocommit problem (Sandra L. Pitner 1998)
  9. Re: autocommit problem (Sandra L. Pitner 1998)
  10. Re: autocommit problem (Peter Ostry 1998)
  11. Re: autocommit problem (PCS Technical Support 1998)
  12. Re: autocommit problem (Sandra L. Pitner 1998)
  13. Re: autocommit problem (Kenneth Grome 1998)
  14. Re: autocommit problem (PCS Technical Support 1998)
  15. Re: autocommit problem (Gary Richter 1998)
  16. Re: autocommit problem (Sandra L. Pitner 1998)
  17. autocommit problem (Sandra L. Pitner 1998)


Related Readings:

No luck with taxes (1997) Uniqueness of [cart] - revisited (2004) Cancel Subscription (1996) GuestBook example (1997) [LOOKUP] (1997) WebCat2b13MacPlugIn - [include] (1997) emailer setup (1997) [OT] Checkboxes! Javascript? (2005) WCS Newbie question (1997) WebCatalog and WebMerchant reviewed by InfoWorld (1997) FW: Username and password in tcp connect/send (2001) CommandSecurity? (1997) Showing unopened cart (1997) Share cost of training videos! (1998) # fields limited? (1997) PIXO with cometsite ... and/or other plugins (1998) Database Options (1997) For those of you not on the WebCatalog Beta... (1997) cart info (1998) Blowback and budgets. (2000)