Re: autocommit problem

This WebDNA talk-list message is from 1998. It keeps the original formatting.
Actually the database has been working fine for months at 100 Megs as a static inventory.db (never changed by web access) for www.musicianstore.com, with over 200K SKUs. It loads up into RAM and keeps on ticking until I tell it to refresh. It can't really be split apart without making several virtual stores, because of price lookups, etc. Speed isn't a problem at all on a Mac G3 with a separate graphics server. I did find that narrowing searches with categories and subcategories helps speed.

What I'm trying to do now is keep dynamic records of the most popular items by counting how often people look at certain SKUs. I'd like to have searches based on the most popular sheet music for a certain style, etc. It would have been nice to have the count field in the same database as the subcategory, category, and other search fields; that way I could return the results in most-popular sort order for any given search criteria. Building another database with all the search fields and count data for all SKUs would probably take a non-negligible amount of RAM. I don't think WebCat can return results sorted on data in another database with minimal info (i.e., SKU and count), and the searches would get a lot more complicated with lookups, etc., for prices, descriptive info, and so on.

I could have lived with dynamic data only occasionally flushed to disk, because of the nature of the data (losing a couple of hours or a day of data wouldn't change the results much), plus the server seems rock solid (not to tempt fate here). Back to the drawing board on this one! Thanks for your thoughts!

Sandy

>At 17:26 Uhr 24.07.1998, Sandra L. Pitner wrote:
>
>>It's a huge file (100 Megs) and the last thing I want is for it to
>>be written to disk often.
>>...Is there a way to prevent this writing to disk...?
>
>What you want to do is (in fact) excessive caching of large blocks of data.
>I do not recommend that; it is quite dangerous to hold a huge amount of
>changing live data just in memory for a long time...
>Above a certain size of database you cannot handle everything with
>software; you have to lay out a proper system, including the hardware. WebCat
>is amazingly fast, but it is impossible to turn a PC into a Cray II by
>simply forbidding WebCat to store its data ;-)
>Large databases are best handled with a RAID 5 system and effective
>caching. And you need a really fast machine with really fast transfer rates.
>
>But let us concentrate on software:
>There are different approaches to handling big databases; it depends on the
>actions you want to perform. Analyze your needs!
>
>Use record numbers (call it SKU, if you want ;-) in ANY database.
>Before you perform a search, think about [lookup...], which is much faster:
>if you know there is just one record which matches the criterion,
>[search...] means wasting CPU and time.
>
>Example 1:
>You have many fields, but need to search only 3 fields and display lists with
>a maximum of 4 fields plus the search fields? Use numbered records: make a
>separate database with only the record numbers and the 3+4 fields in question.
>If someone clicks on a list item, you identify it by the record number and show
>the detail with a super-fast [lookup...].
>
>Example 2:
>You discover that you have a lot of duplicate keywords to search for: copy the
>unique keywords into a new database. A simple trick, but with [lookup...]
>you instantly know whether it makes sense to do a real search on the
>original data.
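
A rough WebDNA sketch of the [lookup...] shortcut and the slim index database from examples 1 and 2 (all names here, index.db, inventory.db, SKU, CATEGORY, TITLE, PRICE, are hypothetical, and the tag parameters assume WebCatalog 2.x syntax):

  [!] index.db is a slim copy: just SKU plus the few list/search fields [/!]
  [!] known-unique key: one [lookup] instead of a full [search] [/!]
  Price: [lookup db=inventory.db&lookInField=SKU&value=[sku]&returnField=PRICE&notFound=n/a]

  [!] the multi-record search runs against the slim index.db only [/!]
  [search db=index.db&eqCATEGORYdatarq=sheetmusic&max=20]
  [founditems]
  [TITLE]: [lookup db=inventory.db&lookInField=SKU&value=[SKU]&returnField=PRICE&notFound=n/a]
  [/founditems]
  [/search]

Each list row carries its SKU, so a click on an item needs only one more [lookup] into the big inventory.db instead of a second search.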
>
>Example 3:
>If someone clicks on a list item and you have to make a complicated search in a
>huge database to find that record, it was your fault: use unique numbers
>to identify the records, and do not perform similar searches twice.
>
>Example 4:
>Everything is slow because of so many records and ugly searches: think
>about splitting the db's; maybe you find some criterion to search just one
>of the databases. You can also split them by some common identifier, like
>categories, and rethink the search system you present to the user.
>
>Example 5:
>Nothing helps and you can't find a way to speed the thing up: you are probably
>tired, or you use the wrong program or the wrong machine; maybe you would be
>better off combining your database with an indexing search engine like
>UltraSeek - never tried it, but it could work...
>
>Hope that helps; at least it does not hurt ;-)
>Peter
>
>__________________________________________
>Peter Ostry - po@ostry.com - www.ostry.com
>Ostry & Partner - Ostry Internet Solutions
>Auhofstrasse 29 A-1130 Vienna Austria
>fon ++43-1-8777454 fax ++43-1-8777454-21
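
Sandy's hit-counting idea from the top of this message can be sketched the same way: keep the counts in a tiny, separate counts.db (hypothetical, two fields: SKU and COUNT) so the big read-only inventory.db never changes, and sort the popularity search on the small file. The [lookup], [append], [replace], and [showif]/[hideif] tags are standard WebCatalog; [flushdatabases], used here to commit RAM to disk on a schedule rather than on every change, is an assumption about the version in use.

  [!] [sku] is assumed to arrive as a parameter of the detail page [/!]
  [!] read the current count; "missing" if this SKU was never viewed [/!]
  [text]oldcount=[lookup db=counts.db&lookInField=SKU&value=[sku]&returnField=COUNT&notFound=missing][/text]

  [!] first view creates the record, later views bump the counter [/!]
  [showif [oldcount]=missing][append db=counts.db]SKU=[sku]&COUNT=1[/append][/showif]
  [hideif [oldcount]=missing][replace db=counts.db&eqSKUdatarq=[sku]]COUNT=[math][oldcount]+1[/math][/replace][/hideif]

  [!] most popular first: numeric descending sort on COUNT [/!]
  [!] (neSKUdatarq against a dummy value is the usual match-all trick) [/!]
  [search db=counts.db&neSKUdatarq=xyzzy&COUNTsort=1&COUNTsdir=de&COUNTstype=num&max=10]
  [founditems]
  [SKU]: [lookup db=inventory.db&lookInField=SKU&value=[SKU]&returnField=DESCRIPTION&notFound=?] ([COUNT] views)
  [/founditems]
  [/search]

  [!] assumption: run this from a scheduled template so the counts [/!]
  [!] reach disk only occasionally, as Sandy wanted [/!]
  [flushdatabases]

Because counts.db holds nothing but SKU and COUNT, the extra RAM Sandy worried about stays small, and an unflushed update lost in a crash only costs a few hit counts.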

Associated Messages, from the most recent to the oldest:
  1. Re: autocommit problem (Angel Bennett 1998)
  2. Re: autocommit problem (Angel Bennett 1998)
  3. Re: autocommit problem (Angel Bennett 1998)
  4. Re: autocommit problem (Angel Bennett 1998)
  5. Re: autocommit problem (Kenneth Grome 1998)
  6. Re: autocommit problem (PCS Technical Support 1998)
  7. Re: autocommit problem (Kenneth Grome 1998)
  8. Re: autocommit problem (Sandra L. Pitner 1998)
  9. Re: autocommit problem (Sandra L. Pitner 1998)
  10. Re: autocommit problem (Peter Ostry 1998)
  11. Re: autocommit problem (PCS Technical Support 1998)
  12. Re: autocommit problem (Sandra L. Pitner 1998)
  13. Re: autocommit problem (Kenneth Grome 1998)
  14. Re: autocommit problem (PCS Technical Support 1998)
  15. Re: autocommit problem (Gary Richter 1998)
  16. Re: autocommit problem (Sandra L. Pitner 1998)
  17. autocommit problem (Sandra L. Pitner 1998)


Related Readings:

WebCat2 beta 11 - new prefs ... (1997) Re[2]: Enhancement Request for WebCatalog-NT (1996) [applescript] (1999) Chat Area (2000) A little syntax help (1997) Signal Raised (1997) includes and cart numbers (1997) [date format] w/in sendmail (1997) taxTotal, too (1997) how to determine the actual file format of an image file?- done (2002) Diners Club card problems (1999) HTML encoding in URLs (1997) SSL and 2.1.1 .acgi (1998) New Zealand [OT - was Car Database] (2002) OK, here goes... (1997) Robert Minor duplicate mail (1997) Variable Prices (1998) Out of the woodwork (2007) # fields limited? (1997) Fwd: Image Pirating [protecting against] (2003)