Re: [WebDNA] Append speed limits

This WebDNA talk-list message is from 2008. It keeps the original formatting.
numero = 101425
interpreted = N
> There are tricks you can use to speed up the appends. For
> one, you could close the DB and use APPENDFILE instead.

Unless I'm missing something, it seems to me that append will be faster than appendfile, because append is performed in RAM while appendfile must write to the hard drive. This depends on the prefs, of course: if safewrite is on, WebDNA is going to write the data to disk every time anyway, so I will probably have to turn safewrite off to achieve the best performance.

But in my situation, if I use WebDNA I must also run periodic sorts to determine the specific position of certain records in the sorted db, and in this case it seems that opening and closing the db frequently is undesirable.

I haven't run my tests yet because one of my computers has been unavailable for the last few days. When it's available again I'll run some tests, but based on what the folks in another forum have been saying, I'm probably going to need something a whole lot more powerful than WebDNA -- maybe a custom C application running on a high-performance quad-core machine, using two or three cores for RAM data storage and one core for periodic snapshots. Others have suggested Erlang rather than C because it's designed specifically for high performance in a transparent manner across multiple cores.

The good thing is that I only need this capacity for about an hour at a time, then I can write the data to disk afterwards. This means no data gets written to disk during each hour-long 'event' -- it all gets stored in RAM, probably in a red-black tree to keep it sorted and provide the fastest insert/remove performance.

I was initially attracted to WebDNA as a possible tool for this task because of its direct RAM access. But it's not written for multi-core processors, and with 70+ million appends/replaces per hour I think I may need an app that's optimized for this specific task and takes full advantage of multiple cores.
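[Editor's note: the in-RAM store described above -- sorted inserts during the hour-long event, a single snapshot to disk afterwards -- can be sketched as follows. This is an illustrative stand-in, not WebDNA and not the poster's implementation: Python's standard library has no red-black tree, so `bisect` on a sorted list substitutes for the O(log n) structure (inserts here are O(n), which is fine for illustration only).]

```python
# Sketch of the event-time workflow: keep records sorted in RAM,
# answer "what position is this record at?" queries directly, and
# write to disk only once the event is over.
import bisect
import json

class SortedStore:
    def __init__(self):
        self.records = []  # kept sorted at all times, entirely in RAM

    def append(self, key):
        # Insert in sorted position -- no disk I/O during the event.
        bisect.insort(self.records, key)

    def rank(self, key):
        # The "periodic sorts to determine the specific position of
        # certain records" become unnecessary: the position is always
        # available because the structure stays sorted.
        return bisect.bisect_left(self.records, key)

    def snapshot(self, path):
        # One write to disk after the hour-long 'event' ends.
        with open(path, "w") as f:
            json.dump(self.records, f)

store = SortedStore()
for k in (42, 7, 99, 7):
    store.append(k)
print(store.records)   # [7, 7, 42, 99]
print(store.rank(42))  # 2
```

A real red-black tree (or a skip list) would keep inserts at O(log n) under the 70+ million appends/replaces per hour mentioned above; the interface would be the same.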
Sincerely,
Ken Grome

Associated Messages, from the most recent to the oldest:

    
  1. Re: [WebDNA] Append speed limits (christophe.billiottet@webdna.us 2008)
  2. Re: [WebDNA] Append speed limits (Kenneth Grome 2008)
  3. Re: [WebDNA] Append speed limits (christophe.billiottet@webdna.us 2008)
  4. Re: [WebDNA] Append speed limits (Kenneth Grome 2008)
  5. RE: [WebDNA] Append speed limits ("Olin Lagon" 2008)
  6. Re: [WebDNA] Append speed limits (Kenneth Grome 2008)
  7. Re: [WebDNA] Append speed limits (Brian Fries 2008)
  8. Re: [WebDNA] Append speed limits (Kenneth Grome 2008)
  9. Re: [WebDNA] Append speed limits (Stuart Tremain 2008)
  10. [WebDNA] Append speed limits (Kenneth Grome 2008)
