Re: [WebDNA] HTTP Streaming - POSSIBLE!

This WebDNA talk-list message is from 2010.
It keeps the original formatting.
numero = 105578
interpreted = N
texte = > Let's say a folder is created for each auction item.

There is only one auction item, so this part is easy ... :)

> Then every time a new bid comes in it writes a
> new sequential file.

We will have anywhere from 1000 to 5000 bidders submitting bids during the one-hour auction. My estimate of the average number of bids received per second is 55, which means 55 new files written per second "on average" ... but that is only an average. During the early part of the auction only one or two bids may come in each second, or perhaps only one or two each minute. It's near the end of the auction that I expect to receive thousands of bids per second.

> Now when a new bidder views a potential item it
> would search the number of files in that auction's
> directory and would keep the connection open with a
> waitforfile command until a new bid comes in.
> Then when the file is created it simply refreshes the
> page and waits for the next bid sequential file.

My concern here is that tens of thousands of "bid files" will be created during the one-hour auction, and WebDNA may not be able to count them fast enough to respond quickly, especially near the end of the auction when thousands of bidders are online and submitting new bids rapidly.

> This solves your server load issues because it doesn't
> demand packets to constantly be exchanged.

Yes, it solves the network part of the server load issue. But it burdens WebDNA with a task that invokes the OS to count files on disk -- and to write new files in rapid succession -- instead of keeping all this data in a RAM-cached WebDNA database. Because of all the disk access inherent in counting files and constantly writing new ones, I'm not convinced that WebDNA will keep up.

WebDNA is fast when writing to its own RAM-cached databases, since this is all done in memory. If this data is not flushed to disk until *after* the auction, there would be very few disk hits during the auction. But my gut tells me that writing every new bid to a separate file on disk, and then invoking the OS to count those files thousands of times per second (one count for each of the thousands of bidders who are online and waiting for the next file to be written), is just going to bog down the server and fail miserably.

I could be wrong of course, but I have enough doubts about this type of system to avoid proceeding with it -- at least until someone comes up with an argument that convinces me otherwise.

Sincerely,
Kenneth Grome
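For readers following the thread, here is a rough WebDNA-flavored sketch of the file-per-bid scheme quoted above: each incoming bid becomes its own file, and each waiting bidder holds a connection open until the next file appears, then reloads. The folder layout, the variable names, and especially the [waitforfile] parameters are illustrative guesses, not verified syntax.

  [!] newbid.tpl -- write one sequential file per incoming bid (names are examples) [/!]
  [writefile file=auctions/[itemID]/bid_[bidNumber].txt][bidderID],[amount],[date] [time][/writefile]

  [!] watch.tpl -- a waiting bidder's page: block until the next bid file exists, then reload [/!]
  [waitforfile auctions/[itemID]/bid_[nextBidNumber].txt]
  [redirect watch.tpl?itemID=[itemID]&nextBidNumber=[math][nextBidNumber]+1[/math]]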
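By contrast, a minimal sketch of the RAM-cached database approach Kenneth argues for, using only standard [append], [search], and [flushdatabases] tags; the database and field names are examples, not an existing schema.

  [!] bid.tpl -- record one bid in the RAM-cached database [/!]
  [append db=bids.db]item=[url][itemID][/url]&bidder=[url][bidderID][/url]&amount=[url][amount][/url]&stamp=[url][date] [time][/url][/append]

  [!] count.tpl -- how many bids so far for this item, answered entirely from memory [/!]
  [search db=bids.db&eqitemdatarq=[itemID]][numfound][/search]

  [!] close.tpl -- after the auction ends, commit the cached database to disk in one pass [/!]
  [flushdatabases]

Under this arrangement, bid writes and bid counts stay in memory during the auction; the explicit flush to disk happens once, after bidding closes.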

Associated Messages, from the most recent to the oldest:
  1. Re: [WebDNA] HTTP Streaming - POSSIBLE! (Kenneth Grome 2010)
  2. Re: [WebDNA] HTTP Streaming - POSSIBLE! (Scott Walters 2010)
  3. Re: [WebDNA] HTTP Streaming - POSSIBLE! (Kenneth Grome 2010)
  4. Re: [WebDNA] HTTP Streaming - POSSIBLE! (Kenneth Grome 2010)
  5. Re: [WebDNA] HTTP Streaming - POSSIBLE! (Scott Walters 2010)
  6. Re: [WebDNA] HTTP Streaming - POSSIBLE! (Christer Olsson 2010)
  7. Re: [WebDNA] HTTP Streaming - POSSIBLE! (Kenneth Grome 2010)
  8. RE: [WebDNA] HTTP Streaming - POSSIBLE! ("Olin Lagon" 2010)
  9. Re: [WebDNA] HTTP Streaming - POSSIBLE! (Scott Walters 2010)


Related Readings:

DreamWeaver (2002)
[WebDNA] Question on table search (2011)
Nested search (1997)
New Site Announcement (1998)
WebCat2 Append problem (B14Macacgi) (1997)
emailer w/F2 (1997)
Hiding HTML and page breaks (1997)
websitepro/webcat/registry? (1998)
WebCatalog for guestbook ? (1997)
Document Contains No Data! (1997)
Emailer compatibility..... (1998)
Normalizing Dates and Phone numbers (2000)
Off Topic: Frames Killer? (1998)
RE: Formulas.db + Users.db (1997)
old cart file deletion (2000)
Postdata expired from cache (2004)
no global [username] or [password] displayed ... (1997)
Trouble with formula.db (1997)
expansion domain freak out (2003)
[Sum] function? (1997)