Re: Caching pages...again
This WebDNA talk-list message is from 2001
It keeps the original formatting.
numero = 36927
interpreted = N
texte = I'm really not sure what this discussion thread's main issue is, but I use the following meta tag to force the browser to load the page from my server and not from the cache.
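For illustration only (the exact tag is not reproduced in the quote above), a meta tag commonly used for this purpose looks like:

```html
<!-- Ask the browser not to serve this page from its local cache.
     Note: meta tags only influence the browser itself; intermediate
     proxy caches generally ignore them, which is why real HTTP
     response headers are the more reliable mechanism. -->
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="0">
```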
-- Anup Setty

----- Original Message -----
From: John Peacock
To: WebCatalog Talk
Sent: Tuesday, July 03, 2001 9:47 AM
Subject: Re: Caching pages...again
> Glenn Busbin wrote:
> >
> > >This has nothing to do with caching.
> >
> > Google says it sure has to do with caching on proxy servers.
>
> ROBOTS != caching servers
>
> Most HTTP caching servers operate in transparent mode; by this I mean
> that the client requests a page through the server and the server sees
> whether it has that page cached. If you are unlucky (or gullible)
> enough to use AOL, all of your accesses are through a caching server.
>
> Robots or spiders, on the other hand, do not (as a rule) keep a cache
> of the pages they visit (think of the storage requirements). They
> only apply whatever indexing algorithm they use to the page and store
> the indexed information for later searching.
>
> Google, unlike most search engines, does keep a copy of the original
> page available for viewing _when the original page is no longer
> available_. Here is a representative search result from Google:
>
> University Press of America, Inc.: Catalog/Advanced Search
> Click here for details on Web Discount. University Press of America,
> Inc. Catalog / Advanced Search. Click Here for Search Instructions ...
> www.univpress.com/Catalog/ - 18k - Cached - Similar pages
> ------
>
> If you click on the first line, you get the actual site; if you
> click on the hyperlink (underlined) labeled Cached, you get a copy of
> the page as it was when the spider walked the site.
>
> This is not a feature of caching proxy servers at all; this is a
> feature of Google specifically. QED
>
> > >Use MIME headers correctly if you don't want any caching done.
> >
> > Will all servers obey MIME headers for cache rules?
>
> The combination of the two headers I described in the other thread is
> the only way _I_ know of to consistently defeat (or correctly apply)
> caching proxy services.
> If you do not use both, you are letting
> yourself in for whatever heuristic the caching server uses to determine
> whether the page should be fresh or cached.
>
> John
>
> --
> John Peacock
> Director of Information Research and Technology
> Rowman & Littlefield Publishing Group
> 4720 Boston Way
> Lanham, MD 20706
> 301-459-3366 x.5010
> fax 301-429-5747

-------------------------------------------------------------
This message is sent to you because you are subscribed to the mailing list.
To unsubscribe, E-mail to:
To switch to the DIGEST mode, E-mail to:
Web Archive of this list is at: http://search.smithmicro.com/
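John does not repeat the two headers here, but the belt-and-braces combination in common use for defeating caches pairs an `Expires` date in the past with `Pragma: no-cache` (plus `Cache-Control` for HTTP/1.1 caches). A minimal sketch of emitting such headers — the values shown are standard practice, not quoted from John's earlier post:

```python
import time
from wsgiref.handlers import format_date_time  # RFC-compliant HTTP dates


def no_cache_headers():
    """Response headers telling browsers and proxy caches to always
    fetch a fresh copy rather than reuse a stored one."""
    return [
        # Understood by HTTP/1.1 caches (and most modern proxies):
        ("Cache-Control", "no-cache, no-store, must-revalidate"),
        # Older HTTP/1.0 proxies only honor Pragma:
        ("Pragma", "no-cache"),
        # An Expires date in the past marks the page as already stale:
        ("Expires", format_date_time(time.time() - 86400)),
    ]


for name, value in no_cache_headers():
    print(f"{name}: {value}")
```

Sending all three covers both generations of caching software, which is exactly why relying on a single header leaves you at the mercy of each cache's freshness heuristic.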
Associated Messages, from the most recent to the oldest:
Anup Setty
Talk List
The WebDNA community talk-list is the best place to get help: several hundred highly proficient programmers with deep knowledge of WebDNA and a generous spirit will deliver all the tips and tricks you can imagine...
Related Readings:
Re:no [search] with NT (1997)
calculate age (2003)
Assign Variable Value (1998)
[WebDNA] Upgrade pricing (2008)
WebCat2b13MacPlugIn - More limits on [include] (1997)
WebCat2b13MacPlugIn - [include] doesn't allow creator (1997)
merchant accts. (1997)
Repeating Fields (1997)
Cart Transfer from Un-Secure to Secure (2000)
can WC render sites out? (1997)
Multiple Hideif peramiters (2001)
all records returned. (1997)
[/application] error? (1997)
price formula (1999)
RE: [WebDNA] Poll: Cloud hosting (2016)
Need relative path explanation (1997)
[lookup] is case-sensitive, [lookup] is case sensitive... (2003)
[WebDNA] need decoded url in sendmail (2015)
Redirect and passing more than one variable... (2002)
Checkboxes and neSKUdata=[blank] (1998)