Re: [WebDNA] Pretty URLS

This WebDNA talk-list message is from 2011.
It keeps the original formatting.
numero = 106577
interpreted = N
texte =

Hey all,

This is my first post but I've been a longtime follower.

Have you considered using the .htaccess file and creating rewrite
rules? We have recently started experimenting with this, and as far
as I can tell the only downside is that it requires a good
understanding of regular expressions to set up.

From .htaccess:

RewriteEngine on
RewriteRule ^info/([a-z\-]+)$ info/$1/ [NC]
RewriteRule ^info/([^/]+)/ info/pages.html?page=$1 [NC]

What it does: takes our pretty URL

http://www.knifecenter.com/info/who-we-are

then rewrites it and makes this request from the server:

http://www.knifecenter.com/info/pages.html?page=who-we-are

and returns the result to the user, none the wiser about what went on
behind the scenes.

pages.html is used as a template for informational pages. It provides
a framework and uses the value of the variable [page] to populate the
contents, pulling in [page].html as an include. Basically:

[include file=header.html]
[include file=/info/[page].html]
[include file=footer.html]
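One caveat with this pattern: the value of [page] comes straight from
the query string, so [include file=/info/[page].html] will pull in
whatever file a visitor names. One way to guard it is to resolve
[page] against a whitelist first. A minimal, untested sketch;
pages.db and its page/file fields are hypothetical, not part of the
actual setup described above:

[!] look the requested page up in a whitelist database; [/!]
[!] unknown pages fall back to default.html [/!]
[text]pagefile=[lookup db=/info/pages.db&lookInField=page&value=[page]&returnField=file&notFound=default.html][/text]
[include file=header.html]
[include file=/info/[pagefile]]
[include file=footer.html]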
Regards,
Daniel Meola
daniel@knifecenter.com
301-486-0901

On Wed, Apr 27, 2011 at 11:21 AM, Brian B. Burton <brian@burtons.com> wrote:
> the website in question sells replacement parts, so skus = a couple
> thousand. Things they fit into = a couple hundred thousand. Oh, and
> people google search for the thing the parts fit into. So I'd have
> a few hundred thousand static pages, which, although automated, is
> still quite messy.
>
> On Apr 27, 2011, at 10:13 AM, Kenneth Grome wrote:
>
> > This looks like a PITA to create and manage. I guess I've
> > outgrown the desire to complicate things just because I can.
> > Why not just use a daily script to generate a folder full of
> > static pages, one for each sku, and be done with it?
> >
> > Sincerely,
> > Kenneth Grome
> >
> >> What I think I'd like to do is tie into the page-not-found
> >> system, i.e. have the server send my 404 requests to URLs.tpl
> >> instead of error.html.
> >>
> >> That way all pages act as they do currently, BUT any "pretty"
> >> URL gets a "not found" and is rerouted to URLs.tpl.
> >>
> >> And inside that file I want to do something like this (yes,
> >> this is 100% wrong, just typing out loud here):
> >>
> >> [showif [url][thispage][/url]=[grep]('notebook_battery/$')[/grep]]
> >>    [include file=alphamfg.tpl&_CID=2][/showif]
> >> [showif [url][thispage][/url]=[grep]('notebook_battery/(?P<MFG>\w+)/$')[/grep]]
> >>    [include file=pickmodel&_CID=2&_MFG=[MFG]][/showif]
> >> [showif [url][thispage][/url]=[grep]('notebook_battery/(?P<MFG>[^/]+)/(?P<FID>\d+).*')[/grep]]
> >>    [include file=modelinfo&_CID=2&_MFG=[MFG]&_FID=[FID]][/showif]
> >>
> >> (and have a last rule that actually redirects to a 404
> >> page...)
> >>
> >> I want to figure out how to use an include so that I'm
> >> specifically not rewriting and redirecting the URL (thus
> >> making it ugly).
> >>
> >> Anyone currently doing anything like this?
> >> If I can figure out how to do it, is anyone else
> >> interested in the code?
> >>
> >> Brian B. Burton
> >> brian@burtons.com
> >>
> >> =================================
> >> time is precious. waste it wisely
> >> =================================
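For what it's worth, Brian's router could also dispatch without regex
captures at all, by splitting the requested path on slashes and
branching on how many pieces turn up. A rough, untested sketch: it
assumes [thispage] still holds the originally requested path inside
the 404 handler, that [listwords] accepts a delimiters parameter,
that leading slashes are not counted as empty words, and that the
empty-string comparisons ([showif [fid]!], [showif [fid]=]) test
non-empty/empty as expected:

[!] URLs.tpl: route /notebook_battery/<mfg>/<fid> by path depth [/!]
[!] a real version would first confirm the path starts with [/!]
[!] notebook_battery, and fall through to a true 404 otherwise [/!]
[text]mfg=[/text]
[text]fid=[/text]
[listwords words=[thispage]&delimiters=/]
[showif [index]=2][text]mfg=[word][/text][/showif]
[showif [index]=3][text]fid=[word][/text][/showif]
[/listwords]
[showif [fid]!][include file=modelinfo&_CID=2&_MFG=[mfg]&_FID=[fid]][/showif]
[showif [fid]=][showif [mfg]!][include file=pickmodel&_CID=2&_MFG=[mfg]][/showif][/showif]
[showif [mfg]=][include file=alphamfg.tpl&_CID=2][/showif]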

> >>
> >> On Apr 27, 2011, at 6:22 AM, William DeVaul wrote:
> >>> On Tue, Apr 26, 2011 at 11:38 PM, Kenneth Grome <kengrome@gmail.com> wrote:
> >>>>> I'm not sure why you'd leave the humans ugly URLs.
> >>>>
> >>>> Because those URLs are the default URLs for WebDNA.
> >>>
> >>> I thought the parameterized URLs were a convention that
> >>> came about in the early days of the Internet. Seems
> >>> the convention is ripe for change.
> >>>
> >>> In general, I'm for programmer convenience versus
> >>> optimization for the computer. But I'd put user
> >>> convenience above the programmer's. In some
> >>> frameworks the default is "prettier", to the benefit of
> >>> users and programmers.
> >>>
> >>>>> The search engines like keywords.
> >>>>
> >>>> They get plenty of keywords in the static pages.
> >>>
> >>> I think it is about the quality of the keyword placement
> >>> (in incoming links, in the domain, in the URLs, in
> >>> "important" tags, e.g. <h1>).

---------------------------------------------------------
This message is sent to you because you are subscribed to
the mailing list <talk@webdna.us>.
To unsubscribe, E-mail to: <talk-leave@webdna.us>
archives: http://mail.webdna.us/list/talk@webdna.us
Bug Reporting: support@webdna.us

--
Daniel Meola
301-486-0901
daniel@knifecenter.com
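Tying the two ideas in this thread together: Brian's three URL
patterns could also be handled entirely with rewrite rules, in the
style of Daniel's .htaccess above, skipping the 404 handler. A rough,
untested sketch; the .tpl extensions on pickmodel and modelinfo are
guesses, and the _CID/_MFG/_FID parameters are lifted from Brian's
pseudocode:

RewriteEngine on
# manufacturer list: /notebook_battery/
RewriteRule ^notebook_battery/$ alphamfg.tpl?_CID=2 [NC,L]
# model picker: /notebook_battery/<mfg>/
RewriteRule ^notebook_battery/([^/]+)/$ pickmodel.tpl?_CID=2&_MFG=$1 [NC,L]
# model page: /notebook_battery/<mfg>/<fid>...
RewriteRule ^notebook_battery/([^/]+)/(\d+) modelinfo.tpl?_CID=2&_MFG=$1&_FID=$2 [NC,L]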
Associated Messages, from the most recent to the oldest:

    
  1. Re: [WebDNA] Pretty URLS (William DeVaul 2011)
  2. Re: [WebDNA] Pretty URLS (sgbc cebu 2011)
  3. Re: [WebDNA] Pretty URLS (Tom Duke 2011)
  4. Re: [WebDNA] Pretty URLS (William DeVaul 2011)
  5. Re: [WebDNA] Pretty URLS (Stuart Tremain 2011)
  6. Re: [WebDNA] Pretty URLS (Kenneth Grome 2011)
  7. Re: [WebDNA] Pretty URLS (Brian Fries 2011)
  8. Re: [WebDNA] Pretty URLS (Kenneth Grome 2011)
  9. Re: [WebDNA] Pretty URLS (Govinda 2011)
  10. Re: [WebDNA] Pretty URLS (Daniel Meola 2011)
  11. Re: [WebDNA] Pretty URLS ("Brian B. Burton" 2011)
  12. Re: [WebDNA] Pretty URLS (Kenneth Grome 2011)
  13. Re: [WebDNA] Pretty URLS ("Brian B. Burton" 2011)
  14. Re: [WebDNA] Pretty URLS (William DeVaul 2011)
  15. Re: [WebDNA] Pretty URLS (Kenneth Grome 2011)
  16. Re: [WebDNA] Pretty URLS (William DeVaul 2011)
  17. Re: [WebDNA] Pretty URLS (Govinda 2011)
  18. Re: [WebDNA] Pretty URLS (Kenneth Grome 2011)
  19. Re: [WebDNA] Pretty URLS (William DeVaul 2011)
  20. Re: [WebDNA] Pretty URLS (Stuart Tremain 2011)
  21. [WebDNA] Pretty URLS ("Brian B. Burton" 2011)

