Re: Dealing with return characters

This WebDNA talk-list message is from 2004. It keeps the original formatting.
Great cartoon! I'm almost embarrassed to show you what worked:

[text]stuff=[include file=db/illinois.txt][/text]
[text]stuff=[unurl][grep search=\%\0\D\%\0\A\%\0\D\%\0\A&replace=NEWRECORD][url][stuff][/url][/grep][/unurl][/text]
[text]stuff=[unurl][grep search=\%\0\D\%\0\A&replace=%09][url][stuff][/url][/grep][/unurl][/text]
[text]stuff=[unurl][grep search=NEWRECORD\%\0\9&replace=%0A%0D][url][stuff][/url][/grep][/unurl][/text]
[text]stuff=[unurl][grep search=NEWRECORD&replace=][url][stuff][/url][/grep][/unurl][/text]
[writefile file=temp.db][stuff][/writefile]

On Jul 9, 2004, at 2:54 PM, Brian B. Burton wrote:

> Depends on the line endings of your file. Did the text file come from
> a PC (Windows/DOS), a Unix box, or a Mac (probably not, would be my
> guess)?
>
> The code should clean up Windows double line feeds.
>
> Anyway, try substituting one of these three lines for the first
> line, and see if it makes a difference.
>
> [text]stuff=[grep search=%0A%0D%0A%0D&replace=NEWRECORD][stuff][/grep][/text]
> [text]stuff=[grep search=%0A%0A&replace=NEWRECORD][stuff][/grep][/text]
> [text]stuff=[grep search=%0D%0D&replace=NEWRECORD][stuff][/grep][/text]
>
> Brian B. Burton
>
> On Jul 9, 2004, at 2:43 PM, Patrick McCormick wrote:
>
>> Spoke too soon.
>>
>> Grep seems only to replace the first occurrence and none of the rest.
>> Any reason for this?
>>
>> On Jul 9, 2004, at 2:02 PM, Brian B. Burton wrote:
>>
>>> Here's the code you need, based on a thingy I made to clean up that
>>> excuse MS calls a tab-delimited text file that Excel produces.
>>>
>>> [text]stuff=[include file=startfile][/text]
>>>
>>> [text]stuff=[grep search=%0A%0D%0A%0D&replace=NEWRECORD][stuff][/grep][/text]
>>> [text]stuff=[grep search=%0A%0D&replace=%09][stuff][/grep][/text]
>>> [text]stuff=[grep search=NEWRECORD&replace=%0A%0D][stuff][/grep][/text]
>>>
>>> [writefile file=output.db&secure=F][stuff][/writefile]
>>>
>>> Brian B. Burton
>>> Burton Logistics
>>> 973-263-3036  973-296-6862 (cell)
>>> Specializing in website design and development to make your
>>> customers exclaim:
>>> "Out of all the websites I visit, yours is the easiest to use!"
>>>
>>> On Jul 9, 2004, at 1:50 PM, Patrick McCormick wrote:
>>>
>>>> I need to populate a db from text files. The text file has a return
>>>> character between what will be each field and two returns between
>>>> what will become records:
>>>>
>>>> Name1
>>>> Addr1
>>>> City1, St1, Zip1
>>>>
>>>> Name2
>>>> Addr2
>>>> City2, St2, Zip2
>>>>
>>>> .
>>>> .
>>>> .
>>>>
>>>> I can convert all the returns to tabs or something else, but I need
>>>> to distinguish between single and double returns to maintain
>>>> individual records. What's the easiest way?
>>>>
>>>> Thanks,
>>>> Pat

Patrick McCormick
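For readers who want to check the logic outside WebDNA, here is a minimal Python sketch of the same placeholder trick the thread arrives at: protect the double return first, convert the remaining single returns to tabs, then turn the marker back into a line ending. The function name, the sample data, and the CRLF/CR normalization step are assumptions added for illustration; they are not part of the original posts.

# A minimal Python sketch (not WebDNA) of the placeholder trick used above.
def returns_to_tab_delimited(raw: str) -> str:
    # Normalize Windows (CRLF) and old Mac (CR) line endings to plain LF,
    # so the single-vs-double distinction is made on one kind of newline.
    text = raw.replace("\r\n", "\n").replace("\r", "\n")
    # 1. Mark the blank line between records before touching single newlines.
    text = text.replace("\n\n", "NEWRECORD")
    # 2. Remaining newlines separate fields within a record: make them tabs.
    text = text.replace("\n", "\t")
    # 3. Restore the record marker as a real line ending, one record per line.
    return text.replace("NEWRECORD", "\n")

if __name__ == "__main__":
    sample = "Name1\nAddr1\nCity1, St1, Zip1\n\nName2\nAddr2\nCity2, St2, Zip2"
    print(returns_to_tab_delimited(sample))
    # -> "Name1\tAddr1\tCity1, St1, Zip1\nName2\tAddr2\tCity2, St2, Zip2"

The ordering is what makes the three [grep] lines in the thread work: if single returns were converted to tabs first, the double-return boundary between records would be destroyed before it could be marked.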
Associated Messages, from the most recent to the oldest:

    
  1. Re: Dealing with return characters ( Brian B. Burton 2004)
  2. Re: Dealing with return characters ( Patrick McCormick 2004)
  3. Re: Dealing with return characters ( Larry Hewitt 2004)
  4. Re: Dealing with return characters ( Donovan Brooke 2004)
  5. Re: Dealing with return characters ( Patrick McCormick 2004)
  6. Re: Dealing with return characters ( Brian B. Burton 2004)
  7. Re: Dealing with return characters ( Brian B. Burton 2004)
  8. Re: Dealing with return characters ( Patrick McCormick 2004)
  9. Re: Dealing with return characters ( Patrick McCormick 2004)


Top Articles:

Talk List

The WebDNA community talk-list is the best place to get help: several hundred highly proficient programmers, with deep knowledge of WebDNA and a generous spirit, will share all the tips and tricks you can imagine...

Related Readings:

Webcat/javascript interactive pulldowns Q (2002)
Bug Report, maybe (1997)
BBEdit and WebCatalog 2.0? (1997)
PROBLEMS WITH WEBCAT LINUX (2000)
WebDNA-Talk Digest mode broken (1997)
WebDNA5 & Tiger (2006)
Snake Bites (1997)
(More) Question about searching combination of fields (2003)
Nested search (1997)
Where's Cart Created ? (1997)
Frames and WebCat (1997)
Emailer [cart] file names (1997)
Re:2nd WebCatalog2 Feature Request (1996)
Bug Report, maybe (1997)
[WebDNA] Secure Cookies (2009)
apparently to problem isolated (1997)
SiteEdit NewFile.html ? (1997)
apparently to problem isolated (1997)
Thanks for tips, more quest (1997)
AOL (1999)