# PaCkAgE DaTaStReAm
GNUwget 1 1806
# end of header

[SVR4 package datastream for GNU wget 1.5.3, packaged by Steve Christensen for Solaris/x86. The mixed-case magic line above is literal; the second line gives the package name, the number of parts (1), and the maximum part size in 512-byte blocks (1806). The header archive carries two members, GNUwget/pkginfo and GNUwget/pkgmap, framed as "newc" (070701) cpio entries; their text, flattened in the raw dump, is restored below.]

GNUwget/pkginfo:

PKG=GNUwget
NAME=wget
ARCH=i86pc
VERSION=1.5.3
CATEGORY=application
VENDOR=Free Software Foundation
EMAIL=steve@smc.vnet.net
PSTAMP=Steve Christensen
BASEDIR=/usr/local
CLASSES=none

GNUwget/pkgmap (one entry per line: part, ftype, class, path, mode, owner, group, then size, checksum, and modification time for regular files; the "i" information-file entry carries only size, checksum, and modification time):

: 1 1806
1 d none bin 0755 bin bin
1 f none bin/wget 0755 bin bin 136524 17141 925888989
1 d none doc 0755 bin bin
1 d none doc/wget 0755 bin bin
1 f none doc/wget/AUTHORS 0644 bin bin 500 44221 925889003
1 f none doc/wget/COPYING 0644 bin bin 17976 27646 925889003
1 f none doc/wget/ChangeLog 0644 bin bin 6085 34629 925889012
1 f none doc/wget/INSTALL 0644 bin bin 3118 3266 925889003
1 f none doc/wget/MACHINES 0644 bin bin 676 54250 925889003
1 f none doc/wget/MAILING-LIST 0644 bin bin 654 53710 925889003
1 f none doc/wget/Makefile 0644 bin bin 3823 46883 925889012
1 f none doc/wget/Makefile.in 0644 bin bin 3670 32964 925889012
1 f none doc/wget/NEWS 0644 bin bin 7194 32302 925889003
1 f none doc/wget/README 0644 bin bin 3529 48393 925889003
1 f none doc/wget/TODO 0644 bin bin 2500 17925 925889003
1 f none doc/wget/ansi2knr.1 0644 bin bin 910 15052 925889012
1 f none doc/wget/sample.wgetrc 0644 bin bin 3313 26425 925889012
1 f none doc/wget/texinfo.tex 0644 bin bin 169682 15581 925889012
1 f none doc/wget/wget.info 0644 bin bin 2500 15530 925889012
1 f none doc/wget/wget.info-1 0644 bin bin 50818 33296 925889012
1 f none doc/wget/wget.info-2 0644 bin bin 38612 44172 925889012
1 f none doc/wget/wget.info-3 0644 bin bin 28100 16071 925889012
1 f none doc/wget/wget.texi 0644 bin bin 113001 34990 925889012
1 d none etc 0755 bin bin
1 f none etc/wgetrc 0644 bin bin 3313 26425 925888990
1 d none info 0755 bin bin
1 f none info/wget.info 0644 bin bin 2500 15530 925888989
1 f none info/wget.info-1 0644 bin bin 50818 33296 925888989
1 f none info/wget.info-2 0644 bin bin 38612 44172 925888990
1 f none info/wget.info-3 0644 bin bin 28100 16071 925888990
1 i pkginfo 182 15191 925889061
1 d none share 0755 bin bin
1 d none share/locale 0755 bin bin
1 d none share/locale/cs 0755 bin bin
1 d none share/locale/cs/LC_MESSAGES 0755 bin bin
1 f none share/locale/cs/LC_MESSAGES/wget.mo 0644 bin bin 22385 32565 925888991
1 d none share/locale/de 0755 bin bin
1 d none share/locale/de/LC_MESSAGES 0755 bin bin
1 f none share/locale/de/LC_MESSAGES/wget.mo 0644 bin bin 22919 36530 925888991
1 d none share/locale/hr 0755 bin bin
1 d none share/locale/hr/LC_MESSAGES 0755 bin bin
1 f none share/locale/hr/LC_MESSAGES/wget.mo 0644 bin bin 21666 50544 925888991
1 d none share/locale/it 0755 bin bin
1 d none share/locale/it/LC_MESSAGES 0755 bin bin
1 f none share/locale/it/LC_MESSAGES/wget.mo 0644 bin bin 22670 51363 925888992
1 d none share/locale/no 0755 bin bin
1 d none share/locale/no/LC_MESSAGES 0755 bin bin
1 f none share/locale/no/LC_MESSAGES/wget.mo 0644 bin bin 21485 45883 925888991
1 d none share/locale/pt_BR 0755 bin bin
1 d none share/locale/pt_BR/LC_MESSAGES 0755 bin bin
1 f none share/locale/pt_BR/LC_MESSAGES/wget.mo 0644 bin bin 21537 2425 925888992
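On a Solaris host this datastream installs directly with "pkgadd -d <stream-file> GNUwget", and "pkgtrans <stream-file> <dir> GNUwget" unpacks it into filesystem format; both tools consume exactly the pkginfo and pkgmap metadata above, and pkgchk later re-verifies installed files against the size/checksum/mtime triples recorded in pkgmap. To illustrate what those columns encode, here is a minimal Python sketch that parses an "f"-type pkgmap entry and re-checks a local file against it. It assumes the cksum column is the System V sum(1) checksum (my reading of the pkgmap format, not something this dump states), and the function names are mine, not part of any Solaris API.

    # Sketch: parse an "f"-type pkgmap(4) entry and verify a local file against
    # its recorded size and checksum. Assumption: the cksum column uses the
    # System V sum(1) algorithm (byte sum folded with end-around carry).
    # Field layout for "f" entries:
    #   part ftype class path mode owner group size cksum modtime

    def sysv_sum(data: bytes) -> int:
        """System V sum(1)-style 16-bit checksum."""
        s = sum(data)
        r = (s & 0xFFFF) + ((s & 0xFFFFFFFF) >> 16)
        return (r & 0xFFFF) + (r >> 16)

    def parse_pkgmap_file_entry(line: str) -> dict:
        part, ftype, cls, path, mode, owner, group, size, cksum, mtime = line.split()
        assert ftype == "f", "this sketch only handles regular-file entries"
        return {"path": path, "mode": int(mode, 8), "owner": owner,
                "group": group, "size": int(size), "cksum": int(cksum),
                "mtime": int(mtime)}

    def verify(entry: dict, basedir: str = "/usr/local") -> bool:
        """Check an installed file (relative to BASEDIR) against its record."""
        with open(f"{basedir}/{entry['path']}", "rb") as fh:
            data = fh.read()
        return len(data) == entry["size"] and sysv_sum(data) == entry["cksum"]

    # Example, using the bin/wget line from the pkgmap above:
    entry = parse_pkgmap_file_entry(
        "1 f none bin/wget 0755 bin bin 136524 17141 925888989")
    print(entry["path"], entry["size"], entry["cksum"])  # bin/wget 136524 17141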
[Archive body. Its first two members are pkginfo and pkgmap, byte-for-byte repeats of the header copies restored above, so they are not reproduced again. The relocatable tree follows:]

reloc/
reloc/bin/
reloc/bin/wget

[reloc/bin/wget is a 32-bit i386 ELF executable, 136524 bytes, interpreter /usr/lib/ld.so.1, dynamically linked against libsocket.so.1 (SUNW_0.7), libnsl.so.1 (SUNW_0.7), and libc.so.1 (SUNW_1.1). Its machine code is not representable as text; the legible fragments in this region of the dump are the dynamic and debug symbol tables, wget's diagnostic and --help message strings, the getopt long-option table, what appears to be the RFC 1760 S/Key word list used by the OPIE FTP-authentication code, and toolchain stamps: "GCC: (GNU) egcs-2.91.66 19990314 (egcs-1.1.2 release)", "as: WorkShop Compilers 4.2 alpha 14 Jun 1996", "ld: Software Generation Utilities - Solaris/ELF (3.0)", target "solaris2.7".]
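Every member of the stream, the metadata copies and each reloc/ path alike, is framed by one of the 070701... runs visible throughout the raw dump: an SVR4 "newc" ASCII cpio header. As a decoding aid, here is a minimal Python sketch, assuming the standard newc layout of a 6-byte magic followed by thirteen 8-digit hexadecimal fields and a NUL-terminated member name; the field names follow common cpio documentation, and parse_newc_header is a name of my own.

    # Sketch: decode one SVR4 "newc" cpio header (a 070701... run in this dump).
    # Layout assumed: 6-byte magic, thirteen 8-digit ASCII-hex fields, then the
    # NUL-terminated member name.

    NEWC_FIELDS = ("ino", "mode", "uid", "gid", "nlink", "mtime", "filesize",
                   "devmajor", "devminor", "rdevmajor", "rdevminor",
                   "namesize", "check")

    def parse_newc_header(raw: bytes) -> dict:
        assert raw[:6] == b"070701", "not a newc cpio header"
        fields = {name: int(raw[6 + 8 * i:14 + 8 * i], 16)
                  for i, name in enumerate(NEWC_FIELDS)}
        name_start = 6 + 8 * len(NEWC_FIELDS)          # fixed header is 110 bytes
        name_end = name_start + fields["namesize"] - 1  # namesize counts the NUL
        return {"name": raw[name_start:name_end].decode(), **fields}

    # Example: the header that introduces reloc/bin/wget in this stream.
    hdr = (b"07070100056598000081ed000000020000000200000001372ff1dd0002154c"
           b"000000660000004500000000000000000000000f00000004reloc/bin/wget\0")
    info = parse_newc_header(hdr)
    print(info["name"], oct(info["mode"]), info["filesize"])
    # -> reloc/bin/wget 0o100755 136524

The recovered mode (0755, regular file) and size (136524 bytes) agree with the bin/wget line in the pkgmap above, a handy consistency check when carving members out of a damaged stream.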
[The binary member ends with further assembler/compiler stamps, the linker identification, and the ELF section-name table. The documentation tree follows:]

reloc/doc/
reloc/doc/wget/

reloc/doc/wget/AUTHORS (500 bytes):

Authors of GNU Wget.

[ Note that this file does not attempt to list all the contributors to Wget; look at the ChangeLog for that. This is a list of people who contributed sizeable amounts of code and assigned the copyright to the FSF. ]

Hrvoje Niksic.  Designed and implemented Wget.

Gordon Matzigkeit.  Wrote netrc.c and netrc.h.

Darko Budor.  Added Windows support, wrote wsstartup.c, wsstartup.h and windecl.h.

Junio Hamano.  Added support for FTP Opie and HTTP digest authentication.

reloc/doc/wget/COPYING (17976 bytes):

GNU GENERAL PUBLIC LICENSE
Version 2, June 1991

Copyright (C) 1989, 1991 Free Software Foundation, Inc.
675 Mass Ave, Cambridge, MA 02139, USA
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

Preamble

The licenses for most software are designed to take away your freedom to share and change it.  By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users.  This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it.  (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.)
You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.

To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.

For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.

Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.

Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.

The precise terms and conditions for copying, distribution and modification follow.

GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you".

Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does.

1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program.

You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.

2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:

a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change.

b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License.

c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.)

These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.

Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program.

In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.

3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following:

a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

c) Accompany it with the information you received as to the offer to distribute corresponding source code.

(This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.)

The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.

If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.

4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.

6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.

7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.

It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.

This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.

8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.

9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.

10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.

NO WARRANTY

11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

12.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

<one line to give the program's name and a brief idea of what it does.>
Copyright (C) 19yy <name of author>

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this when it starts in an interactive mode:

Gnomovision version 69, Copyright (C) 19yy name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:

Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker.

<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License.

--- reloc/doc/wget/ChangeLog ---

1998-09-10  Hrvoje Niksic

	* wget.texi (HTTP Options): Warn against masquerading as Mozilla.
1998-05-24  Hrvoje Niksic

	* Makefile.in (clean): Remove HTML files.

1998-05-13  Hrvoje Niksic

	* wget.texi: Various updates.
	(Proxies): New node.

1998-05-09  Hrvoje Niksic

	* texinfo.tex: New file.

1998-05-08  Hrvoje Niksic

	* Makefile.in (dvi): New target.

1998-05-02  Hrvoje Niksic

	* wget.texi (Recursive Retrieval): Fix typo. Suggested by Francois Pinard.

1998-04-18  Hrvoje Niksic

	* wget.texi: Fixed @dircategory, courtesy Karl Eichwalder.

1998-03-31  Hrvoje Niksic

	* Makefile.in: Don't attempt to (un)install the man-page.

1998-03-30  Hrvoje Niksic

	* wget.1: Removed it.

1998-03-29  Hrvoje Niksic

	* wget.texi (Invoking): Split into more sections, analogous to output of `wget --help'.
	(HTTP Options): Document --user-agent.

1998-03-16  Hrvoje Niksic

	* wget.texi (Contributors): Updated with oodles of new names.

1998-02-22  Karl Eichwalder

	* Makefile.in (install.info): only info files (no *info.orig, etc.).

1998-01-31  Hrvoje Niksic

	* Makefile.in (install.wgetrc): Don't use `!'.

1998-01-28  Hrvoje Niksic

	* wget.texi (Advanced Options): Expanded.

1998-01-25  Hrvoje Niksic

	* wget.texi (Advanced Options): Document `--cache'.
	(Contributors): Added Brian.

1997-07-26  Francois Pinard

	* Makefile.in (install.wgetrc): Print the sample.wgetrc warning only if the files actually differ.

1998-01-23  Hrvoje Niksic

	* Makefile.in: Use `test ...' rather than `[ ... ]'.

	* wget.texi (Advanced Options): Explained suffices.

1998-01-23  Karl Heuer

	* wget.texi (Advanced Options): Updated.

1997-12-18  Hrvoje Niksic

	* wget.texi (Mailing List): Update.

1997-04-23  Hrvoje Niksic

	* wget.texi (Advanced Options): Document `--follow-ftp'.

1997-02-17  Hrvoje Niksic

	* wget.texi (Advanced Options): Document --proxy-user and --proxy-passwd.

1997-02-14  Karl Eichwalder

	* Makefile.in (install.wgetrc): Never ever nuke an existing rc file.

1997-02-02  Hrvoje Niksic

	* wget.texi: Updated and revised.

	* wget.texi (Contributors): Update.
	(Advanced Options): Removed bogus **/* example.

	* wget.texi: Use ``...'' instead of "...".

1997-02-01  Hrvoje Niksic

	* wget.texi (Domain Acceptance): Document --exclude-domains.

1997-01-21  Hrvoje Niksic

	* wget.texi (Advanced Options): Document --ignore-length.

1997-01-20  Hrvoje Niksic

	* wget.texi (Time-Stamping): New node.

1997-01-12  Hrvoje Niksic

	* Makefile.in (distclean): Don't remove wget.info*.

1997-01-08  Hrvoje Niksic

	* wget.texi (Mailing List): Update archive.
	(Portability): Update the Windows port by Budor.

1996-12-21  Hrvoje Niksic

	* wget.texi (Security Considerations): New node.

1996-12-19  Hrvoje Niksic

	* wget.texi (Advanced Options): Document --passive.

1996-12-12  Dieter Baron

	* wget.texi (Advanced Usage): Would reference prep instead of wuarchive.

1996-11-25  Hrvoje Niksic

	* wget.texi (Advanced Options): Documented --retr-symlinks.

1996-11-23  Hrvoje Niksic

	* wget.texi (Advanced Options): Document --delete-after.

1996-11-22  Hrvoje Niksic

	* wget.texi (Portability): Add IRIX and FreeBSD as the "regular" platforms.

1996-11-20  Hrvoje Niksic

	* wget.texi (Advanced Usage): Document dot-style.

1996-11-18  Hrvoje Niksic

	* wget.texi (Advanced Usage): Dot customization example.
	(Sample Wgetrc): Likewise.

1996-11-16  Hrvoje Niksic

	* wget.texi (Wgetrc Syntax): Explained emptying lists.

1996-11-13  Hrvoje Niksic

	* wget.texi (Advanced Options): Document includes/excludes.
	(Wgetrc Commands): Likewise.

1996-11-10  Hrvoje Niksic

	* wget.texi (Advanced Options): Document headers.

1996-11-07  Hrvoje Niksic

	* sample.wgetrc: Added header examples.

1996-11-06  Hrvoje Niksic

	* sample.wgetrc: Rewritten.
	* Makefile.in (install.wgetrc): Install sample.wgetrc.
	(uninstall.info): Use $(RM).

1996-11-06  Hrvoje Niksic

	* wget.texi: Docfixes.

1996-11-03  Hrvoje Niksic

	* wget.texi: Proofread; *many* docfixes.

1996-11-02  Hrvoje Niksic

	* wget.texi (Introduction): Updated robots mailing list address.

1996-11-01  Hrvoje Niksic

	* wget.texi: Minor docfixes.

1996-10-26  Hrvoje Niksic

	* wget.texi (Advanced Usage): Document passwords better.

	* Makefile.in (distclean): Remove wget.1 on make distclean.

	* wget.texi (Option Syntax): Explain --.

1996-10-21  Hrvoje Niksic

	* fetch.texi (No Parent): update.

1996-10-18  Hrvoje Niksic

	* fetch.texi (Advanced Options): Docfix.

1996-10-17  Tage Stabell-Kulo

	* geturl.texi (Advanced Options): Sort alphabetically.

1996-10-16  Hrvoje Niksic

	* geturl.texi (Advanced Options): Describe -nr.
	(Advanced Usage): Moved -O pipelines to Guru Usage.
	(Simple Usage): Update.
	(Advanced Options): Docfix.

	* Makefile.in (RM): RM = rm -f.

1996-10-15  Hrvoje Niksic

	* geturl.texi (Guru Usage): Add proxy-filling example.

1996-10-12  Hrvoje Niksic

	* geturl.texi (Advanced Options): Added --spider.

1996-10-08  Hrvoje Niksic

	* geturl.texi (Advanced Options): Added -X.

	* Makefile.in: Added $(srcdir) where appropriate (I hope).

--- reloc/doc/wget/INSTALL ---

-*- text -*-

Installation Procedure

0) Preparation

To build and install GNU Wget, you need to unpack the archive (which you have presumably done, since you are reading this), and read on. Like most GNU utilities, Wget uses the GNU Autoconf mechanism for build and installation; those of you familiar with compiling GNU software will feel at home.

1) Configuration

To configure Wget, run the configure script provided with the distribution. You may use all the standard arguments configure scripts take. The most important ones are:

--help           print help message
--prefix=PREFIX  install architecture-independent files in PREFIX (/usr/local by default)
--bindir=DIR     user executables in DIR (PREFIX/bin)
--infodir=DIR    info documentation in DIR [PREFIX/info]
--mandir=DIR     man documentation in DIR [PREFIX/man]
--build=BUILD    configure for building on BUILD [BUILD=HOST]
--host=HOST      configure for HOST [guessed]
--target=TARGET  configure for TARGET [TARGET=HOST]

--enable and --with options recognized (mostly Wget-specific):

--with-socks     use the socks library
--disable-opie   disable support for opie or s/key FTP login
--disable-digest disable support for HTTP digest authorization
--disable-debug  disable support for debugging output
--disable-nls    do not use Native Language Support

So, if you want to configure Wget for installation in your home directory, you can type:

./configure --prefix=$HOME

You can customize many default settings by editing Makefile and config.h. The program will work very well without your touching these files, but it is useful to have a look at things you can change there. If you use socks, it is useful to add -L/usr/local/lib (or wherever the socks library is installed) to LDFLAGS in Makefile.

To configure Wget on Windows, run configure.bat and follow the instructions in the windows/ directory. If this doesn't work for any reason, talk to the Windows developers listed in `windows/README'; I do not maintain the port.

2) Compilation

To compile the program, type

make

and cross your fingers. If you do not have an ANSI compiler, Wget will try to KNR-ize its sources "on the fly". This should make GNU Wget compilable virtually anywhere.
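For the common case, the whole procedure condenses to three commands. The following transcript is only an illustrative sketch (the home-directory prefix is an assumption carried over from the example above; installation proper is described under 3) below):

./configure --prefix=$HOME
make
make install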
After the compilation a ready-to-use `wget' executable should reside in the src directory. I do not have any kind of test-suite as of this moment, but it should be easy enough to test whether the basic stuff works.

3) Installation

Use `make install' to install GNU Wget to the directories specified to configure (/usr/local/* by default). The standard installation process will copy the wget binary to /usr/local/bin and install the info pages (wget.info*) to /usr/local/info. You can customize the directories either through the configuration process or by making the necessary changes in the Makefile. To delete the files created by the Wget installation, you can use make uninstall.

--- reloc/doc/wget/MACHINES ---

This file lists the architectures on which this version of GNU Wget has been tried. If you compile Wget on a new architecture, please drop me a note, or send a patch to this file.

Sun SunOS, Solaris (sparc-sun-solaris*, sparc-sun-sunos*)
GNU/Linux (i[3456]86-*-linux*)
DEC Ultrix, Digital Unix (mips-dec-ultrix*, alpha-dec-osf*)
HP BSD (m68k-hp-bsd)
HP HPUX (hppa1.0-hp-hpux7.00, hppa1.1-hp-hpux9.01 and others)
IBM AIX (powerpc-ibm-aix4.1.4.0)
Amiga NetBSD (m68k-cbm-netbsd1.2)
SGI IRIX (mips-sgi-irix4.0.5, mips-sgi-irix5.3)
SCO Unix (i586-pc-sco3.2v5.0.4)
NeXTStep 3.3 Intel (i386-next-nextstep3)
FreeBSD (i386-unknown-freebsd2.2.6)
Windows 95/NT (i[3456]86)

--- reloc/doc/wget/MAILING-LIST ---

-*- text -*-

Mailing List Info

Thanks to Karsten Thygesen, Wget has its own mailing list for discussion and announcements. The list address is hosted at Sunsite Denmark, . To subscribe, send mail to . The list is fairly low-volume -- one or two messages per day, with sporadic periods of intensive activity. If you are interested in using or hacking Wget, or wish to read the important announcements, you are very welcome to subscribe. The list is archived at .

--- reloc/doc/wget/Makefile ---

# Generated automatically from Makefile.in by configure.

# Makefile for `wget' utility
# Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

#
# Version: 1.5.3
#

SHELL = /bin/sh

# Program to format Texinfo source into Info files.
MAKEINFO = xemacs -batch -q -no-site-file -eval '(find-file "$(srcdir)/wget.texi")' -l texinfmt -f texinfo-format-buffer -f save-buffer

# Program to format Texinfo source into DVI files.
TEXI2DVI = texi2dvi

# Program to convert DVI files to PostScript
DVIPS = dvips -D 300

# Program to convert texinfo files to html
TEXI2HTML = texi2html -expandinfo -split_chapter

top_srcdir = ..
srcdir = .
prefix = /usr/local
infodir = ${prefix}/info
mandir = ${prefix}/man
manext = 1
sysconfdir = ${prefix}/etc

INSTALL = .././install-sh -c
INSTALL_DATA = ${INSTALL} -m 644
RM = rm -f

MAN = wget.$(manext)
WGETRC = $(sysconfdir)/wgetrc

#
# Dependencies for building
#

all: wget.info # wget.cat

everything: all wget_us.ps wget_a4.ps wget_toc.html

wget.info: wget.texi
	-$(MAKEINFO)

#wget.cat: $(MAN)
#	nroff -man $(srcdir)/$(MAN) > wget.cat

dvi: wget.dvi

wget.dvi: wget.texi
	$(TEXI2DVI) $(srcdir)/wget.texi

wget_us.ps: wget.dvi
	$(DVIPS) -t letter -o $@ wget.dvi

wget_a4.ps: wget.dvi
	$(DVIPS) -t a4 -o $@ wget.dvi

wget_toc.html: wget.texi
	$(TEXI2HTML) $(srcdir)/wget.texi

#
# Dependencies for installing
#

# install all the documentation
install: install.info install.wgetrc # install.man

# uninstall all the documentation
uninstall: uninstall.info # uninstall.man

# install info pages, creating install directory if necessary
install.info: wget.info
	$(top_srcdir)/mkinstalldirs $(infodir)
	-for file in $(srcdir)/wget.info $(srcdir)/wget.info-*[0-9]; do \
	  test -f "$$file" && $(INSTALL_DATA) $$file $(infodir) ; \
	done

# install man page, creating install directory if necessary
#install.man:
#	$(top_srcdir)/mkinstalldirs $(mandir)/man$(manext)
#	$(INSTALL_DATA) $(srcdir)/$(MAN) $(mandir)/man$(manext)/$(MAN)

# install sample.wgetrc
install.wgetrc:
	$(top_srcdir)/mkinstalldirs $(sysconfdir)
	@if test -f $(WGETRC); then \
	  if cmp -s $(srcdir)/sample.wgetrc $(WGETRC); then echo ""; \
	  else \
	    echo ' $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(WGETRC).new'; \
	    $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(WGETRC).new; \
	    echo "WARNING: File \`$(WGETRC)' already exists and is spared."; \
	    echo "         You might want to consider \`$(WGETRC).new',"; \
	    echo "         and merge both into \`$(WGETRC)', for the best."; \
	  fi; \
	else \
	  $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(WGETRC); \
	fi

# uninstall info pages
uninstall.info:
	$(RM) $(infodir)/wget.info*

# uninstall man page
#uninstall.man:
#	$(RM) $(mandir)/man$(manext)/$(MAN)

#
# Dependencies for cleanup
#

clean:
	$(RM) *~ *.bak *.cat *.html
	$(RM) *.dvi *.aux *.cp *.cps *.fn *.toc *.tp *.vr *.ps *.ky *.pg *.log

distclean: clean
	$(RM) Makefile

realclean: distclean
	$(RM) wget.info*

#
# Dependencies for maintenance
#

subdir = doc

Makefile: Makefile.in ../config.status
	cd .. && CONFIG_FILES=$(subdir)/$@ CONFIG_HEADERS= ./config.status
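The last rule above is the usual Autoconf maintenance hook: whenever Makefile.in or ../config.status changes, make regenerates this Makefile by re-running config.status. Expanded by hand, the recipe amounts to the following sketch (run from the doc directory; it assumes config.status already exists from an earlier configure run):

cd .. && CONFIG_FILES=doc/Makefile CONFIG_HEADERS= ./config.status

This re-substitutes the @...@ placeholders of Makefile.in, which follows next.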
--- reloc/doc/wget/Makefile.in ---

# Makefile for `wget' utility
# Copyright (C) 1995, 1996, 1997 Free Software Foundation, Inc.

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

#
# Version: @VERSION@
#

SHELL = /bin/sh

# Program to format Texinfo source into Info files.
MAKEINFO = @MAKEINFO@

# Program to format Texinfo source into DVI files.
TEXI2DVI = texi2dvi

# Program to convert DVI files to PostScript
DVIPS = dvips -D 300

# Program to convert texinfo files to html
TEXI2HTML = texi2html -expandinfo -split_chapter

top_srcdir = @top_srcdir@
srcdir = @srcdir@
VPATH = @srcdir@

prefix = @prefix@
infodir = @infodir@
mandir = @mandir@
manext = 1
sysconfdir = @sysconfdir@

INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
RM = rm -f

MAN = wget.$(manext)
WGETRC = $(sysconfdir)/wgetrc

#
# Dependencies for building
#

all: wget.info # wget.cat

everything: all wget_us.ps wget_a4.ps wget_toc.html

wget.info: wget.texi
	-$(MAKEINFO)

#wget.cat: $(MAN)
#	nroff -man $(srcdir)/$(MAN) > wget.cat

dvi: wget.dvi

wget.dvi: wget.texi
	$(TEXI2DVI) $(srcdir)/wget.texi

wget_us.ps: wget.dvi
	$(DVIPS) -t letter -o $@ wget.dvi

wget_a4.ps: wget.dvi
	$(DVIPS) -t a4 -o $@ wget.dvi

wget_toc.html: wget.texi
	$(TEXI2HTML) $(srcdir)/wget.texi

#
# Dependencies for installing
#

# install all the documentation
install: install.info install.wgetrc # install.man

# uninstall all the documentation
uninstall: uninstall.info # uninstall.man

# install info pages, creating install directory if necessary
install.info: wget.info
	$(top_srcdir)/mkinstalldirs $(infodir)
	-for file in $(srcdir)/wget.info $(srcdir)/wget.info-*[0-9]; do \
	  test -f "$$file" && $(INSTALL_DATA) $$file $(infodir) ; \
	done

# install man page, creating install directory if necessary
#install.man:
#	$(top_srcdir)/mkinstalldirs $(mandir)/man$(manext)
#	$(INSTALL_DATA) $(srcdir)/$(MAN) $(mandir)/man$(manext)/$(MAN)

# install sample.wgetrc
install.wgetrc:
	$(top_srcdir)/mkinstalldirs $(sysconfdir)
	@if test -f $(WGETRC); then \
	  if cmp -s $(srcdir)/sample.wgetrc $(WGETRC); then echo ""; \
	  else \
	    echo ' $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(WGETRC).new'; \
	    $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(WGETRC).new; \
	    echo "WARNING: File \`$(WGETRC)' already exists and is spared."; \
	    echo "         You might want to consider \`$(WGETRC).new',"; \
	    echo "         and merge both into \`$(WGETRC)', for the best."; \
	  fi; \
	else \
	  $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(WGETRC); \
	fi

# uninstall info pages
uninstall.info:
	$(RM) $(infodir)/wget.info*

# uninstall man page
#uninstall.man:
#	$(RM) $(mandir)/man$(manext)/$(MAN)

#
# Dependencies for cleanup
#

clean:
	$(RM) *~ *.bak *.cat *.html
	$(RM) *.dvi *.aux *.cp *.cps *.fn *.toc *.tp *.vr *.ps *.ky *.pg *.log

distclean: clean
	$(RM) Makefile

realclean: distclean
	$(RM) wget.info*

#
# Dependencies for maintenance
#

subdir = doc

Makefile: Makefile.in ../config.status
	cd .. && CONFIG_FILES=$(subdir)/$@ CONFIG_HEADERS= ./config.status

--- reloc/doc/wget/NEWS ---

GNU Wget NEWS -- history of user-visible changes.

Copyright (C) 1997, 1998 Free Software Foundation, Inc.
See the end for copying conditions.

Please send GNU Wget bug reports to .

* Wget 1.5.3 is a bugfix release with no user-visible changes.

* Wget 1.5.2 is a bugfix release with no user-visible changes.

* Wget 1.5.1 is a bugfix release with no user-visible changes.

* Changes in Wget 1.5.0

** Wget speaks many languages! On systems with gettext(), Wget will output messages in the language set by the current locale, if available. At this time we support Czech, German, Croatian, Italian, Norwegian and Portuguese.

** Opie (Skey) is now supported with FTP.

** HTTP Digest Access Authentication (RFC2069) is now supported.

** The new `-b' option makes Wget go to background automatically.

** The `-I' and `-X' options now accept wildcard arguments.

** The `-w' option now accepts suffixes `s' for seconds, `m' for minutes, `h' for hours, `d' for days and `w' for weeks.

** Upon getting SIGHUP, the whole previous log is now copied to `wget-log'.

** Wget now understands proxy settings with explicit usernames and passwords, e.g. `http://user:password@proxy.foo.com/'.

** You can use the new `--cut-dirs' option to make Wget create fewer directories (see the sketch after this list).

** The `;type=a' appendix to FTP URLs is now recognized. For instance, the following command will retrieve the welcoming message in ASCII type transfer:

wget "ftp://ftp.somewhere.com/welcome.msg;type=a"

** `--help' and `--version' options have been redone to conform to standards set by other GNU utilities.

** Wget should now be compilable under the MS Windows environment. MS Visual C++ and Watcom C have been used successfully.

** If the file length is known, percentages are displayed during download.

** The manual page, now hopelessly out of date, is no longer distributed with Wget.
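Several of the 1.5.0 options above combine naturally on a single command line. A sketch (the host and directory layout are made up for illustration, not taken from this file):

wget -r -w 2m -X '/pub/*/old' --cut-dirs=2 "ftp://ftp.example.com/pub/unix/util/"

Here `-r' turns on recursion, `-w 2m' waits two minutes between retrievals, `-X' skips any directory matching the wildcard, and `--cut-dirs=2' keeps the leading pub/unix components out of the local directory names.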
* Wget 1.4.5 is a bugfix release with no user-visible changes.

* Wget 1.4.4 is a bugfix release with no user-visible changes.

* Changes in Wget 1.4.3

** Wget is now a GNU utility.

** Can do passive FTP.

** Reads .netrc.

** Info documentation expanded.

** Compiles on pre-ANSI compilers.

** Global wgetrc now goes to /usr/local/etc (i.e. $sysconfdir).

** Lots of bugfixes.

* Changes in Wget 1.4.2

** New mirror site at ftp://sunsite.auc.dk/pub/infosystems/wget/, thanks to Karsten Thygesen.

** Mailing list! Mail to wget-request@sunsite.auc.dk to subscribe.

** New option --delete-after for proxy prefetching.

** New option --retr-symlinks to retrieve symbolic links like plain files.

** rmold.pl -- script to remove files deleted on the remote server.

** --convert-links should work now.

** Minor bugfixes.

* Changes in Wget 1.4.1

** Minor bugfixes.

** Added -I (the opposite of -X).

** Dot tracing is now customizable; try wget --dot-style=binary

* Changes in Wget 1.4.0

** Wget 1.4.0 [formerly known as Geturl] is an extensive rewrite of Geturl. Although many things look suspiciously similar, most of the stuff was rewritten, like recursive retrieval, HTTP, FTP and mostly everything else. Wget should now be easier to debug, maintain and, most importantly, use.

** Recursive HTTP should now work without glitches, even with Location changes, server-generated directory listings and other naughty stuff.

** HTTP regetting is supported on servers that support the Range specification. WWW authorization is supported -- try wget http://user:password@hostname/

** FTP support was rewritten and widely enhanced. Globbing should now work flawlessly. Symbolic links are created locally. All the information the Unix-style ls listing can give is now recognized.

** Recursive FTP is supported, e.g. wget -r ftp://gnjilux.cc.fer.hr/pub/unix/util/

** You can specify "rejected" directories, which you do not want to enter, e.g. with wget -X /pub

** Time-stamping is supported, with both HTTP and FTP. Try wget -N URL.

** A new texinfo reference manual is provided. It can be read with Emacs, standalone info, or converted to HTML, dvi or postscript.

** Fixed a long-standing bug, so that Wget now works over SLIP connections.

** You can have a system-wide wgetrc (/usr/local/lib/wgetrc by default). Settings in $HOME/.wgetrc override the global ones, of course :-)

** You can set up quota in .wgetrc to prevent sucking too much data. Try `quota = 5M' in .wgetrc (or quota = 100K if you want your sysadmin to like you).

** Download rate is printed after retrieval.
** Wget now sends the `Referer' header when retrieving recursively.

** With the new --no-parent option Wget can retrieve FTP recursively through a proxy server.

** The HTML parser, as well as the whole of Wget, was rewritten to be much faster and less memory-consuming (yes, both).

** Absolute links can be converted to relative links locally. Check wget -k.

** Wget catches hangup, filtering the output to a log file and resuming work. Try kill -HUP %?wget.

** User-defined headers can be sent. Try wget http://fly.cc.fer.hr/ --header='Accept-Charset: iso-8859-2'

** Acceptance/Rejection lists may contain wildcards.

** Wget can display HTTP headers and/or FTP server response with the new `-S' option. It can save the original HTTP headers with `-s'.

** The socks library is now supported (thanks to Antonio Rosella ). Configure with --with-socks.

** There is a nicer display of REST-ed output.

** Many new options (like -x to force directory hierarchy, or -m to turn on mirroring options).

** Wget is now distributed under the GNU General Public License (GPL).

** Lots of small features I can't remember. :-)

** A host of bugfixes.

* Changes in Geturl 1.3

** Added FTP globbing support (ftp://fly.cc.fer.hr/*)

** Added support for no_proxy

** Added support for ftp://user:password@host/

** Added support for %xx in URL syntax

** More natural command-line options

** Added -e switch to execute .geturlrc commands from the command-line

** Added support for robots.txt

** Fixed some minor bugs

* Geturl 1.2 is a bugfix release with no user-visible changes.

* Changes in Geturl 1.1

** REST supported in FTP

** Proxy servers supported

** GNU getopt used, which enables command-line arguments to be ordered as you wish, e.g. geturl http://fly.cc.fer.hr/ -vo log is the same as geturl -vo log http://fly.cc.fer.hr/

** Netscape-compatible URL syntax for HTTP supported: host[:port]/dir/file

** NcFTP-compatible colon URL syntax for FTP supported: host:/dir/file

** supported

** autoconf supported

----------------------------------------------------------------------
Copyright information:

Copyright (C) 1997, 1998 Free Software Foundation, Inc.

Permission is granted to anyone to make or distribute verbatim copies of this document as received, in any medium, provided that the copyright notice and this permission notice are preserved, thus giving the recipient permission to redistribute in turn.

Permission is granted to distribute modified versions of this document, or of portions of it, under the above conditions, provided also that they carry prominent notices stating who last changed them.

--- reloc/doc/wget/README ---

-*- text -*-

GNU Wget README

GNU Wget is a free network utility to retrieve files from the World Wide Web using HTTP and FTP, the two most widely used Internet protocols. It works non-interactively, thus enabling work in the background, after having logged off. The recursive retrieval of HTML pages, as well as of FTP sites, is supported -- you can use Wget to make mirrors of archives and home pages, or traverse the web like a WWW robot (Wget understands /robots.txt).

Wget works exceedingly well on slow or unstable connections, persistently retrying until the document is fully retrieved. Re-getting files from where it left off works on servers (both HTTP and FTP) that support it. Matching of wildcards and recursive mirroring of directories are available when retrieving via FTP.
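As a hedged illustration of the mirroring described above (the host is hypothetical; `-m' is the mirroring shorthand mentioned in the NEWS entries):

wget -m http://www.example.com/

This turns on recursive retrieval together with time-stamping, so rerunning the same command later re-fetches only the documents that have changed on the server.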
Both HTTP and FTP retrievals can be time-stamped, so Wget can see whether the remote file has changed since the last retrieval and automatically retrieve the new version if it has.

Wget supports proxy servers, which can lighten the network load, speed up retrieval and provide access behind firewalls. If you are behind a firewall that requires the use of a socks style gateway, you can get the socks library and compile wget with support for socks.

Most of the features are configurable, either through command-line options or via the initialization file .wgetrc. Wget allows you to install a global startup file (/usr/local/etc/wgetrc by default) for site settings.

Wget works under almost all modern Unix variants and, unlike many other similar utilities, is written entirely in C, thus requiring no additional software (like perl). As Wget uses GNU Autoconf, it is easily built on and ported to other Unixes. The installation procedure is described in the INSTALL file.

Like all GNU utilities, the latest version of Wget can be found at the master GNU archive site prep.ai.mit.edu, and its mirrors. For example, Wget 1.5.2 is at: . The latest version is also available via FTP from the maintainer's machine, at: . This location is mirrored at: and .

Please report bugs in Wget to . Wget has its own mailing list at . To subscribe, mail to .

Wget is free in all senses -- it is freely redistributable, and no payment is required. If you still wish to donate money to the author, or wish to sponsor implementation of specific features, please email me at .

AUTHOR: Hrvoje Niksic

Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

--- reloc/doc/wget/TODO ---

Hey Emacs, this is -*- outline -*- mode

This is the todo list for Wget. I don't have any time-table of when I plan to implement these features; this is just a list of things I'd like to see in Wget. I'll work on some of them myself, and I will accept patches in their direction. The items are not listed in any particular order. Not all of them are user-visible changes.

* Make `-k' convert too.
* Add option to clobber existing file names (no `.N' suffixes).
* Introduce a concept of "boolean" options. For instance, every boolean option `--foo' would have a `--no-foo' equivalent for turning it off. Get rid of `--foo=no' stuff. Short options would be handled as `-x' vs. `-nx'.
* Implement "thermometer" display (not all that hard; use an alternative show_progress() if the output goes to a terminal.)
* Add option to only list wildcard matches without doing the download.
* Add case-insensitivity as an option.
* Add option to download all files needed to display a web page (images, etc.)
* Handle MIME types correctly. There should be an option to (not) retrieve files based on MIME types, e.g. `--accept-types=image/*'.
* Implement "persistent" retrieving. In "persistent" mode Wget should treat most of the errors as transient. * Allow time-stamping by arbitrary date. * Fix Unix directory parser to allow for spaces in file names. * Allow size limit to files. * -k should convert convert relative references to absolute if not downloaded. * Recognize HTML comments correctly. Add more options for handling bogus HTML found all over the 'net. * Implement breadth-first retrieval. * Download to .in* when mirroring. * Add an option to delete or move no-longer-existent files when mirroring. * Implement a switch to avoid downloading multiple files (e.g. x and x.gz). * Implement uploading (--upload URL?) in FTP and HTTP. * Rewrite FTP code to allow for easy addition of new commands. It should probably be coded as a simple DFA engine. * Recognize more FTP servers (VMS). * Make HTTP timestamping use If-Modified-Since facility. * Implement better spider options. * Add more protocols (e.g. gopher and news), implementing them in a modular fashion. * Implement a concept of "packages" a la mirror. * Implement correct RFC1808 URL parsing. * Implement HTTP cookies. * Implement more HTTP/1.1 bells and whistles (ETag, Content-MD5 etc.) * Support SSL encryption through SSLeay. 070701000565a5000081a4000000020000000200000001372ff1f40000038e000000660000004500000000000000000000001a00000004reloc/doc/wget/ansi2knr.1.TH ANSI2KNR 1 "31 December 1990" .SH NAME ansi2knr \- convert ANSI C to Kernighan & Ritchie C .SH SYNOPSIS .I ansi2knr input_file output_file .SH DESCRIPTION If no output_file is supplied, output goes to stdout. .br There are no error messages. .sp .I ansi2knr recognizes functions by seeing a non-keyword identifier at the left margin, followed by a left parenthesis, with a right parenthesis as the last character on the line. It will recognize a multi-line header if the last character on each line but the last is a left parenthesis or comma. These algorithms ignore whitespace and comments, except that the function name must be the first thing on the line. .sp The following constructs will confuse it: .br - Any other construct that starts at the left margin and follows the above syntax (such as a macro or function call). .br - Macros that tinker with the syntax of the function header. 070701000565a6000081a4000000020000000200000001372ff1f400000cf1000000660000004500000000000000000000001d00000004reloc/doc/wget/sample.wgetrc### ### Sample Wget initialization file .wgetrc ### ## You can use this file to change the default behaviour of wget or to ## avoid having to type many many command-line options. This file does ## not contain a comprehensive list of commands -- look at the manual ## to find out what you can put into this file. ## ## Wget initialization file can reside in /usr/local/etc/wgetrc ## (global, for all users) or $HOME/.wgetrc (for a single user). ## ## To use any of the settings in this file, you will have to uncomment ## them (and probably change them). ## ## Global settings (useful for setting up in /usr/local/etc/wgetrc). ## Think well before you change them, since they may reduce wget's ## functionality, and make it behave contrary to the documentation: ## # You can set retrieve quota for beginners by specifying a value # optionally followed by 'K' (kilobytes) or 'M' (megabytes). The # default quota is unlimited. #quota = inf # You can lower (or raise) the default number of retries when # downloading a file (default is 20). 
#tries = 20

# Lowering the maximum depth of the recursive retrieval is handy to
# prevent newbies from going too "deep" when they unwittingly start
# the recursive retrieval. The default is 5.
#reclevel = 5

# Many sites are behind firewalls that do not allow initiation of
# connections from the outside. On these sites you have to use the
# `passive' feature of FTP. If you are behind such a firewall, you
# can turn this on to make Wget use passive FTP by default.
#passive_ftp = off

##
## Local settings (for a user to set in his $HOME/.wgetrc). It is
## *highly* undesirable to put these settings in the global file, since
## they are potentially dangerous to "normal" users.
##
## Even when setting up your own ~/.wgetrc, you should know what you
## are doing before doing so.
##

# Set this to on to use timestamping by default:
#timestamping = off

# It is a good idea to make Wget send your email address in a `From:'
# header with your request (so that server administrators can contact
# you in case of errors). Wget does *not* send `From:' by default.
#header = From: Your Name <username@site.domain>

# You can set up other headers, like Accept-Language. Accept-Language
# is *not* sent by default.
#header = Accept-Language: en

# You can set the default proxy for Wget to use. It will override the
# value in the environment.
#http_proxy = http://proxy.yoyodyne.com:18023/

# If you do not want to use proxy at all, set this to off.
#use_proxy = on

# You can customize the retrieval outlook. Valid options are default,
# binary, mega and micro.
#dot_style = default

# Setting this to off makes Wget not download /robots.txt. Be sure to
# know *exactly* what /robots.txt is and how it is used before changing
# the default!
#robots = on

# It can be useful to make Wget wait between connections. Set this to
# the number of seconds you want Wget to wait.
#wait = 0

# You can force creating directory structure, even if a single file is
# being retrieved, by setting this to on.
#dirstruct = off

# You can turn on recursive retrieving by default (don't do this if
# you are not sure you know what it means) by setting this to on.
#recursive = off

# To have Wget follow FTP links from HTML files by default, set this
# to on:
#follow_ftp = off

--- reloc/doc/wget/texinfo.tex ---

% texinfo.tex -- TeX macros to handle Texinfo files.
%
% $Id: texinfo.tex,v 2.232 1998/05/02 14:14:31 karl Exp $
%
% Copyright (C) 1985, 86, 88, 90, 91, 92, 93, 94, 95, 96, 97, 98
% Free Software Foundation, Inc.
%
% This texinfo.tex file is free software; you can redistribute it and/or
% modify it under the terms of the GNU General Public License as
% published by the Free Software Foundation; either version 2, or (at
% your option) any later version.
%
% This texinfo.tex file is distributed in the hope that it will be
% useful, but WITHOUT ANY WARRANTY; without even the implied warranty
% of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
% General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this texinfo.tex file; see the file COPYING. If not, write
% to the Free Software Foundation, Inc., 59 Temple Place - Suite 330,
% Boston, MA 02111-1307, USA.
%
% In other words, you are welcome to use, share and improve this program.
% You are forbidden to forbid anyone else to use, share and improve
% what you give them. Help stamp out software-hoarding!
%
% Please try the latest version of texinfo.tex before submitting bug
% reports; you can get the latest version from:
%   /home/gd/gnu/doc/texinfo.tex on the GNU machines.
%   ftp://ftp.gnu.org/pub/gnu/texinfo.tex
%     (and all GNU mirrors)
%   ftp://tug.org/tex/texinfo.tex
%   ftp://ctan.org/macros/texinfo/texinfo.tex
%     (and all CTAN mirrors, finger ctan@tug.org for a list).
%
% Send bug reports to bug-texinfo@gnu.org.
% Please include a precise test case in each bug report,
% including a complete document with which we can reproduce the problem.
%
% Texinfo macros (with @macro) are *not* supported by texinfo.tex. You
% have to run makeinfo -E to expand macros first; the texi2dvi script
% does this.
%
% To process a Texinfo manual with TeX, it's most reliable to use the
% texi2dvi shell script that comes with the distribution. For simple
% manuals, you can get away with:
%   tex foo.texi
%   texindex foo.??
%   tex foo.texi
%   tex foo.texi
%   dvips foo.dvi -o  # or whatever, to process the dvi file.
% The extra runs of TeX get the cross-reference information correct.
% Sometimes one run after texindex suffices, and sometimes you need more
% than two; texi2dvi does it as many times as necessary.

% Make it possible to create a .fmt file just by loading this file:
% if the underlying format is not loaded, start by loading it now.
% Added by gildea November 1993.
\expandafter\ifx\csname fmtname\endcsname\relax\input plain\fi

% This automatically updates the version number based on RCS.
\def\deftexinfoversion$#1: #2 ${\def\texinfoversion{#2}}
\deftexinfoversion$Revision: 2.232 $
\message{Loading texinfo package [Version \texinfoversion]:}

% If in a .fmt file, print the version number
% and turn on active characters that we couldn't do earlier because
% they might have appeared in the input file name.
\everyjob{\message{[Texinfo version \texinfoversion]}\message{}
  \catcode`+=\active \catcode`\_=\active}

% Save some parts of plain tex whose names we will redefine.

\let\ptexb=\b
\let\ptexbullet=\bullet
\let\ptexc=\c
\let\ptexcomma=\,
\let\ptexdot=\.
\let\ptexdots=\dots
\let\ptexend=\end
\let\ptexequiv=\equiv
\let\ptexexclam=\!
\let\ptexi=\i
\let\ptexlbrace=\{
\let\ptexrbrace=\}
\let\ptexstar=\*
\let\ptext=\t

% We never want plain's outer \+ definition in Texinfo.
% For @tex, we can use \tabalign.
\let\+ = \relax

\message{Basics,}
\chardef\other=12

% If this character appears in an error message or help string, it
% starts a new line in the output.
\newlinechar = `^^J

% Set up fixed words for English if not already set.
\ifx\putwordAppendix\undefined \gdef\putwordAppendix{Appendix}\fi
\ifx\putwordChapter\undefined \gdef\putwordChapter{Chapter}\fi
\ifx\putwordfile\undefined \gdef\putwordfile{file}\fi
\ifx\putwordInfo\undefined \gdef\putwordInfo{Info}\fi
\ifx\putwordMethodon\undefined \gdef\putwordMethodon{Method on}\fi
\ifx\putwordon\undefined \gdef\putwordon{on}\fi
\ifx\putwordpage\undefined \gdef\putwordpage{page}\fi
\ifx\putwordsection\undefined \gdef\putwordsection{section}\fi
\ifx\putwordSection\undefined \gdef\putwordSection{Section}\fi
\ifx\putwordsee\undefined \gdef\putwordsee{see}\fi
\ifx\putwordSee\undefined \gdef\putwordSee{See}\fi
\ifx\putwordShortContents\undefined \gdef\putwordShortContents{Short Contents}\fi
\ifx\putwordTableofContents\undefined\gdef\putwordTableofContents{Table of Contents}\fi

% Ignore a token.
%
\def\gobble#1{}

\hyphenation{ap-pen-dix}
\hyphenation{mini-buf-fer mini-buf-fers}
\hyphenation{eshell}
\hyphenation{white-space}

% Margin to add to right of even pages, to left of odd pages.
\newdimen \bindingoffset \newdimen \normaloffset \newdimen\pagewidth \newdimen\pageheight % Sometimes it is convenient to have everything in the transcript file % and nothing on the terminal. We don't just call \tracingall here, % since that produces some useless output on the terminal. % \def\gloggingall{\begingroup \globaldefs = 1 \loggingall \endgroup}% \def\loggingall{\tracingcommands2 \tracingstats2 \tracingpages1 \tracingoutput1 \tracinglostchars1 \tracingmacros2 \tracingparagraphs1 \tracingrestores1 \showboxbreadth\maxdimen\showboxdepth\maxdimen }% % For @cropmarks command. % Do @cropmarks to get crop marks. % \newif\ifcropmarks \let\cropmarks = \cropmarkstrue % % Dimensions to add cropmarks at corners. % Added by P. A. MacKay, 12 Nov. 1986 % \newdimen\cornerlong \newdimen\cornerthick \newdimen\topandbottommargin \newdimen\outerhsize \newdimen\outervsize \cornerlong=1pc\cornerthick=.3pt % These set size of cropmarks \outerhsize=7in %\outervsize=9.5in % Alternative @smallbook page size is 9.25in \outervsize=9.25in \topandbottommargin=.75in % Main output routine. \chardef\PAGE = 255 \output = {\onepageout{\pagecontents\PAGE}} \newbox\headlinebox \newbox\footlinebox % \onepageout takes a vbox as an argument. Note that \pagecontents % does insertions, but you have to call it yourself. \def\onepageout#1{% \ifcropmarks \hoffset=0pt \else \hoffset=\normaloffset \fi % \ifodd\pageno \advance\hoffset by \bindingoffset \else \advance\hoffset by -\bindingoffset\fi % % Do this outside of the \shipout so @code etc. will be expanded in % the headline as they should be, not taken literally (outputting ''code). \setbox\headlinebox = \vbox{\let\hsize=\pagewidth \makeheadline}% \setbox\footlinebox = \vbox{\let\hsize=\pagewidth \makefootline}% % {% % Have to do this stuff outside the \shipout because we want it to % take effect in \write's, yet the group defined by the \vbox ends % before the \shipout runs. % \escapechar = `\\ % use backslash in output files. \indexdummies % don't expand commands in the output. \normalturnoffactive % \ in index entries must not stay \, e.g., if % the page break happens to be in the middle of an example. \shipout\vbox{% \ifcropmarks \vbox to \outervsize\bgroup \hsize = \outerhsize \line{\ewtop\hfil\ewtop}% \nointerlineskip \line{% \vbox{\moveleft\cornerthick\nstop}% \hfill \vbox{\moveright\cornerthick\nstop}% }% \vskip\topandbottommargin \line\bgroup \hfil % center the page within the outer (page) hsize. \ifodd\pageno\hskip\bindingoffset\fi \vbox\bgroup \fi % \unvbox\headlinebox \pagebody{#1}% \ifdim\ht\footlinebox > 0pt % Only leave this space if the footline is nonempty. % (We lessened \vsize for it in \oddfootingxxx.) % The \baselineskip=24pt in plain's \makefootline has no effect. 
\vskip 2\baselineskip \unvbox\footlinebox \fi % \ifcropmarks \egroup % end of \vbox\bgroup \hfil\egroup % end of (centering) \line\bgroup \vskip\topandbottommargin plus1fill minus1fill \boxmaxdepth = \cornerthick \line{% \vbox{\moveleft\cornerthick\nsbot}% \hfill \vbox{\moveright\cornerthick\nsbot}% }% \nointerlineskip \line{\ewbot\hfil\ewbot}% \egroup % \vbox from first cropmarks clause \fi }% end of \shipout\vbox }% end of group with \turnoffactive \advancepageno \ifnum\outputpenalty>-20000 \else\dosupereject\fi } \newinsert\margin \dimen\margin=\maxdimen \def\pagebody#1{\vbox to\pageheight{\boxmaxdepth=\maxdepth #1}} {\catcode`\@ =11 \gdef\pagecontents#1{\ifvoid\topins\else\unvbox\topins\fi % marginal hacks, juha@viisa.uucp (Juha Takala) \ifvoid\margin\else % marginal info is present \rlap{\kern\hsize\vbox to\z@{\kern1pt\box\margin \vss}}\fi \dimen@=\dp#1 \unvbox#1 \ifvoid\footins\else\vskip\skip\footins\footnoterule \unvbox\footins\fi \ifr@ggedbottom \kern-\dimen@ \vfil \fi} } % Here are the rules for the cropmarks. Note that they are % offset so that the space between them is truly \outerhsize or \outervsize % (P. A. MacKay, 12 November, 1986) % \def\ewtop{\vrule height\cornerthick depth0pt width\cornerlong} \def\nstop{\vbox {\hrule height\cornerthick depth\cornerlong width\cornerthick}} \def\ewbot{\vrule height0pt depth\cornerthick width\cornerlong} \def\nsbot{\vbox {\hrule height\cornerlong depth\cornerthick width\cornerthick}} % Parse an argument, then pass it to #1. The argument is the rest of % the input line (except we remove a trailing comment). #1 should be a % macro which expects an ordinary undelimited TeX argument. % \def\parsearg#1{% \let\next = #1% \begingroup \obeylines \futurelet\temp\parseargx } % If the next token is an obeyed space (from an @example environment or % the like), remove it and recurse. Otherwise, we're done. \def\parseargx{% % \obeyedspace is defined far below, after the definition of \sepspaces. \ifx\obeyedspace\temp \expandafter\parseargdiscardspace \else \expandafter\parseargline \fi } % Remove a single space (as the delimiter token to the macro call). {\obeyspaces % \gdef\parseargdiscardspace {\futurelet\temp\parseargx}} {\obeylines % \gdef\parseargline#1^^M{% \endgroup % End of the group started in \parsearg. % % First remove any @c comment, then any @comment. % Result of each macro is put in \toks0. \argremovec #1\c\relax % \expandafter\argremovecomment \the\toks0 \comment\relax % % % Call the caller's macro, saved as \next in \parsearg. \expandafter\next\expandafter{\the\toks0}% }% } % Since all \c{,omment} does is throw away the argument, we can let TeX % do that for us. The \relax here is matched by the \relax in the call % in \parseargline; it could be more or less anything, its purpose is % just to delimit the argument to the \c. \def\argremovec#1\c#2\relax{\toks0 = {#1}} \def\argremovecomment#1\comment#2\relax{\toks0 = {#1}} % \argremovec{,omment} might leave us with trailing spaces, though; e.g., % @end itemize @c foo % will have two active spaces as part of the argument with the % `itemize'. Here we remove all active spaces from #1, and assign the % result to \toks0. % % This loses if there are any *other* active characters besides spaces % in the argument -- _ ^ +, for example -- since they get expanded. % Fortunately, Texinfo does not define any such commands. (If it ever % does, the catcode of the characters in questionwill have to be changed % here.) 
% Change the active space to expand to nothing.
%
\begingroup
  \obeyspaces
  \gdef\ignoreactivespaces{\obeyspaces\let =\empty}
\endgroup

\def\flushcr{\ifx\par\lisppar \def\next##1{}\else \let\next=\relax \fi \next}

%% These are used to keep @begin/@end levels from running away
%% Call \inENV within environments (after a \begingroup)
\newif\ifENV \ENVfalse \def\inENV{\ifENV\relax\else\ENVtrue\fi}
\def\ENVcheck{%
\ifENV\errmessage{Still within an environment.  Type Return to continue.}
\endgroup\fi} % This is not perfect, but it should reduce lossage

% @begin foo  is the same as @foo, for now.
\newhelp\EMsimple{Type <Return> to continue.}

\outer\def\begin{\parsearg\beginxxx}

\def\beginxxx #1{%
\expandafter\ifx\csname #1\endcsname\relax
{\errhelp=\EMsimple \errmessage{Undefined command @begin #1}}\else
\csname #1\endcsname\fi}

% @end foo executes the definition of \Efoo.
%
\def\end{\parsearg\endxxx}
\def\endxxx #1{%
  \removeactivespaces{#1}%
  \edef\endthing{\the\toks0}%
  %
  \expandafter\ifx\csname E\endthing\endcsname\relax
    \expandafter\ifx\csname \endthing\endcsname\relax
      % There's no \foo, i.e., no ``environment'' foo.
      \errhelp = \EMsimple
      \errmessage{Undefined command `@end \endthing'}%
    \else
      \unmatchedenderror\endthing
    \fi
  \else
    % Everything's ok; the right environment has been started.
    \csname E\endthing\endcsname
  \fi
}

% There is an environment #1, but it hasn't been started.  Give an error.
%
\def\unmatchedenderror#1{%
  \errhelp = \EMsimple
  \errmessage{This `@end #1' doesn't have a matching `@#1'}%
}

% Define the control sequence \E#1 to give an unmatched @end error.
%
\def\defineunmatchedend#1{%
  \expandafter\def\csname E#1\endcsname{\unmatchedenderror{#1}}%
}

% Single-spacing is done by various environments (specifically, in
% \nonfillstart and \quotations).
\newskip\singlespaceskip \singlespaceskip = 12.5pt
\def\singlespace{%
% Why was this kern here?  It messes up equalizing space above and below
% environments.  --karl, 6may93
%{\advance \baselineskip by -\singlespaceskip
%\kern \baselineskip}%
\setleading \singlespaceskip
}

%% Simple single-character @ commands

% @@ prints an @
% Kludge this until the fonts are right (grr).
\def\@{{\tt\char64}}

% This is turned off because it was never documented
% and you can use @w{...} around a quote to suppress ligatures.
%% Define @` and @' to be the same as ` and '
%% but suppressing ligatures.
%\def\`{{`}}
%\def\'{{'}}

% Used to generate quoted braces.
\def\mylbrace {{\tt\char123}}
\def\myrbrace {{\tt\char125}}
\let\{=\mylbrace
\let\}=\myrbrace
\begingroup
  % Definitions to produce actual \{ & \} command in an index.
  \catcode`\{ = 12 \catcode`\} = 12
  \catcode`\[ = 1 \catcode`\] = 2
  \catcode`\@ = 0 \catcode`\\ = 12
  @gdef@lbracecmd[\{]%
  @gdef@rbracecmd[\}]%
@endgroup

% Accents: @, @dotaccent @ringaccent @ubaraccent @udotaccent
% Others are defined by plain TeX: @` @' @" @^ @~ @= @v @H.
\let\, = \c
\let\dotaccent = \.
\def\ringaccent#1{{\accent23 #1}}
\let\tieaccent = \t
\let\ubaraccent = \b
\let\udotaccent = \d

% Other special characters: @questiondown @exclamdown
% Plain TeX defines: @AA @AE @O @OE @L (and lowercase versions) @ss.
\def\questiondown{?`}
\def\exclamdown{!`}
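% Illustrative sketch (not from the original file): with the accent
% commands above, Texinfo source such as
%   @ringaccent{A}ngstrom, @questiondown{}verdad?
% typesets an A with a ring accent and an inverted question mark,
% using \ringaccent and \questiondown as just defined.

% Dotless i and dotless j, used for accents.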
\def\imacro{i} \def\jmacro{j} \def\dotless#1{% \def\temp{#1}% \ifx\temp\imacro \ptexi \else\ifx\temp\jmacro \j \else \errmessage{@dotless can be used only with i or j}% \fi\fi } % Be sure we're in horizontal mode when doing a tie, since we make space % equivalent to this in @example-like environments. Otherwise, a space % at the beginning of a line will start with \penalty -- and % since \penalty is valid in vertical mode, we'd end up putting the % penalty on the vertical list instead of in the new paragraph. {\catcode`@ = 11 % Avoid using \@M directly, because that causes trouble % if the definition is written into an index file. \global\let\tiepenalty = \@M \gdef\tie{\leavevmode\penalty\tiepenalty\ } } % @: forces normal size whitespace following. \def\:{\spacefactor=1000 } % @* forces a line break. \def\*{\hfil\break\hbox{}\ignorespaces} % @. is an end-of-sentence period. \def\.{.\spacefactor=3000 } % @! is an end-of-sentence bang. \def\!{!\spacefactor=3000 } % @? is an end-of-sentence query. \def\?{?\spacefactor=3000 } % @w prevents a word break. Without the \leavevmode, @w at the % beginning of a paragraph, when TeX is still in vertical mode, would % produce a whole line of output instead of starting the paragraph. \def\w#1{\leavevmode\hbox{#1}} % @group ... @end group forces ... to be all on one page, by enclosing % it in a TeX vbox. We use \vtop instead of \vbox to construct the box % to keep its height that of a normal line. According to the rules for % \topskip (p.114 of the TeXbook), the glue inserted is % max (\topskip - \ht (first item), 0). If that height is large, % therefore, no glue is inserted, and the space between the headline and % the text is small, which looks bad. % \def\group{\begingroup \ifnum\catcode13=\active \else \errhelp = \groupinvalidhelp \errmessage{@group invalid in context where filling is enabled}% \fi % % The \vtop we start below produces a box with normal height and large % depth; thus, TeX puts \baselineskip glue before it, and (when the % next line of text is done) \lineskip glue after it. (See p.82 of % the TeXbook.) Thus, space below is not quite equal to space % above. But it's pretty close. \def\Egroup{% \egroup % End the \vtop. \endgroup % End the \group. }% % \vtop\bgroup % We have to put a strut on the last line in case the @group is in % the midst of an example, rather than completely enclosing it. % Otherwise, the interline space between the last line of the group % and the first line afterwards is too small. But we can't put the % strut in \Egroup, since there it would be on a line by itself. % Hence this just inserts a strut at the beginning of each line. \everypar = {\strut}% % % Since we have a strut on every line, we don't need any of TeX's % normal interline spacing. \offinterlineskip % % OK, but now we have to do something about blank % lines in the input in @example-like environments, which normally % just turn into \lisppar, which will insert no space now that we've % turned off the interline space. Simplest is to make them be an % empty paragraph. \ifx\par\lisppar \edef\par{\leavevmode \par}% % % Reset ^^M's definition to new definition of \par. \obeylines \fi % % Do @comment since we are called inside an environment such as % @example, where each end-of-line in the input causes an % end-of-line in the output. We don't want the end-of-line after % the `@group' to put extra space in the output. Since @group % should appear on a line by itself (according to the Texinfo % manual), we don't worry about eating any user text. 
  \comment
}
%
% TeX puts in an \escapechar (i.e., `@') at the beginning of the help
% message, so this ends up printing `@group can only ...'.
%
\newhelp\groupinvalidhelp{%
group can only be used in environments such as @example,^^J%
where each line of input produces a line of output.}

% @need space-in-mils
% forces a page break if there is not space-in-mils remaining.

\newdimen\mil  \mil=0.001in

\def\need{\parsearg\needx}

% Old definition--didn't work.
%\def\needx #1{\par %
%% This method tries to make TeX break the page naturally
%% if the depth of the box does not fit.
%{\baselineskip=0pt%
%\vtop to #1\mil{\vfil}\kern -#1\mil\penalty 10000
%\prevdepth=-1000pt
%}}

\def\needx#1{%
  % Go into vertical mode, so we don't make a big box in the middle of a
  % paragraph.
  \par
  %
  % Don't add any leading before our big empty box, but allow a page
  % break, since the best break might be right here.
  \allowbreak
  \nointerlineskip
  \vtop to #1\mil{\vfil}%
  %
  % TeX does not even consider page breaks if a penalty added to the
  % main vertical list is 10000 or more.  But in order to see if the
  % empty box we just added fits on the page, we must make it consider
  % page breaks.  On the other hand, we don't want to actually break the
  % page after the empty box.  So we use a penalty of 9999.
  %
  % There is an extremely small chance that TeX will actually break the
  % page at this \penalty, if there are no other feasible breakpoints in
  % sight.  (If the user is using lots of big @group commands, which
  % almost-but-not-quite fill up a page, TeX will have a hard time doing
  % good page breaking, for example.)  However, I could not construct an
  % example where a page broke at this \penalty; if it happens in a real
  % document, then we can reconsider our strategy.
  \penalty9999
  %
  % Back up by the size of the box, whether we did a page break or not.
  \kern -#1\mil
  %
  % Do not allow a page break right after this kern.
  \nobreak
}

% @br   forces paragraph break

\let\br = \par

% @dots{}  output an ellipsis using the current font.
% We do .5em per period so that it has the same spacing in a typewriter
% font as three actual period characters.
%
\def\dots{\hbox to 1.5em{%
  \hskip 0pt plus 0.25fil minus 0.25fil
  .\hss.\hss.%
  \hskip 0pt plus 0.5fil minus 0.5fil
}}

% @enddots{} is an end-of-sentence ellipsis.
%
\def\enddots{%
  \hbox to 2em{%
    \hskip 0pt plus 0.25fil minus 0.25fil
    .\hss.\hss.\hss.%
    \hskip 0pt plus 0.5fil minus 0.5fil
  }%
  \spacefactor=3000
}

% @page    forces the start of a new page

\def\page{\par\vfill\supereject}

% @exdent text....
% outputs text on separate line in roman font, starting at standard page margin

% This records the amount of indent in the innermost environment.
% That's how much \exdent should take out.
\newskip\exdentamount

% This defn is used inside fill environments such as @defun.
\def\exdent{\parsearg\exdentyyy}
\def\exdentyyy #1{{\hfil\break\hbox{\kern -\exdentamount{\rm#1}}\hfil\break}}

% This defn is used inside nofill environments such as @example.
\def\nofillexdent{\parsearg\nofillexdentyyy}
\def\nofillexdentyyy #1{{\advance \leftskip by -\exdentamount
\leftline{\hskip\leftskip{\rm#1}}}}

% @inmargin{TEXT} puts TEXT in the margin next to the current paragraph.

\def\inmargin#1{%
\strut\vadjust{\nobreak\kern-\strutdepth
  \vtop to \strutdepth{\baselineskip\strutdepth\vss
  \llap{\rightskip=\inmarginspacing \vbox{\noindent #1}}\null}}}
\newskip\inmarginspacing \inmarginspacing=1cm
\def\strutdepth{\dp\strutbox}

%\hbox{{\rm#1}}\hfil\break}}
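% Illustrative sketch (not from the original file): with \need as
% defined above, a manual can write
%   @need 800
% before a short heading, forcing a page break unless at least
% 800 mils (0.8in) of vertical space remain on the current page.

% @include file    insert text of that file as input.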
% Allow normal characters that we make active in the argument (a file name). \def\include{\begingroup \catcode`\\=12 \catcode`~=12 \catcode`^=12 \catcode`_=12 \catcode`|=12 \catcode`<=12 \catcode`>=12 \catcode`+=12 \parsearg\includezzz} % Restore active chars for included file. \def\includezzz#1{\endgroup\begingroup % Read the included file in a group so nested @include's work. \def\thisfile{#1}% \input\thisfile \endgroup} \def\thisfile{} % @center line outputs that line, centered \def\center{\parsearg\centerzzz} \def\centerzzz #1{{\advance\hsize by -\leftskip \advance\hsize by -\rightskip \centerline{#1}}} % @sp n outputs n lines of vertical space \def\sp{\parsearg\spxxx} \def\spxxx #1{\vskip #1\baselineskip} % @comment ...line which is ignored... % @c is the same as @comment % @ignore ... @end ignore is another way to write a comment \def\comment{\catcode 64=\other \catcode 123=\other \catcode 125=\other% \parsearg \commentxxx} \def\commentxxx #1{\catcode 64=0 \catcode 123=1 \catcode 125=2 } \let\c=\comment % @paragraphindent is defined for the Info formatting commands only. \let\paragraphindent=\comment % Prevent errors for section commands. % Used in @ignore and in failing conditionals. \def\ignoresections{% \let\chapter=\relax \let\unnumbered=\relax \let\top=\relax \let\unnumberedsec=\relax \let\unnumberedsection=\relax \let\unnumberedsubsec=\relax \let\unnumberedsubsection=\relax \let\unnumberedsubsubsec=\relax \let\unnumberedsubsubsection=\relax \let\section=\relax \let\subsec=\relax \let\subsubsec=\relax \let\subsection=\relax \let\subsubsection=\relax \let\appendix=\relax \let\appendixsec=\relax \let\appendixsection=\relax \let\appendixsubsec=\relax \let\appendixsubsection=\relax \let\appendixsubsubsec=\relax \let\appendixsubsubsection=\relax \let\contents=\relax \let\smallbook=\relax \let\titlepage=\relax } % Used in nested conditionals, where we have to parse the Texinfo source % and so want to turn off most commands, in case they are used % incorrectly. % \def\ignoremorecommands{% \let\defcodeindex = \relax \let\defcv = \relax \let\deffn = \relax \let\deffnx = \relax \let\defindex = \relax \let\defivar = \relax \let\defmac = \relax \let\defmethod = \relax \let\defop = \relax \let\defopt = \relax \let\defspec = \relax \let\deftp = \relax \let\deftypefn = \relax \let\deftypefun = \relax \let\deftypevar = \relax \let\deftypevr = \relax \let\defun = \relax \let\defvar = \relax \let\defvr = \relax \let\ref = \relax \let\xref = \relax \let\printindex = \relax \let\pxref = \relax \let\settitle = \relax \let\setchapternewpage = \relax \let\setchapterstyle = \relax \let\everyheading = \relax \let\evenheading = \relax \let\oddheading = \relax \let\everyfooting = \relax \let\evenfooting = \relax \let\oddfooting = \relax \let\headings = \relax \let\include = \relax \let\lowersections = \relax \let\down = \relax \let\raisesections = \relax \let\up = \relax \let\set = \relax \let\clear = \relax \let\item = \relax } % Ignore @ignore ... @end ignore. % \def\ignore{\doignore{ignore}} % Ignore @ifinfo, @ifhtml, @ifnottex, @html, @menu, and @direntry text. % \def\ifinfo{\doignore{ifinfo}} \def\ifhtml{\doignore{ifhtml}} \def\ifnottex{\doignore{ifnottex}} \def\html{\doignore{html}} \def\menu{\doignore{menu}} \def\direntry{\doignore{direntry}} % Also ignore @macro ... @end macro. The user must run texi2dvi, % which runs makeinfo to do macro expansion. Ignore @unmacro, too. 
\def\macro{\doignore{macro}}
\def\macrocsname{macro}
\let\unmacro = \comment

% @dircategory CATEGORY  -- specify a category of the dir file
% which this file should belong to.  Ignore this in TeX.
\let\dircategory = \comment

% Ignore text until a line `@end #1'.
%
\def\doignore#1{\begingroup
  % Don't complain about control sequences we have declared \outer.
  \ignoresections
  %
  % Define a command to swallow text until we reach `@end #1'.
  % This @ is a catcode 12 token (that is the normal catcode of @ in
  % this texinfo.tex file).  We change the catcode of @ below to match.
  \long\def\doignoretext##1@end #1{\enddoignore}%
  %
  % Make sure that spaces turn into tokens that match what \doignoretext wants.
  \catcode32 = 10
  %
  % Ignore braces, too, so mismatched braces don't cause trouble.
  \catcode`\{ = 9
  \catcode`\} = 9
  %
  % We must not have @c interpreted as a control sequence.
  \catcode`\@ = 12
  %
  % Make the letter c a comment character so that the rest of the line
  % will be ignored.  This way, the document can have (for example)
  %   @c @end ifinfo
  % and the @end ifinfo will be properly ignored.
  % (We've just changed @ to catcode 12.)
  %
  % But we can't do this if #1 is `macro', since that actually contains a c.
  % Happily, none of the other conditionals have the letter `c' in their names!
  \def\temp{#1}%
  \ifx\temp\macrocsname \else \catcode`\c = 14 \fi
  %
  % And now expand that command.
  \doignoretext
}

% What we do to finish off ignored text.
%
\def\enddoignore{\endgroup\ignorespaces}%

\newif\ifwarnedobs\warnedobsfalse
\def\obstexwarn{%
  \ifwarnedobs\relax\else
  % We need to warn folks that they may have trouble with TeX 3.0.
  % This uses \immediate\write16 rather than \message to get newlines.
    \immediate\write16{}
    \immediate\write16{***WARNING*** for users of Unix TeX 3.0!}
    \immediate\write16{This manual trips a bug in TeX version 3.0 (tex hangs).}
    \immediate\write16{If you are running another version of TeX, relax.}
    \immediate\write16{If you are running Unix TeX 3.0, kill this TeX process.}
    \immediate\write16{  Then upgrade your TeX installation if you can.}
    \immediate\write16{  (See ftp://ftp.gnu.ai.mit.edu/pub/gnu/TeX.README.)}
    \immediate\write16{If you are stuck with version 3.0, run the}
    \immediate\write16{  script ``tex3patch'' from the Texinfo distribution}
    \immediate\write16{  to use a workaround.}
    \immediate\write16{}
    \global\warnedobstrue
    \fi
}

% **In TeX 3.0, setting text in \nullfont hangs tex.  For a
% workaround (which requires the file ``dummy.tfm'' to be installed),
% uncomment the following line:
%%%%%\font\nullfont=dummy\let\obstexwarn=\relax

% Ignore text, except that we keep track of conditional commands for
% purposes of nesting, up to an `@end #1' command.
%
\def\nestedignore#1{%
  \obstexwarn
  % We must actually expand the ignored text to look for the @end
  % command, so that nested ignore constructs work.  Thus, we put the
  % text into a \vbox and then do nothing with the result.  To minimize
  % the chance of memory overflow, we follow the approach outlined on
  % page 401 of the TeXbook: make the current font be a dummy font.
  %
  \setbox0 = \vbox\bgroup
    % Don't complain about control sequences we have declared \outer.
    \ignoresections
    %
    % Define `@end #1' to end the box, which will in turn undefine the
    % @end command again.
    \expandafter\def\csname E#1\endcsname{\egroup\ignorespaces}%
    %
    % We are going to be parsing Texinfo commands.  Most cause no
    % trouble when they are used incorrectly, but some commands do
    % complicated argument parsing or otherwise get confused, so we
    % undefine them.
    %
    % We can't do anything about stray @-signs, unfortunately;
    % they'll produce `undefined control sequence' errors.
    \ignoremorecommands
    %
    % Set the current font to be \nullfont, a TeX primitive, and define
    % all the font commands to also use \nullfont.  We don't use
    % dummy.tfm, as suggested in the TeXbook, because not all sites
    % might have that installed.  Therefore, math mode will still
    % produce output, but that should be an extremely small amount of
    % stuff compared to the main input.
    %
    \nullfont
    \let\tenrm = \nullfont  \let\tenit = \nullfont  \let\tensl = \nullfont
    \let\tenbf = \nullfont  \let\tentt = \nullfont  \let\smallcaps = \nullfont
    \let\tensf = \nullfont
    % Similarly for index fonts (mostly for their use in
    % smallexample)
    \let\indrm = \nullfont  \let\indit = \nullfont  \let\indsl = \nullfont
    \let\indbf = \nullfont  \let\indtt = \nullfont  \let\indsc = \nullfont
    \let\indsf = \nullfont
    %
    % Don't complain when characters are missing from the fonts.
    \tracinglostchars = 0
    %
    % Don't bother to do space factor calculations.
    \frenchspacing
    %
    % Don't report underfull hboxes.
    \hbadness = 10000
    %
    % Do minimal line-breaking.
    \pretolerance = 10000
    %
    % Do not execute instructions in @tex
    \def\tex{\doignore{tex}}%
}

% @set VAR sets the variable VAR to an empty value.
% @set VAR REST-OF-LINE sets VAR to the value REST-OF-LINE.
%
% Since we want to separate VAR from REST-OF-LINE (which might be
% empty), we can't just use \parsearg; we have to insert a space of our
% own to delimit the rest of the line, and then take it out again if we
% didn't need it.  Make sure the catcode of space is correct to avoid
% losing inside @example, for instance.
%
\def\set{\begingroup\catcode` =10
  \catcode`\-=12 \catcode`\_=12 % Allow - and _ in VAR.
  \parsearg\setxxx}
\def\setxxx#1{\setyyy#1 \endsetyyy}
\def\setyyy#1 #2\endsetyyy{%
  \def\temp{#2}%
  \ifx\temp\empty \global\expandafter\let\csname SET#1\endcsname = \empty
  \else \setzzz{#1}#2\endsetzzz % Remove the trailing space \setxxx inserted.
  \fi
  \endgroup
}
% Can't use \xdef to pre-expand #2 and save some time, since \temp or
% \next or other control sequences that we've defined might get us into
% an infinite loop.  Consider `@set foo @cite{bar}'.
\def\setzzz#1#2 \endsetzzz{\expandafter\gdef\csname SET#1\endcsname{#2}}

% @clear VAR clears (i.e., unsets) the variable VAR.
%
\def\clear{\parsearg\clearxxx}
\def\clearxxx#1{\global\expandafter\let\csname SET#1\endcsname=\relax}

% @value{foo} gets the text saved in variable foo.
%
\def\value{\begingroup
  \catcode`\-=12 \catcode`\_=12 % Allow - and _ in VAR.
  \valuexxx}
\def\valuexxx#1{%
  \expandafter\ifx\csname SET#1\endcsname\relax
    {\{No value for ``#1''\}}%
  \else
    \csname SET#1\endcsname
  \fi
\endgroup}

% @ifset VAR ... @end ifset reads the `...' iff VAR has been defined
% with @set.
%
\def\ifset{\parsearg\ifsetxxx}
\def\ifsetxxx #1{%
  \expandafter\ifx\csname SET#1\endcsname\relax
    \expandafter\ifsetfail
  \else
    \expandafter\ifsetsucceed
  \fi
}
\def\ifsetsucceed{\conditionalsucceed{ifset}}
\def\ifsetfail{\nestedignore{ifset}}
\defineunmatchedend{ifset}
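% Illustrative sketch (not from the original file): a manual can write
%   @set VERSION 1.5.3
%   This manual documents wget version @value{VERSION}.
% @value{VERSION} then expands to `1.5.3' via \csname SETVERSION\endcsname,
% and `@ifset VERSION ... @end ifset' reads its body because VERSION
% has been @set.

% @ifclear VAR ... @end ifclear reads the `...' iff VAR has never been
% defined with @set, or has been undefined with @clear.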
% \def\ifclear{\parsearg\ifclearxxx} \def\ifclearxxx #1{% \expandafter\ifx\csname SET#1\endcsname\relax \expandafter\ifclearsucceed \else \expandafter\ifclearfail \fi } \def\ifclearsucceed{\conditionalsucceed{ifclear}} \def\ifclearfail{\nestedignore{ifclear}} \defineunmatchedend{ifclear} % @iftex, @ifnothtml, @ifnotinfo always succeed; we read the text % following, through the first @end iftex (etc.). Make `@end iftex' % (etc.) valid only after an @iftex. % \def\iftex{\conditionalsucceed{iftex}} \def\ifnothtml{\conditionalsucceed{ifnothtml}} \def\ifnotinfo{\conditionalsucceed{ifnotinfo}} \defineunmatchedend{iftex} \defineunmatchedend{ifnothtml} \defineunmatchedend{ifnotinfo} % We can't just want to start a group at @iftex (for example) and end it % at @end iftex, since then @set commands inside the conditional have no % effect (they'd get reverted at the end of the group). So we must % define \Eiftex to redefine itself to be its previous value. (We can't % just define it to fail again with an ``unmatched end'' error, since % the @ifset might be nested.) % \def\conditionalsucceed#1{% \edef\temp{% % Remember the current value of \E#1. \let\nece{prevE#1} = \nece{E#1}% % % At the `@end #1', redefine \E#1 to be its previous value. \def\nece{E#1}{\let\nece{E#1} = \nece{prevE#1}}% }% \temp } % We need to expand lots of \csname's, but we don't want to expand the % control sequences after we've constructed them. % \def\nece#1{\expandafter\noexpand\csname#1\endcsname} % @asis just yields its argument. Used with @table, for example. % \def\asis#1{#1} % @math means output in math mode. % We don't use $'s directly in the definition of \math because control % sequences like \math are expanded when the toc file is written. Then, % we read the toc file back, the $'s will be normal characters (as they % should be, according to the definition of Texinfo). So we must use a % control sequence to switch into and out of math mode. % % This isn't quite enough for @math to work properly in indices, but it % seems unlikely it will ever be needed there. % \let\implicitmath = $ \def\math#1{\implicitmath #1\implicitmath} % @bullet and @minus need the same treatment as @math, just above. \def\bullet{\implicitmath\ptexbullet\implicitmath} \def\minus{\implicitmath-\implicitmath} \def\node{\ENVcheck\parsearg\nodezzz} \def\nodezzz#1{\nodexxx [#1,]} \def\nodexxx[#1,#2]{\gdef\lastnode{#1}} \let\nwnode=\node \let\lastnode=\relax \def\donoderef{\ifx\lastnode\relax\else \expandafter\expandafter\expandafter\setref{\lastnode}\fi \global\let\lastnode=\relax} \def\unnumbnoderef{\ifx\lastnode\relax\else \expandafter\expandafter\expandafter\unnumbsetref{\lastnode}\fi \global\let\lastnode=\relax} \def\appendixnoderef{\ifx\lastnode\relax\else \expandafter\expandafter\expandafter\appendixsetref{\lastnode}\fi \global\let\lastnode=\relax} % @refill is a no-op. \let\refill=\relax % @setfilename is done at the beginning of every texinfo file. % So open here the files we need to have open while reading the input. % This makes it possible to make a .fmt file for texinfo. \def\setfilename{% \readauxfile \opencontents \openindices \fixbackslash % Turn off hack to swallow `\input texinfo'. \global\let\setfilename=\comment % Ignore extra @setfilename cmds. % % If texinfo.cnf is present on the system, read it. % Useful for site-wide @afourpaper, etc. % Just to be on the safe side, close the input stream before the \input. 
\openin 1 texinfo.cnf
\ifeof1 \let\temp=\relax \else \def\temp{\input texinfo.cnf }\fi
\closein1
\temp
%
\comment % Ignore the actual filename.
}

% @bye.
\outer\def\bye{\pagealignmacro\tracingstats=1\ptexend}

% \def\macro#1{\begingroup\ignoresections\catcode`\#=6\def\macrotemp{#1}\parsearg\macroxxx}
% \def\macroxxx#1#2 \end macro{%
% \expandafter\gdef\macrotemp#1{#2}%
% \endgroup}

%\def\linemacro#1{\begingroup\ignoresections\catcode`\#=6\def\macrotemp{#1}\parsearg\linemacroxxx}
%\def\linemacroxxx#1#2 \end linemacro{%
%\let\parsearg=\relax
%\edef\macrotempx{\csname M\butfirst\expandafter\string\macrotemp\endcsname}%
%\expandafter\xdef\macrotemp{\parsearg\macrotempx}%
%\expandafter\gdef\macrotempx#1{#2}%
%\endgroup}

%\def\butfirst#1{}

\message{fonts,}

% Font-change commands.

% Texinfo supports the sans serif font style, which plain TeX does not.
% So we set up a \sf analogous to plain's \rm, etc.
\newfam\sffam
\def\sf{\fam=\sffam \tensf}
\let\li = \sf % Sometimes we call it \li, not \sf.

% We don't need math for this one.
\def\ttsl{\tenttsl}

% Use Computer Modern fonts at \magstephalf (11pt).
\newcount\mainmagstep
\mainmagstep=\magstephalf

% Set the font macro #1 to the font named #2, adding on the
% specified font prefix (normally `cm').
% #3 is the font's design size, #4 is a scale factor
\def\setfont#1#2#3#4{\font#1=\fontprefix#2#3 scaled #4}

% Use cm as the default font prefix.
% To specify the font prefix, you must define \fontprefix
% before you read in texinfo.tex.
\ifx\fontprefix\undefined
\def\fontprefix{cm}
\fi
% Support font families that don't use the same naming scheme as CM.
\def\rmshape{r}
\def\rmbshape{bx}               %where the normal face is bold
\def\bfshape{b}
\def\bxshape{bx}
\def\ttshape{tt}
\def\ttbshape{tt}
\def\ttslshape{sltt}
\def\itshape{ti}
\def\itbshape{bxti}
\def\slshape{sl}
\def\slbshape{bxsl}
\def\sfshape{ss}
\def\sfbshape{ss}
\def\scshape{csc}
\def\scbshape{csc}

\ifx\bigger\relax
\let\mainmagstep=\magstep1
\setfont\textrm\rmshape{12}{1000}
\setfont\texttt\ttshape{12}{1000}
\else
\setfont\textrm\rmshape{10}{\mainmagstep}
\setfont\texttt\ttshape{10}{\mainmagstep}
\fi
% Instead of cmb10, you may want to use cmbx10.
% cmbx10 is a prettier font on its own, but cmb10
% looks better when embedded in a line with cmr10.
\setfont\textbf\bfshape{10}{\mainmagstep}
\setfont\textit\itshape{10}{\mainmagstep}
\setfont\textsl\slshape{10}{\mainmagstep}
\setfont\textsf\sfshape{10}{\mainmagstep}
\setfont\textsc\scshape{10}{\mainmagstep}
\setfont\textttsl\ttslshape{10}{\mainmagstep}
\font\texti=cmmi10 scaled \mainmagstep
\font\textsy=cmsy10 scaled \mainmagstep

% A few fonts for @defun, etc.
\setfont\defbf\bxshape{10}{\magstep1} %was 1314
\setfont\deftt\ttshape{10}{\magstep1}
\def\df{\let\tentt=\deftt \let\tenbf = \defbf \bf}
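% Illustrative note (not part of the original file): for instance,
%   \setfont\textrm\rmshape{10}{\mainmagstep}
% expands to `\font\textrm=cmr10 scaled \magstephalf', i.e., Computer
% Modern Roman at 10pt design size magnified to roughly 11pt.

% Fonts for indices and small examples (9pt).
% We actually use the slanted font rather than the italic,
% because texinfo normally uses the slanted fonts for that.
% Do not make many font distinctions in general in the index, since they
% aren't very useful.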
\setfont\ninett\ttshape{9}{1000}
\setfont\indrm\rmshape{9}{1000}
\setfont\indit\slshape{9}{1000}
\let\indsl=\indit
\let\indtt=\ninett
\let\indttsl=\ninett
\let\indsf=\indrm
\let\indbf=\indrm
\setfont\indsc\scshape{10}{900}
\font\indi=cmmi9
\font\indsy=cmsy9

% Fonts for title page:
\setfont\titlerm\rmbshape{12}{\magstep3}
\setfont\titleit\itbshape{10}{\magstep4}
\setfont\titlesl\slbshape{10}{\magstep4}
\setfont\titlett\ttbshape{12}{\magstep3}
\setfont\titlettsl\ttslshape{10}{\magstep4}
\setfont\titlesf\sfbshape{17}{\magstep1}
\let\titlebf=\titlerm
\setfont\titlesc\scbshape{10}{\magstep4}
\font\titlei=cmmi12 scaled \magstep3
\font\titlesy=cmsy10 scaled \magstep4
\def\authorrm{\secrm}

% Chapter (and unnumbered) fonts (17.28pt).
\setfont\chaprm\rmbshape{12}{\magstep2}
\setfont\chapit\itbshape{10}{\magstep3}
\setfont\chapsl\slbshape{10}{\magstep3}
\setfont\chaptt\ttbshape{12}{\magstep2}
\setfont\chapttsl\ttslshape{10}{\magstep3}
\setfont\chapsf\sfbshape{17}{1000}
\let\chapbf=\chaprm
\setfont\chapsc\scbshape{10}{\magstep3}
\font\chapi=cmmi12 scaled \magstep2
\font\chapsy=cmsy10 scaled \magstep3

% Section fonts (14.4pt).
\setfont\secrm\rmbshape{12}{\magstep1}
\setfont\secit\itbshape{10}{\magstep2}
\setfont\secsl\slbshape{10}{\magstep2}
\setfont\sectt\ttbshape{12}{\magstep1}
\setfont\secttsl\ttslshape{10}{\magstep2}
\setfont\secsf\sfbshape{12}{\magstep1}
\let\secbf\secrm
\setfont\secsc\scbshape{10}{\magstep2}
\font\seci=cmmi12 scaled \magstep1
\font\secsy=cmsy10 scaled \magstep2

% \setfont\ssecrm\bxshape{10}{\magstep1}    % This size and font looked bad.
% \setfont\ssecit\itshape{10}{\magstep1}    % The letters were too crowded.
% \setfont\ssecsl\slshape{10}{\magstep1}
% \setfont\ssectt\ttshape{10}{\magstep1}
% \setfont\ssecsf\sfshape{10}{\magstep1}

%\setfont\ssecrm\bfshape{10}{1315}      % Note the use of cmb rather than cmbx.
%\setfont\ssecit\itshape{10}{1315}      % Also, the size is a little larger than
%\setfont\ssecsl\slshape{10}{1315}      % being scaled magstep1.
%\setfont\ssectt\ttshape{10}{1315}
%\setfont\ssecsf\sfshape{10}{1315}

%\let\ssecbf=\ssecrm

% Subsection fonts (13.15pt).
\setfont\ssecrm\rmbshape{12}{\magstephalf}
\setfont\ssecit\itbshape{10}{1315}
\setfont\ssecsl\slbshape{10}{1315}
\setfont\ssectt\ttbshape{12}{\magstephalf}
\setfont\ssecttsl\ttslshape{10}{1315}
\setfont\ssecsf\sfbshape{12}{\magstephalf}
\let\ssecbf\ssecrm
\setfont\ssecsc\scbshape{10}{\magstep1}
\font\sseci=cmmi12 scaled \magstephalf
\font\ssecsy=cmsy10 scaled 1315
% The smallcaps and symbol fonts should actually be scaled \magstep1.5,
% but that is not a standard magnification.

% In order for the font changes to affect most math symbols and letters,
% we have to define the \textfont of the standard families.  Since
% texinfo doesn't allow for producing subscripts and superscripts, we
% don't bother to reset \scriptfont and \scriptscriptfont (which would
% also require loading a lot more fonts).
%
\def\resetmathfonts{%
  \textfont0 = \tenrm \textfont1 = \teni \textfont2 = \tensy
  \textfont\itfam = \tenit \textfont\slfam = \tensl \textfont\bffam = \tenbf
  \textfont\ttfam = \tentt \textfont\sffam = \tensf
}

% The font-changing commands redefine the meanings of \tenSTYLE, instead
% of just \STYLE.  We do this so that font changes will continue to work
% in math mode, where it is the current \fam that is relevant in most
% cases, not the current font.  Plain TeX does \def\bf{\fam=\bffam
% \tenbf}, for example.  By redefining \tenbf, we obviate the need to
% redefine \bf itself.
\def\textfonts{% \let\tenrm=\textrm \let\tenit=\textit \let\tensl=\textsl \let\tenbf=\textbf \let\tentt=\texttt \let\smallcaps=\textsc \let\tensf=\textsf \let\teni=\texti \let\tensy=\textsy \let\tenttsl=\textttsl \resetmathfonts} \def\titlefonts{% \let\tenrm=\titlerm \let\tenit=\titleit \let\tensl=\titlesl \let\tenbf=\titlebf \let\tentt=\titlett \let\smallcaps=\titlesc \let\tensf=\titlesf \let\teni=\titlei \let\tensy=\titlesy \let\tenttsl=\titlettsl \resetmathfonts \setleading{25pt}} \def\titlefont#1{{\titlefonts\rm #1}} \def\chapfonts{% \let\tenrm=\chaprm \let\tenit=\chapit \let\tensl=\chapsl \let\tenbf=\chapbf \let\tentt=\chaptt \let\smallcaps=\chapsc \let\tensf=\chapsf \let\teni=\chapi \let\tensy=\chapsy \let\tenttsl=\chapttsl \resetmathfonts \setleading{19pt}} \def\secfonts{% \let\tenrm=\secrm \let\tenit=\secit \let\tensl=\secsl \let\tenbf=\secbf \let\tentt=\sectt \let\smallcaps=\secsc \let\tensf=\secsf \let\teni=\seci \let\tensy=\secsy \let\tenttsl=\secttsl \resetmathfonts \setleading{16pt}} \def\subsecfonts{% \let\tenrm=\ssecrm \let\tenit=\ssecit \let\tensl=\ssecsl \let\tenbf=\ssecbf \let\tentt=\ssectt \let\smallcaps=\ssecsc \let\tensf=\ssecsf \let\teni=\sseci \let\tensy=\ssecsy \let\tenttsl=\ssecttsl \resetmathfonts \setleading{15pt}} \let\subsubsecfonts = \subsecfonts % Maybe make sssec fonts scaled magstephalf? \def\indexfonts{% \let\tenrm=\indrm \let\tenit=\indit \let\tensl=\indsl \let\tenbf=\indbf \let\tentt=\indtt \let\smallcaps=\indsc \let\tensf=\indsf \let\teni=\indi \let\tensy=\indsy \let\tenttsl=\indttsl \resetmathfonts \setleading{12pt}} % Set up the default fonts, so we can use them for creating boxes. % \textfonts % Define these so they can be easily changed for other fonts. \def\angleleft{$\langle$} \def\angleright{$\rangle$} % Count depth in font-changes, for error checks \newcount\fontdepth \fontdepth=0 % Fonts for short table of contents. \setfont\shortcontrm\rmshape{12}{1000} \setfont\shortcontbf\bxshape{12}{1000} \setfont\shortcontsl\slshape{12}{1000} %% Add scribe-like font environments, plus @l for inline lisp (usually sans %% serif) and @ii for TeX italic % \smartitalic{ARG} outputs arg in italics, followed by an italic correction % unless the following character is such as not to need one. \def\smartitalicx{\ifx\next,\else\ifx\next-\else\ifx\next.\else\/\fi\fi\fi} \def\smartitalic#1{{\sl #1}\futurelet\next\smartitalicx} \let\i=\smartitalic \let\var=\smartitalic \let\dfn=\smartitalic \let\emph=\smartitalic \let\cite=\smartitalic \def\b#1{{\bf #1}} \let\strong=\b % We can't just use \exhyphenpenalty, because that only has effect at % the end of a paragraph. Restore normal hyphenation at the end of the % group within which \nohyphenation is presumably called. % \def\nohyphenation{\hyphenchar\font = -1 \aftergroup\restorehyphenation} \def\restorehyphenation{\hyphenchar\font = `- } \def\t#1{% {\tt \rawbackslash \frenchspacing #1}% \null } \let\ttfont=\t \def\samp#1{`\tclose{#1}'\null} \setfont\smallrm\rmshape{8}{1000} \font\smallsy=cmsy9 \def\key#1{{\smallrm\textfont2=\smallsy \leavevmode\hbox{% \raise0.4pt\hbox{\angleleft}\kern-.08em\vtop{% \vbox{\hrule\kern-0.4pt \hbox{\raise0.4pt\hbox{\vphantom{\angleleft}}#1}}% \kern-0.4pt\hrule}% \kern-.06em\raise0.4pt\hbox{\angleright}}}} % The old definition, with no lozenge: %\def\key #1{{\ttsl \nohyphenation \uppercase{#1}}\null} \def\ctrl #1{{\tt \rawbackslash \hat}#1} \let\file=\samp % @code is a modification of @t, % which makes spaces the same size as normal in the surrounding text. 
\def\tclose#1{% {% % Change normal interword space to be same as for the current font. \spaceskip = \fontdimen2\font % % Switch to typewriter. \tt % % But `\ ' produces the large typewriter interword space. \def\ {{\spaceskip = 0pt{} }}% % % Turn off hyphenation. \nohyphenation % \rawbackslash \frenchspacing #1% }% \null } % We *must* turn on hyphenation at `-' and `_' in \code. % Otherwise, it is too hard to avoid overfull hboxes % in the Emacs manual, the Library manual, etc. % Unfortunately, TeX uses one parameter (\hyphenchar) to control % both hyphenation at - and hyphenation within words. % We must therefore turn them both off (\tclose does that) % and arrange explicitly to hyphenate at a dash. % -- rms. { \catcode`\-=\active \catcode`\_=\active \catcode`\|=\active \global\def\code{\begingroup \catcode`\-=\active \let-\codedash \catcode`\_=\active \let_\codeunder \codex} % The following is used by \doprintindex to insure that long function names % wrap around. It is necessary for - and _ to be active before the index is % read from the file, as \entry parses the arguments long before \code is % ever called. -- mycroft % _ is always active; and it shouldn't be \let = to an _ that is a % subscript character anyway. Then, @cindex @samp{_} (for example) % fails. --karl \global\def\indexbreaks{% \catcode`\-=\active \let-\realdash } } \def\realdash{-} \def\codedash{-\discretionary{}{}{}} \def\codeunder{\ifusingtt{\normalunderscore\discretionary{}{}{}}{\_}} \def\codex #1{\tclose{#1}\endgroup} %\let\exp=\tclose %Was temporary % @kbd is like @code, except that if the argument is just one @key command, % then @kbd has no effect. % @kbdinputstyle -- arg is `distinct' (@kbd uses slanted tty font always), % `example' (@kbd uses ttsl only inside of @example and friends), % or `code' (@kbd uses normal tty font always). \def\kbdinputstyle{\parsearg\kbdinputstylexxx} \def\kbdinputstylexxx#1{% \def\arg{#1}% \ifx\arg\worddistinct \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\ttsl}% \else\ifx\arg\wordexample \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\tt}% \else\ifx\arg\wordcode \gdef\kbdexamplefont{\tt}\gdef\kbdfont{\tt}% \fi\fi\fi } \def\worddistinct{distinct} \def\wordexample{example} \def\wordcode{code} % Default is kbdinputdistinct. (Too much of a hassle to call the macro, % the catcodes are wrong for parsearg to work.) \gdef\kbdexamplefont{\ttsl}\gdef\kbdfont{\ttsl} \def\xkey{\key} \def\kbdfoo#1#2#3\par{\def\one{#1}\def\three{#3}\def\threex{??}% \ifx\one\xkey\ifx\threex\three \key{#2}% \else{\tclose{\kbdfont\look}}\fi \else{\tclose{\kbdfont\look}}\fi} % @url. Quotes do not seem necessary, so use \code. \let\url=\code % @uref (abbreviation for `urlref') takes an optional second argument % specifying the text to display. First (mandatory) arg is the url. % Perhaps eventually put in a hypertex \special here. % \def\uref#1{\urefxxx #1,,\finish} \def\urefxxx#1,#2,#3\finish{% \setbox0 = \hbox{\ignorespaces #2}% \ifdim\wd0 > 0pt \unhbox0\ (\code{#1})% \else \code{#1}% \fi } % rms does not like the angle brackets --karl, 17may97. % So now @email is just like @uref. %\def\email#1{\angleleft{\tt #1}\angleright} \let\email=\uref % Check if we are currently using a typewriter font. Since all the % Computer Modern typewriter fonts have zero interword stretch (and % shrink), and it is reasonable to expect all typewriter fonts to have % this property, we can check that font parameter. % \def\ifmonospace{\ifdim\fontdimen3\font=0pt } % Typeset a dimension, e.g., `in' or `pt'. 
% The only reason for the argument is to make the input look right:
% @dmn{pt} instead of @dmn{}pt.
%
\def\dmn#1{\thinspace #1}

\def\kbd#1{\def\look{#1}\expandafter\kbdfoo\look??\par}

% @l was never documented to mean ``switch to the Lisp font'',
% and it is not used as such in any manual I can find.  We need it for
% Polish suppressed-l.  --karl, 22sep96.
%\def\l#1{{\li #1}\null}

\def\r#1{{\rm #1}}              % roman font
% Use of \lowercase was suggested.
\def\sc#1{{\smallcaps#1}}       % smallcaps font
\def\ii#1{{\it #1}}             % italic font

% @pounds{} is a sterling sign.
\def\pounds{{\it\$}}

\message{page headings,}

\newskip\titlepagetopglue \titlepagetopglue = 1.5in
\newskip\titlepagebottomglue \titlepagebottomglue = 2pc

% First the title page.  Must do @settitle before @titlepage.
\newif\ifseenauthor
\newif\iffinishedtitlepage

\def\shorttitlepage{\parsearg\shorttitlepagezzz}
\def\shorttitlepagezzz #1{\begingroup\hbox{}\vskip 1.5in \chaprm \centerline{#1}%
        \endgroup\page\hbox{}\page}

\def\titlepage{\begingroup \parindent=0pt \textfonts
   \let\subtitlerm=\tenrm
% I deinstalled the following change because \cmr12 is undefined.
% This change was not in the ChangeLog anyway.  --rms.
%   \let\subtitlerm=\cmr12
   \def\subtitlefont{\subtitlerm \normalbaselineskip = 13pt \normalbaselines}%
   %
   \def\authorfont{\authorrm \normalbaselineskip = 16pt \normalbaselines}%
   %
   % Leave some space at the very top of the page.
   \vglue\titlepagetopglue
   %
   % Now you can print the title using @title.
   \def\title{\parsearg\titlezzz}%
   \def\titlezzz##1{\leftline{\titlefonts\rm ##1}
                    % print a rule at the page bottom also.
                    \finishedtitlepagefalse
                    \vskip4pt \hrule height 4pt width \hsize \vskip4pt}%
   % No rule at page bottom unless we print one at the top with @title.
   \finishedtitlepagetrue
   %
   % Now you can put text using @subtitle.
   \def\subtitle{\parsearg\subtitlezzz}%
   \def\subtitlezzz##1{{\subtitlefont \rightline{##1}}}%
   %
   % @author should come last, but may come many times.
   \def\author{\parsearg\authorzzz}%
   \def\authorzzz##1{\ifseenauthor\else\vskip 0pt plus 1filll\seenauthortrue\fi
      {\authorfont \leftline{##1}}}%
   %
   % Most title ``pages'' are actually two pages long, with space
   % at the top of the second.  We don't want the ragged left on the second.
   \let\oldpage = \page
   \def\page{%
      \iffinishedtitlepage\else
         \finishtitlepage
      \fi
      \oldpage
      \let\page = \oldpage
      \hbox{}}%
%   \def\page{\oldpage \hbox{}}
}

\def\Etitlepage{%
   \iffinishedtitlepage\else
      \finishtitlepage
   \fi
   % It is important to do the page break before ending the group,
   % because the headline and footline are only empty inside the group.
   % If we use the new definition of \page, we always get a blank page
   % after the title page, which we certainly don't want.
   \oldpage
   \endgroup
   \HEADINGSon
}

\def\finishtitlepage{%
   \vskip4pt \hrule height 2pt width \hsize
   \vskip\titlepagebottomglue
   \finishedtitlepagetrue
}

%%% Set up page headings and footings.

\let\thispage=\folio

\newtoks \evenheadline    % Token sequence for heading line of even pages
\newtoks \oddheadline     % Token sequence for heading line of odd pages
\newtoks \evenfootline    % Token sequence for footing line of even pages
\newtoks \oddfootline     % Token sequence for footing line of odd pages

% Now make TeX use those variables
\headline={{\textfonts\rm \ifodd\pageno \the\oddheadline
                            \else \the\evenheadline \fi}}
\footline={{\textfonts\rm \ifodd\pageno \the\oddfootline
                            \else \the\evenfootline \fi}\HEADINGShook}
\let\HEADINGShook=\relax

% Commands to set those variables.
% For example, this is what @headings on does % @evenheading @thistitle|@thispage|@thischapter % @oddheading @thischapter|@thispage|@thistitle % @evenfooting @thisfile|| % @oddfooting ||@thisfile \def\evenheading{\parsearg\evenheadingxxx} \def\oddheading{\parsearg\oddheadingxxx} \def\everyheading{\parsearg\everyheadingxxx} \def\evenfooting{\parsearg\evenfootingxxx} \def\oddfooting{\parsearg\oddfootingxxx} \def\everyfooting{\parsearg\everyfootingxxx} {\catcode`\@=0 % \gdef\evenheadingxxx #1{\evenheadingyyy #1@|@|@|@|\finish} \gdef\evenheadingyyy #1@|#2@|#3@|#4\finish{% \global\evenheadline={\rlap{\centerline{#2}}\line{#1\hfil#3}}} \gdef\oddheadingxxx #1{\oddheadingyyy #1@|@|@|@|\finish} \gdef\oddheadingyyy #1@|#2@|#3@|#4\finish{% \global\oddheadline={\rlap{\centerline{#2}}\line{#1\hfil#3}}} \gdef\everyheadingxxx#1{\oddheadingxxx{#1}\evenheadingxxx{#1}}% \gdef\evenfootingxxx #1{\evenfootingyyy #1@|@|@|@|\finish} \gdef\evenfootingyyy #1@|#2@|#3@|#4\finish{% \global\evenfootline={\rlap{\centerline{#2}}\line{#1\hfil#3}}} \gdef\oddfootingxxx #1{\oddfootingyyy #1@|@|@|@|\finish} \gdef\oddfootingyyy #1@|#2@|#3@|#4\finish{% \global\oddfootline = {\rlap{\centerline{#2}}\line{#1\hfil#3}}% % % Leave some space for the footline. Hopefully ok to assume % @evenfooting will not be used by itself. \global\advance\pageheight by -\baselineskip \global\advance\vsize by -\baselineskip } \gdef\everyfootingxxx#1{\oddfootingxxx{#1}\evenfootingxxx{#1}} % }% unbind the catcode of @. % @headings double turns headings on for double-sided printing. % @headings single turns headings on for single-sided printing. % @headings off turns them off. % @headings on same as @headings double, retained for compatibility. % @headings after turns on double-sided headings after this page. % @headings doubleafter turns on double-sided headings after this page. % @headings singleafter turns on single-sided headings after this page. % By default, they are off at the start of a document, % and turned `on' after @end titlepage. \def\headings #1 {\csname HEADINGS#1\endcsname} \def\HEADINGSoff{ \global\evenheadline={\hfil} \global\evenfootline={\hfil} \global\oddheadline={\hfil} \global\oddfootline={\hfil}} \HEADINGSoff % When we turn headings on, set the page number to 1. % For double-sided printing, put current file name in lower left corner, % chapter name on inside top of right hand pages, document % title on inside top of left hand pages, and page numbers on outside top % edge of all pages. \def\HEADINGSdouble{ \global\pageno=1 \global\evenfootline={\hfil} \global\oddfootline={\hfil} \global\evenheadline={\line{\folio\hfil\thistitle}} \global\oddheadline={\line{\thischapter\hfil\folio}} \global\let\contentsalignmacro = \chapoddpage } \let\contentsalignmacro = \chappager % For single-sided printing, chapter title goes across top left of page, % page number on top right. 
\def\HEADINGSsingle{
\global\pageno=1
\global\evenfootline={\hfil}
\global\oddfootline={\hfil}
\global\evenheadline={\line{\thischapter\hfil\folio}}
\global\oddheadline={\line{\thischapter\hfil\folio}}
\global\let\contentsalignmacro = \chappager
}
\def\HEADINGSon{\HEADINGSdouble}

\def\HEADINGSafter{\let\HEADINGShook=\HEADINGSdoublex}
\let\HEADINGSdoubleafter=\HEADINGSafter
\def\HEADINGSdoublex{%
\global\evenfootline={\hfil}
\global\oddfootline={\hfil}
\global\evenheadline={\line{\folio\hfil\thistitle}}
\global\oddheadline={\line{\thischapter\hfil\folio}}
\global\let\contentsalignmacro = \chapoddpage
}

\def\HEADINGSsingleafter{\let\HEADINGShook=\HEADINGSsinglex}
\def\HEADINGSsinglex{%
\global\evenfootline={\hfil}
\global\oddfootline={\hfil}
\global\evenheadline={\line{\thischapter\hfil\folio}}
\global\oddheadline={\line{\thischapter\hfil\folio}}
\global\let\contentsalignmacro = \chappager
}

% Subroutines used in generating headings
% Produces Day Month Year style of output.
\def\today{\number\day\space
\ifcase\month\or
January\or February\or March\or April\or May\or June\or
July\or August\or September\or October\or November\or December\fi
\space\number\year}

% Use this if you want the Month Day, Year style of output.
%\def\today{\ifcase\month\or
%January\or February\or March\or April\or May\or June\or
%July\or August\or September\or October\or November\or December\fi
%\space\number\day, \number\year}

% @settitle line...  specifies the title of the document, for headings.
% It generates no output of its own.
\def\thistitle{No Title}
\def\settitle{\parsearg\settitlezzz}
\def\settitlezzz #1{\gdef\thistitle{#1}}

\message{tables,}

% Tables -- @table, @ftable, @vtable, @item(x), @kitem(x), @xitem(x).

% default indentation of table text
\newdimen\tableindent \tableindent=.8in
% default indentation of @itemize and @enumerate text
\newdimen\itemindent  \itemindent=.3in
% margin between end of table item and start of table text.
\newdimen\itemmargin  \itemmargin=.1in

% used internally for \itemindent minus \itemmargin
\newdimen\itemmax

% Note @table, @ftable, and @vtable define @item, @itemx, etc., with
% these defs.
% They also define \itemindex
% to index the item name in whatever manner is desired (perhaps none).

\newif\ifitemxneedsnegativevskip

\def\itemxpar{\par\ifitemxneedsnegativevskip\nobreak\vskip-\parskip\nobreak\fi}

\def\internalBitem{\smallbreak \parsearg\itemzzz}
\def\internalBitemx{\itemxpar \parsearg\itemzzz}

\def\internalBxitem "#1"{\def\xitemsubtopix{#1} \smallbreak \parsearg\xitemzzz}
\def\internalBxitemx "#1"{\def\xitemsubtopix{#1} \itemxpar \parsearg\xitemzzz}

\def\internalBkitem{\smallbreak \parsearg\kitemzzz}
\def\internalBkitemx{\itemxpar \parsearg\kitemzzz}

\def\kitemzzz #1{\dosubind {kw}{\code{#1}}{for {\bf \lastfunction}}%
                 \itemzzz {#1}}

\def\xitemzzz #1{\dosubind {kw}{\code{#1}}{for {\bf \xitemsubtopix}}%
                 \itemzzz {#1}}

\def\itemzzz #1{\begingroup %
  \advance\hsize by -\rightskip
  \advance\hsize by -\tableindent
  \setbox0=\hbox{\itemfont{#1}}%
  \itemindex{#1}%
  \nobreak % This prevents a break before @itemx.
  %
  % Be sure we are not still in the middle of a paragraph.
  %{\parskip = 0in
  %\par
  %}%
  %
  % If the item text does not fit in the space we have, put it on a line
  % by itself, and do not allow a page break either before or after that
  % line.  We do not start a paragraph here because then if the next
  % command is, e.g., @kindex, the whatsit would get put into the
  % horizontal list on a line by itself, resulting in extra blank space.
  \ifdim \wd0>\itemmax
    %
    % Make this a paragraph so we get the \parskip glue and wrapping,
    % but leave it ragged-right.
    \begingroup
      \advance\leftskip by-\tableindent
      \advance\hsize by\tableindent
      \advance\rightskip by0pt plus1fil
      \leavevmode\unhbox0\par
    \endgroup
    %
    % We're going to be starting a paragraph, but we don't want the
    % \parskip glue -- logically it's part of the @item we just started.
    \nobreak \vskip-\parskip
    %
    % Stop a page break at the \parskip glue coming up.  Unfortunately
    % we can't prevent a possible page break at the following
    % \baselineskip glue.
    \nobreak
    \endgroup
    \itemxneedsnegativevskipfalse
  \else
    % The item text fits into the space.  Start a paragraph, so that the
    % following text (if any) will end up on the same line.  Since that
    % text will be indented by \tableindent, we make the item text be in
    % a zero-width box.
    \noindent
    \rlap{\hskip -\tableindent\box0}\ignorespaces%
    \endgroup%
    \itemxneedsnegativevskiptrue%
  \fi
}

\def\item{\errmessage{@item while not in a table}}
\def\itemx{\errmessage{@itemx while not in a table}}
\def\kitem{\errmessage{@kitem while not in a table}}
\def\kitemx{\errmessage{@kitemx while not in a table}}
\def\xitem{\errmessage{@xitem while not in a table}}
\def\xitemx{\errmessage{@xitemx while not in a table}}

%% Contains a kludge to get @end[description] to work
\def\description{\tablez{\dontindex}{1}{}{}{}{}}

\def\table{\begingroup\inENV\obeylines\obeyspaces\tablex}
{\obeylines\obeyspaces%
\gdef\tablex #1^^M{%
\tabley\dontindex#1        \endtabley}}

\def\ftable{\begingroup\inENV\obeylines\obeyspaces\ftablex}
{\obeylines\obeyspaces%
\gdef\ftablex #1^^M{%
\tabley\fnitemindex#1        \endtabley
\def\Eftable{\endgraf\afterenvbreak\endgroup}%
\let\Etable=\relax}}

\def\vtable{\begingroup\inENV\obeylines\obeyspaces\vtablex}
{\obeylines\obeyspaces%
\gdef\vtablex #1^^M{%
\tabley\vritemindex#1        \endtabley
\def\Evtable{\endgraf\afterenvbreak\endgroup}%
\let\Etable=\relax}}

\def\dontindex #1{}
\def\fnitemindex #1{\doind {fn}{\code{#1}}}%
\def\vritemindex #1{\doind {vr}{\code{#1}}}%

{\obeyspaces %
\gdef\tabley#1#2 #3 #4 #5 #6 #7\endtabley{\endgroup%
\tablez{#1}{#2}{#3}{#4}{#5}{#6}}}

\def\tablez #1#2#3#4#5#6{%
\aboveenvbreak %
\begingroup %
\def\Edescription{\Etable}% Necessary kludge.
\let\itemindex=#1%
\ifnum 0#3>0 \advance \leftskip by #3\mil \fi %
\ifnum 0#4>0 \tableindent=#4\mil \fi %
\ifnum 0#5>0 \advance \rightskip by #5\mil \fi %
\def\itemfont{#2}%
\itemmax=\tableindent %
\advance \itemmax by -\itemmargin %
\advance \leftskip by \tableindent %
\exdentamount=\tableindent
\parindent = 0pt
\parskip = \smallskipamount
\ifdim \parskip=0pt \parskip=2pt \fi%
\def\Etable{\endgraf\afterenvbreak\endgroup}%
\let\item = \internalBitem %
\let\itemx = \internalBitemx %
\let\kitem = \internalBkitem %
\let\kitemx = \internalBkitemx %
\let\xitem = \internalBxitem %
\let\xitemx = \internalBxitemx %
}

% This is the counter used by @enumerate, which is really @itemize

\newcount \itemno

\def\itemize{\parsearg\itemizezzz}

\def\itemizezzz #1{%
  \begingroup % ended by the @end itemize
  \itemizey {#1}{\Eitemize}
}

\def\itemizey #1#2{%
\aboveenvbreak %
\itemmax=\itemindent %
\advance \itemmax by -\itemmargin %
\advance \leftskip by \itemindent %
\exdentamount=\itemindent
\parindent = 0pt %
\parskip = \smallskipamount %
\ifdim \parskip=0pt \parskip=2pt \fi%
\def#2{\endgraf\afterenvbreak\endgroup}%
\def\itemcontents{#1}%
\let\item=\itemizeitem}
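% Illustrative sketch (not from the original file): a typical
% description list in a manual reads
%   @table @code
%   @item --quiet
%   Turn off wget's output.
%   @end table
% \tablez stores the @code argument as \itemfont, and each @item line
% is then parsed by \internalBitem/\itemzzz above.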
% Set sfcode to normal for the chars that usually have another value.
% These are `.?!:;,'
\def\frenchspacing{\sfcode46=1000 \sfcode63=1000 \sfcode33=1000
  \sfcode58=1000 \sfcode59=1000 \sfcode44=1000 }

% \splitoff TOKENS\endmark defines \first to be the first token in
% TOKENS, and \rest to be the remainder.
%
\def\splitoff#1#2\endmark{\def\first{#1}\def\rest{#2}}%

% Allow an optional argument of an uppercase letter, lowercase letter,
% or number, to specify the first label in the enumerated list.  No
% argument is the same as `1'.
%
\def\enumerate{\parsearg\enumeratezzz}
\def\enumeratezzz #1{\enumeratey #1  \endenumeratey}
\def\enumeratey #1 #2\endenumeratey{%
  \begingroup % ended by the @end enumerate
  %
  % If we were given no argument, pretend we were given `1'.
  \def\thearg{#1}%
  \ifx\thearg\empty \def\thearg{1}\fi
  %
  % Detect if the argument is a single token.  If so, it might be a
  % letter.  Otherwise, the only valid thing it can be is a number.
  % (We will always have one token, because of the test we just made.
  % This is a good thing, since \splitoff doesn't work given nothing at
  % all -- the first parameter is undelimited.)
  \expandafter\splitoff\thearg\endmark
  \ifx\rest\empty
    % Only one token in the argument.  It could still be anything.
    % A ``lowercase letter'' is one whose \lccode is nonzero.
    % An ``uppercase letter'' is one whose \lccode is both nonzero, and
    % not equal to itself.
    % Otherwise, we assume it's a number.
    %
    % We need the \relax at the end of the \ifnum lines to stop TeX from
    % continuing to look for a <number>.
    %
    \ifnum\lccode\expandafter`\thearg=0\relax
      \numericenumerate % a number (we hope)
    \else
      % It's a letter.
      \ifnum\lccode\expandafter`\thearg=\expandafter`\thearg\relax
        \lowercaseenumerate % lowercase letter
      \else
        \uppercaseenumerate % uppercase letter
      \fi
    \fi
  \else
    % Multiple tokens in the argument.  We hope it's a number.
    \numericenumerate
  \fi
}

% An @enumerate whose labels are integers.  The starting integer is
% given in \thearg.
%
\def\numericenumerate{%
  \itemno = \thearg
  \startenumeration{\the\itemno}%
}

% The starting (lowercase) letter is in \thearg.
\def\lowercaseenumerate{%
  \itemno = \expandafter`\thearg
  \startenumeration{%
    % Be sure we're not beyond the end of the alphabet.
    \ifnum\itemno=0
      \errmessage{No more lowercase letters in @enumerate; get a bigger alphabet}%
    \fi
    \char\lccode\itemno
  }%
}

% The starting (uppercase) letter is in \thearg.
\def\uppercaseenumerate{%
  \itemno = \expandafter`\thearg
  \startenumeration{%
    % Be sure we're not beyond the end of the alphabet.
    \ifnum\itemno=0
      \errmessage{No more uppercase letters in @enumerate; get a bigger alphabet}
    \fi
    \char\uccode\itemno
  }%
}

% Call itemizey, adding a period to the first argument and supplying the
% common last two arguments.  Also subtract one from the initial value in
% \itemno, since @item increments \itemno.
%
\def\startenumeration#1{%
  \advance\itemno by -1
  \itemizey{#1.}\Eenumerate\flushcr
}

% @alphaenumerate and @capsenumerate are abbreviations for giving an arg
% to @enumerate.
%
\def\alphaenumerate{\enumerate{a}}
\def\capsenumerate{\enumerate{A}}
\def\Ealphaenumerate{\Eenumerate}
\def\Ecapsenumerate{\Eenumerate}

% Definition of @item while inside @itemize.

\def\itemizeitem{%
\advance\itemno by 1
{\let\par=\endgraf \smallbreak}%
\ifhmode \errmessage{In hmode at itemizeitem}\fi
{\parskip=0in \hskip 0pt
\hbox to 0pt{\hss \itemcontents\hskip \itemmargin}%
\vadjust{\penalty 1200}}%
\flushcr}
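% Illustrative sketch (not from the original file): in a manual,
%   @enumerate 10
%   @item  ...this item is labeled `10.'...
%   @end enumerate
% starts the list at 10, and `@enumerate a' starts it at `a.';
% \enumeratey picks \numericenumerate or \lowercaseenumerate via the
% \lccode tests above.

% @multitable macros
% Amy Hendrickson, 8/18/94, 3/6/96
%
% @multitable ... @end multitable will make as many columns as desired.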
% Contents of each column will wrap at width given in preamble. Width % can be specified either with sample text given in a template line, % or in percent of \hsize, the current width of text on page. % Table can continue over pages but will only break between lines. % To make preamble: % % Either define widths of columns in terms of percent of \hsize: % @multitable @columnfractions .25 .3 .45 % @item ... % % Numbers following @columnfractions are the percent of the total % current hsize to be used for each column. You may use as many % columns as desired. % Or use a template: % @multitable {Column 1 template} {Column 2 template} {Column 3 template} % @item ... % using the widest term desired in each column. % % For those who want to use more than one line's worth of words in % the preamble, break the line within one argument and it % will parse correctly, i.e., % % @multitable {Column 1 template} {Column 2 template} {Column 3 % template} % Not: % @multitable {Column 1 template} {Column 2 template} % {Column 3 template} % Each new table line starts with @item, each subsequent new column % starts with @tab. Empty columns may be produced by supplying @tab's % with nothing between them for as many times as empty columns are needed, % ie, @tab@tab@tab will produce two empty columns. % @item, @tab, @multitable or @end multitable do not need to be on their % own lines, but it will not hurt if they are. % Sample multitable: % @multitable {Column 1 template} {Column 2 template} {Column 3 template} % @item first col stuff @tab second col stuff @tab third col % @item % first col stuff % @tab % second col stuff % @tab % third col % @item first col stuff @tab second col stuff % @tab Many paragraphs of text may be used in any column. % % They will wrap at the width determined by the template. % @item@tab@tab This will be in third column. % @end multitable % Default dimensions may be reset by user. % @multitableparskip is vertical space between paragraphs in table. % @multitableparindent is paragraph indent in table. % @multitablecolmargin is horizontal space to be left between columns. % @multitablelinespace is space to leave between table items, baseline % to baseline. % 0pt means it depends on current normal line spacing. % \newskip\multitableparskip \newskip\multitableparindent \newdimen\multitablecolspace \newskip\multitablelinespace \multitableparskip=0pt \multitableparindent=6pt \multitablecolspace=12pt \multitablelinespace=0pt % Macros used to set up halign preamble: % \let\endsetuptable\relax \def\xendsetuptable{\endsetuptable} \let\columnfractions\relax \def\xcolumnfractions{\columnfractions} \newif\ifsetpercent % 2/1/96, to allow fractions to be given with more than one digit. \def\pickupwholefraction#1 {\global\advance\colcount by1 % \expandafter\xdef\csname col\the\colcount\endcsname{.#1\hsize}% \setuptable} \newcount\colcount \def\setuptable#1{\def\firstarg{#1}% \ifx\firstarg\xendsetuptable\let\go\relax% \else \ifx\firstarg\xcolumnfractions\global\setpercenttrue% \else \ifsetpercent \let\go\pickupwholefraction % In this case arg of setuptable % is the decimal point before the % number given in percent of hsize. % We don't need this so we don't use it. \else \global\advance\colcount by1 \setbox0=\hbox{#1 }% Add a normal word space as a separator; % typically that is always in the input, anyway. 
\expandafter\xdef\csname col\the\colcount\endcsname{\the\wd0}% \fi% \fi% \ifx\go\pickupwholefraction\else\let\go\setuptable\fi% \fi\go} % multitable syntax \def\tab{&\hskip1sp\relax} % 2/2/96 % tiny skip here makes sure this column space is % maintained, even if it is never used. % @multitable ... @end multitable definitions: \def\multitable{\parsearg\dotable} \def\dotable#1{\bgroup \vskip\parskip \let\item\crcr \tolerance=9500 \hbadness=9500 \setmultitablespacing \parskip=\multitableparskip \parindent=\multitableparindent \overfullrule=0pt \global\colcount=0 \def\Emultitable{\global\setpercentfalse\cr\egroup\egroup}% % % To parse everything between @multitable and @item: \setuptable#1 \endsetuptable % % \everycr will reset column counter, \colcount, at the end of % each line. Every column entry will cause \colcount to advance by one. % The table preamble % looks at the current \colcount to find the correct column width. \everycr{\noalign{% % % \filbreak%% keeps underfull box messages off when table breaks over pages. % Maybe so, but it also creates really weird page breaks when the table % breaks over pages. Wouldn't \vfil be better? Wait until the problem % manifests itself, so it can be fixed for real --karl. \global\colcount=0\relax}}% % % This preamble sets up a generic column definition, which will % be used as many times as user calls for columns. % \vtop will set a single line and will also let text wrap and % continue for many paragraphs if desired. \halign\bgroup&\global\advance\colcount by 1\relax \multistrut\vtop{\hsize=\expandafter\csname col\the\colcount\endcsname % % In order to keep entries from bumping into each other % we will add a \leftskip of \multitablecolspace to all columns after % the first one. % % If a template has been used, we will add \multitablecolspace % to the width of each template entry. % % If the user has set preamble in terms of percent of \hsize we will % use that dimension as the width of the column, and the \leftskip % will keep entries from bumping into each other. Table will start at % left margin and final column will justify at right margin. % % Make sure we don't inherit \rightskip from the outer environment. \rightskip=0pt \ifnum\colcount=1 % The first column will be indented with the surrounding text. \advance\hsize by\leftskip \else \ifsetpercent \else % If user has not set preamble in terms of percent of \hsize % we will advance \hsize by \multitablecolspace. \advance\hsize by \multitablecolspace \fi % In either case we will make \leftskip=\multitablecolspace: \leftskip=\multitablecolspace \fi % Ignoring space at the beginning and end avoids an occasional spurious % blank line, when TeX decides to break the line at the space before the % box from the multistrut, so the strut ends up on a line by itself. % For example: % @multitable @columnfractions .11 .89 % @item @code{#} % @tab Legal holiday which is valid in major parts of the whole country. % Is automatically provided with highlighting sequences respectively marking % characters. \noindent\ignorespaces##\unskip\multistrut}\cr } \def\setmultitablespacing{% test to see if user has set \multitablelinespace. % If so, do nothing. If not, give it an appropriate dimension based on % current baselineskip. \ifdim\multitablelinespace=0pt %% strut to put in table in case some entry doesn't have descenders, %% to keep lines equally spaced \let\multistrut = \strut %% Test to see if parskip is larger than space between lines of %% table. If not, do nothing. 
%% If so, set to same dimension as multitablelinespace. \else \gdef\multistrut{\vrule height\multitablelinespace depth\dp0 width0pt\relax} \fi \ifdim\multitableparskip>\multitablelinespace \global\multitableparskip=\multitablelinespace \global\advance\multitableparskip-7pt %% to keep parskip somewhat smaller %% than skip between lines in the table. \fi% \ifdim\multitableparskip=0pt \global\multitableparskip=\multitablelinespace \global\advance\multitableparskip-7pt %% to keep parskip somewhat smaller %% than skip between lines in the table. \fi} \message{indexing,} % Index generation facilities % Define \newwrite to be identical to plain tex's \newwrite % except not \outer, so it can be used within \newindex. {\catcode`\@=11 \gdef\newwrite{\alloc@7\write\chardef\sixt@@n}} % \newindex {foo} defines an index named foo. % It automatically defines \fooindex such that % \fooindex ...rest of line... puts an entry in the index foo. % It also defines \fooindfile to be the number of the output channel for % the file that accumulates this index. The file's extension is foo. % The name of an index should be no more than 2 characters long % for the sake of vms. \def\newindex #1{ \expandafter\newwrite \csname#1indfile\endcsname% Define number for output file \openout \csname#1indfile\endcsname \jobname.#1 % Open the file \expandafter\xdef\csname#1index\endcsname{% % Define \xxxindex \noexpand\doindex {#1}} } % @defindex foo == \newindex{foo} \def\defindex{\parsearg\newindex} % Define @defcodeindex, like @defindex except put all entries in @code. \def\newcodeindex #1{ \expandafter\newwrite \csname#1indfile\endcsname% Define number for output file \openout \csname#1indfile\endcsname \jobname.#1 % Open the file \expandafter\xdef\csname#1index\endcsname{% % Define \xxxindex \noexpand\docodeindex {#1}} } \def\defcodeindex{\parsearg\newcodeindex} % @synindex foo bar makes index foo feed into index bar. % Do this instead of @defindex foo if you don't want it as a separate index. % The \closeout helps reduce unnecessary open files; the limit on the % Acorn RISC OS is a mere 16 files. \def\synindex#1 #2 {% \expandafter\let\expandafter\synindexfoo\expandafter=\csname#2indfile\endcsname \expandafter\closeout\csname#1indfile\endcsname \expandafter\let\csname#1indfile\endcsname=\synindexfoo \expandafter\xdef\csname#1index\endcsname{% define \xxxindex \noexpand\doindex{#2}}% } % @syncodeindex foo bar similar, but put all entries made for index foo % inside @code. \def\syncodeindex#1 #2 {% \expandafter\let\expandafter\synindexfoo\expandafter=\csname#2indfile\endcsname \expandafter\closeout\csname#1indfile\endcsname \expandafter\let\csname#1indfile\endcsname=\synindexfoo \expandafter\xdef\csname#1index\endcsname{% define \xxxindex \noexpand\docodeindex{#2}}% } % Define \doindex, the driver for all \fooindex macros. % Argument #1 is generated by the calling \fooindex macro, % and it is "foo", the name of the index. % \doindex just uses \parsearg; it calls \doind for the actual work. % This is because \doind is more useful to call from other macros. % There is also \dosubind {index}{topic}{subtopic} % which makes an entry in a two-level index such as the operation index. \def\doindex#1{\edef\indexname{#1}\parsearg\singleindexer} \def\singleindexer #1{\doind{\indexname}{#1}} % like the previous two, but they put @code around the argument. 
\def\docodeindex#1{\edef\indexname{#1}\parsearg\singlecodeindexer} \def\singlecodeindexer #1{\doind{\indexname}{\code{#1}}} \def\indexdummies{% \def\ { }% % Take care of the plain tex accent commands. \def\"{\realbackslash "}% \def\`{\realbackslash `}% \def\'{\realbackslash '}% \def\^{\realbackslash ^}% \def\~{\realbackslash ~}% \def\={\realbackslash =}% \def\b{\realbackslash b}% \def\c{\realbackslash c}% \def\d{\realbackslash d}% \def\u{\realbackslash u}% \def\v{\realbackslash v}% \def\H{\realbackslash H}% % Take care of the plain tex special European modified letters. \def\oe{\realbackslash oe}% \def\ae{\realbackslash ae}% \def\aa{\realbackslash aa}% \def\OE{\realbackslash OE}% \def\AE{\realbackslash AE}% \def\AA{\realbackslash AA}% \def\o{\realbackslash o}% \def\O{\realbackslash O}% \def\l{\realbackslash l}% \def\L{\realbackslash L}% \def\ss{\realbackslash ss}% % Take care of texinfo commands likely to appear in an index entry. % (Must be a way to avoid doing expansion at all, and thus not have to % laboriously list every single command here.) \def\@{@}% will be @@ when we switch to @ as escape char. %\let\{ = \lbracecmd %\let\} = \rbracecmd \def\_{{\realbackslash _}}% \def\w{\realbackslash w }% \def\bf{\realbackslash bf }% %\def\rm{\realbackslash rm }% \def\sl{\realbackslash sl }% \def\sf{\realbackslash sf}% \def\tt{\realbackslash tt}% \def\gtr{\realbackslash gtr}% \def\less{\realbackslash less}% \def\hat{\realbackslash hat}% %\def\char{\realbackslash char}% \def\TeX{\realbackslash TeX}% \def\dots{\realbackslash dots }% \def\result{\realbackslash result}% \def\equiv{\realbackslash equiv}% \def\expansion{\realbackslash expansion}% \def\print{\realbackslash print}% \def\error{\realbackslash error}% \def\point{\realbackslash point}% \def\copyright{\realbackslash copyright}% \def\tclose##1{\realbackslash tclose {##1}}% \def\code##1{\realbackslash code {##1}}% \def\dotless##1{\realbackslash dotless {##1}}% \def\samp##1{\realbackslash samp {##1}}% \def\,##1{\realbackslash ,{##1}}% \def\t##1{\realbackslash t {##1}}% \def\r##1{\realbackslash r {##1}}% \def\i##1{\realbackslash i {##1}}% \def\b##1{\realbackslash b {##1}}% \def\sc##1{\realbackslash sc {##1}}% \def\cite##1{\realbackslash cite {##1}}% \def\key##1{\realbackslash key {##1}}% \def\file##1{\realbackslash file {##1}}% \def\var##1{\realbackslash var {##1}}% \def\kbd##1{\realbackslash kbd {##1}}% \def\dfn##1{\realbackslash dfn {##1}}% \def\emph##1{\realbackslash emph {##1}}% \def\value##1{\realbackslash value {##1}}% \unsepspaces } % If an index command is used in an @example environment, any spaces % therein should become regular spaces in the raw index file, not the % expansion of \tie (\\leavevmode \penalty \@M \ ). {\obeyspaces \gdef\unsepspaces{\obeyspaces\let =\space}} % \indexnofonts no-ops all font-change commands. % This is used when outputting the strings to sort the index by. \def\indexdummyfont#1{#1} \def\indexdummytex{TeX} \def\indexdummydots{...} \def\indexnofonts{% % Just ignore accents. \let\,=\indexdummyfont \let\"=\indexdummyfont \let\`=\indexdummyfont \let\'=\indexdummyfont \let\^=\indexdummyfont \let\~=\indexdummyfont \let\==\indexdummyfont \let\b=\indexdummyfont \let\c=\indexdummyfont \let\d=\indexdummyfont \let\u=\indexdummyfont \let\v=\indexdummyfont \let\H=\indexdummyfont \let\dotless=\indexdummyfont % Take care of the plain tex special European modified letters. 
\def\oe{oe}% \def\ae{ae}% \def\aa{aa}% \def\OE{OE}% \def\AE{AE}% \def\AA{AA}% \def\o{o}% \def\O{O}% \def\l{l}% \def\L{L}% \def\ss{ss}% \let\w=\indexdummyfont \let\t=\indexdummyfont \let\r=\indexdummyfont \let\i=\indexdummyfont \let\b=\indexdummyfont \let\emph=\indexdummyfont \let\strong=\indexdummyfont \let\cite=\indexdummyfont \let\sc=\indexdummyfont %Don't no-op \tt, since it isn't a user-level command % and is used in the definitions of the active chars like <, >, |... %\let\tt=\indexdummyfont \let\tclose=\indexdummyfont \let\code=\indexdummyfont \let\file=\indexdummyfont \let\samp=\indexdummyfont \let\kbd=\indexdummyfont \let\key=\indexdummyfont \let\var=\indexdummyfont \let\TeX=\indexdummytex \let\dots=\indexdummydots \def\@{@}% } % To define \realbackslash, we must make \ not be an escape. % We must first make another character (@) an escape % so we do not become unable to do a definition. {\catcode`\@=0 \catcode`\\=\other @gdef@realbackslash{\}} \let\indexbackslash=0 %overridden during \printindex. \let\SETmarginindex=\relax %initialize! % workhorse for all \fooindexes % #1 is name of index, #2 is stuff to put there \def\doind #1#2{% % Put the index entry in the margin if desired. \ifx\SETmarginindex\relax\else \insert\margin{\hbox{\vrule height8pt depth3pt width0pt #2}}% \fi {% \count255=\lastpenalty {% \indexdummies % Must do this here, since \bf, etc expand at this stage \escapechar=`\\ {% \let\folio=0% We will expand all macros now EXCEPT \folio. \def\rawbackslashxx{\indexbackslash}% \indexbackslash isn't defined now % so it will be output as is; and it will print as backslash. % % First process the index-string with all font commands turned off % to get the string to sort by. {\indexnofonts \xdef\indexsorttmp{#2}}% % % Now produce the complete index entry, with both the sort key and the % original text, including any font commands. \toks0 = {#2}% \edef\temp{% \write\csname#1indfile\endcsname{% \realbackslash entry{\indexsorttmp}{\folio}{\the\toks0}}% }% \temp }% }% \penalty\count255 }% } \def\dosubind #1#2#3{% {\count10=\lastpenalty % {\indexdummies % Must do this here, since \bf, etc expand at this stage \escapechar=`\\% {\let\folio=0% \def\rawbackslashxx{\indexbackslash}% % % Now process the index-string once, with all font commands turned off, % to get the string to sort the index by. {\indexnofonts \xdef\temp1{#2 #3}% }% % Now produce the complete index entry. We process the index-string again, % this time with font commands expanded, to get what to print in the index. \edef\temp{% \write \csname#1indfile\endcsname{% \realbackslash entry {\temp1}{\folio}{#2}{#3}}}% \temp }% }\penalty\count10}} % The index entry written in the file actually looks like % \entry {sortstring}{page}{topic} % or % \entry {sortstring}{page}{topic}{subtopic} % The texindex program reads in these files and writes files % containing these kinds of lines: % \initial {c} % before the first topic whose initial is c % \entry {topic}{pagelist} % for a topic that is used without subtopics % \primary {topic} % for the beginning of a topic that is used with subtopics % \secondary {subtopic}{pagelist} % for each subtopic. % Define the user-accessible indexing commands % @findex, @vindex, @kindex, @cindex. 
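% For example (an illustrative sketch; the entry text and page number
% are made up), a Texinfo source line
%   @cindex sample entry
% ends up calling \doind{cp}{sample entry}, which writes a line like
%   \entry{sample entry}{12}{sample entry}
% to \jobname.cp; texindex sorts that into \jobname.cps, which
% @printindex cp then reads back in.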
\def\findex {\fnindex} \def\kindex {\kyindex} \def\cindex {\cpindex} \def\vindex {\vrindex} \def\tindex {\tpindex} \def\pindex {\pgindex} \def\cindexsub {\begingroup\obeylines\cindexsub} {\obeylines % \gdef\cindexsub "#1" #2^^M{\endgroup % \dosubind{cp}{#2}{#1}}} % Define the macros used in formatting output of the sorted index material. % @printindex causes a particular index (the ??s file) to get printed. % It does not print any chapter heading (usually an @unnumbered). % \def\printindex{\parsearg\doprintindex} \def\doprintindex#1{\begingroup \dobreak \chapheadingskip{10000}% % \indexfonts \rm \tolerance = 9500 \indexbreaks % % See if the index file exists and is nonempty. % Change catcode of @ here so that if the index file contains % \initial {@} % as its first line, TeX doesn't complain about mismatched braces % (because it thinks @} is a control sequence). \catcode`\@ = 11 \openin 1 \jobname.#1s \ifeof 1 % \enddoublecolumns gets confused if there is no text in the index, % and it loses the chapter title and the aux file entries for the % index. The easiest way to prevent this problem is to make sure % there is some text. (Index is nonexistent) \else % % If the index file exists but is empty, then \openin leaves \ifeof % false. We have to make TeX try to read something from the file, so % it can discover if there is anything in it. \read 1 to \temp \ifeof 1 (Index is empty) \else % Index files are almost Texinfo source, but we use \ as the escape % character. It would be better to use @, but that's too big a change % to make right now. \def\indexbackslash{\rawbackslashxx}% \catcode`\\ = 0 \escapechar = `\\ \begindoublecolumns \input \jobname.#1s \enddoublecolumns \fi \fi \closein 1 \endgroup} % These macros are used by the sorted index file itself. % Change them to control the appearance of the index. % Same as \bigskipamount except no shrink. % \balancecolumns gets confused if there is any shrink. \newskip\initialskipamount \initialskipamount 12pt plus4pt \def\initial #1{% {\let\tentt=\sectt \let\tt=\sectt \let\sf=\sectt \ifdim\lastskip<\initialskipamount \removelastskip \penalty-200 \vskip \initialskipamount\fi \line{\secbf#1\hfill}\kern 2pt\penalty10000}} % This typesets a paragraph consisting of #1, dot leaders, and then #2 % flush to the right margin. It is used for index and table of contents % entries. The paragraph is indented by \leftskip. % \def\entry #1#2{\begingroup % % Start a new paragraph if necessary, so our assignments below can't % affect previous text. \par % % Do not fill out the last line with white space. \parfillskip = 0in % % No extra space above this paragraph. \parskip = 0in % % Do not prefer a separate line ending with a hyphen to fewer lines. \finalhyphendemerits = 0 % % \hangindent is only relevant when the entry text and page number % don't both fit on one line. In that case, bob suggests starting the % dots pretty far over on the line. Unfortunately, a large % indentation looks wrong when the entry text itself is broken across % lines. So we use a small indentation and put up with long leaders. % % \hangafter is reset to 1 (which is the value we want) at the start % of each paragraph, so we need not do anything with that. \hangindent=2em % % When the entry text needs to be broken, just fill out the first line % with blank space. \rightskip = 0pt plus1fil % % Start a ``paragraph'' for the index entry so the line breaking % parameters we've set above will have an effect. \noindent % % Insert the text of the index entry. TeX will do line-breaking on it. 
  #1%
  % The following is kludged to not output a line of dots in the index if
  % there are no page numbers.  The next person who breaks this will be
  % cursed by a Unix daemon.
  \def\tempa{{\rm }}%
  \def\tempb{#2}%
  \edef\tempc{\tempa}%
  \edef\tempd{\tempb}%
  \ifx\tempc\tempd\ \else%
    %
    % If we must, put the page number on a line of its own, and fill out
    % this line with blank space.  (The \hfil is overwhelmed with the
    % fill leaders glue in \indexdotfill if the page number does fit.)
    \hfil\penalty50
    \null\nobreak\indexdotfill % Have leaders before the page number.
    %
    % The `\ ' here is removed by the implicit \unskip that TeX does as
    % part of (the primitive) \par.  Without it, a spurious underfull
    % \hbox ensues.
    \ #2% The page number ends the paragraph.
  \fi%
  \par
\endgroup}

% Like \dotfill except takes at least 1 em.
\def\indexdotfill{\cleaders
  \hbox{$\mathsurround=0pt \mkern1.5mu ${\it .}$ \mkern1.5mu$}\hskip 1em plus 1fill}

\def\primary #1{\line{#1\hfil}}

\newskip\secondaryindent \secondaryindent=0.5cm
\def\secondary #1#2{
{\parfillskip=0in \parskip=0in
\hangindent =1in \hangafter=1
\noindent\hskip\secondaryindent\hbox{#1}\indexdotfill #2\par
}}

% Define two-column mode, which we use to typeset indexes.
% Adapted from the TeXbook, page 416, which is to say,
% the manmac.tex format used to print the TeXbook itself.
\catcode`\@=11

\newbox\partialpage
\newdimen\doublecolumnhsize

\def\begindoublecolumns{\begingroup % ended by \enddoublecolumns
  % Grab any single-column material above us.
  \output = {\global\setbox\partialpage = \vbox{%
    %
    % Here is a possibility not foreseen in manmac: if we accumulate a
    % whole lot of material, we might end up calling this \output
    % routine twice in a row (see the doublecol-lose test, which is
    % essentially a couple of indexes with @setchapternewpage off).  In
    % that case, we must prevent the second \partialpage from
    % simply overwriting the first, causing us to lose the page.
    % This will preserve it until a real output routine can ship it
    % out.  Generally, \partialpage will be empty when this runs and
    % this will be a no-op.
    \unvbox\partialpage
    %
    % Unvbox the main output page.
    \unvbox255
    \kern-\topskip \kern\baselineskip
  }}%
  \eject
  %
  % Use the double-column output routine for subsequent pages.
  \output = {\doublecolumnout}%
  %
  % Change the page size parameters.  We could do this once outside this
  % routine, in each of @smallbook, @afourpaper, and the default 8.5x11
  % format, but then we repeat the same computation.  Repeating a couple
  % of assignments once per index is clearly meaningless for the
  % execution time, so we may as well do it in one place.
  %
  % First we halve the line length, less a little for the gutter between
  % the columns.  We compute the gutter based on the line length, so it
  % changes automatically with the paper format.  The magic constant
  % below is chosen so that the gutter has the same value (well, +-<1pt)
  % as it did when we hard-coded it.
  %
  % We put the result in a separate register, \doublecolumnhsize, so we
  % can restore it in \pagesofar, after \hsize itself has (potentially)
  % been clobbered.
  %
  \doublecolumnhsize = \hsize
  \advance\doublecolumnhsize by -.04154\hsize
  \divide\doublecolumnhsize by 2
  \hsize = \doublecolumnhsize
  %
  % Double the \vsize as well.  (We don't need a separate register here,
  % since nobody clobbers \vsize.)
\vsize = 2\vsize } \def\doublecolumnout{% \splittopskip=\topskip \splitmaxdepth=\maxdepth % Get the available space for the double columns -- the normal % (undoubled) page height minus any material left over from the % previous page. \dimen@=\pageheight \advance\dimen@ by-\ht\partialpage % box0 will be the left-hand column, box2 the right. \setbox0=\vsplit255 to\dimen@ \setbox2=\vsplit255 to\dimen@ \onepageout\pagesofar \unvbox255 \penalty\outputpenalty } \def\pagesofar{% % Re-output the contents of the output page -- any previous material, % followed by the two boxes we just split. \unvbox\partialpage \hsize = \doublecolumnhsize \wd0=\hsize \wd2=\hsize \hbox to\pagewidth{\box0\hfil\box2}% } \def\enddoublecolumns{% \output = {\balancecolumns}\eject % split what we have \endgroup % started in \begindoublecolumns % % Back to normal single-column typesetting, but take account of the % fact that we just accumulated some stuff on the output page. \pagegoal = \vsize } \def\balancecolumns{% % Called at the end of the double column material. \setbox0 = \vbox{\unvbox255}% \dimen@ = \ht0 \advance\dimen@ by \topskip \advance\dimen@ by-\baselineskip \divide\dimen@ by 2 \splittopskip = \topskip % Loop until we get a decent breakpoint. {\vbadness=10000 \loop \global\setbox3=\copy0 \global\setbox1=\vsplit3 to\dimen@ \ifdim\ht3>\dimen@ \global\advance\dimen@ by1pt \repeat}% \setbox0=\vbox to\dimen@{\unvbox1}% \setbox2=\vbox to\dimen@{\unvbox3}% \pagesofar } \catcode`\@ = \other \message{sectioning,} % Define chapters, sections, etc. \newcount\chapno \newcount\secno \secno=0 \newcount\subsecno \subsecno=0 \newcount\subsubsecno \subsubsecno=0 % This counter is funny since it counts through charcodes of letters A, B, ... \newcount\appendixno \appendixno = `\@ \def\appendixletter{\char\the\appendixno} \newwrite\contentsfile % This is called from \setfilename. \def\opencontents{\openout\contentsfile = \jobname.toc } % Each @chapter defines this as the name of the chapter. % page headings and footings can use it. @section does likewise \def\thischapter{} \def\thissection{} \def\seccheck#1{\ifnum \pageno<0 \errmessage{@#1 not allowed after generating table of contents}% \fi} \def\chapternofonts{% \let\rawbackslash=\relax \let\frenchspacing=\relax \def\result{\realbackslash result}% \def\equiv{\realbackslash equiv}% \def\expansion{\realbackslash expansion}% \def\print{\realbackslash print}% \def\TeX{\realbackslash TeX}% \def\dots{\realbackslash dots}% \def\result{\realbackslash result}% \def\equiv{\realbackslash equiv}% \def\expansion{\realbackslash expansion}% \def\print{\realbackslash print}% \def\error{\realbackslash error}% \def\point{\realbackslash point}% \def\copyright{\realbackslash copyright}% \def\tt{\realbackslash tt}% \def\bf{\realbackslash bf}% \def\w{\realbackslash w}% \def\less{\realbackslash less}% \def\gtr{\realbackslash gtr}% \def\hat{\realbackslash hat}% \def\char{\realbackslash char}% \def\tclose##1{\realbackslash tclose{##1}}% \def\code##1{\realbackslash code{##1}}% \def\samp##1{\realbackslash samp{##1}}% \def\r##1{\realbackslash r{##1}}% \def\b##1{\realbackslash b{##1}}% \def\key##1{\realbackslash key{##1}}% \def\file##1{\realbackslash file{##1}}% \def\kbd##1{\realbackslash kbd{##1}}% % These are redefined because @smartitalic wouldn't work inside xdef. 
\def\i##1{\realbackslash i{##1}}% \def\cite##1{\realbackslash cite{##1}}% \def\var##1{\realbackslash var{##1}}% \def\emph##1{\realbackslash emph{##1}}% \def\dfn##1{\realbackslash dfn{##1}}% } \newcount\absseclevel % used to calculate proper heading level \newcount\secbase\secbase=0 % @raise/lowersections modify this count % @raisesections: treat @section as chapter, @subsection as section, etc. \def\raisesections{\global\advance\secbase by -1} \let\up=\raisesections % original BFox name % @lowersections: treat @chapter as section, @section as subsection, etc. \def\lowersections{\global\advance\secbase by 1} \let\down=\lowersections % original BFox name % Choose a numbered-heading macro % #1 is heading level if unmodified by @raisesections or @lowersections % #2 is text for heading \def\numhead#1#2{\absseclevel=\secbase\advance\absseclevel by #1 \ifcase\absseclevel \chapterzzz{#2} \or \seczzz{#2} \or \numberedsubseczzz{#2} \or \numberedsubsubseczzz{#2} \else \ifnum \absseclevel<0 \chapterzzz{#2} \else \numberedsubsubseczzz{#2} \fi \fi } % like \numhead, but chooses appendix heading levels \def\apphead#1#2{\absseclevel=\secbase\advance\absseclevel by #1 \ifcase\absseclevel \appendixzzz{#2} \or \appendixsectionzzz{#2} \or \appendixsubseczzz{#2} \or \appendixsubsubseczzz{#2} \else \ifnum \absseclevel<0 \appendixzzz{#2} \else \appendixsubsubseczzz{#2} \fi \fi } % like \numhead, but chooses numberless heading levels \def\unnmhead#1#2{\absseclevel=\secbase\advance\absseclevel by #1 \ifcase\absseclevel \unnumberedzzz{#2} \or \unnumberedseczzz{#2} \or \unnumberedsubseczzz{#2} \or \unnumberedsubsubseczzz{#2} \else \ifnum \absseclevel<0 \unnumberedzzz{#2} \else \unnumberedsubsubseczzz{#2} \fi \fi } \def\thischaptername{No Chapter Title} \outer\def\chapter{\parsearg\chapteryyy} \def\chapteryyy #1{\numhead0{#1}} % normally numhead0 calls chapterzzz \def\chapterzzz #1{\seccheck{chapter}% \secno=0 \subsecno=0 \subsubsecno=0 \global\advance \chapno by 1 \message{\putwordChapter \the\chapno}% \chapmacro {#1}{\the\chapno}% \gdef\thissection{#1}% \gdef\thischaptername{#1}% % We don't substitute the actual chapter name into \thischapter % because we don't want its macros evaluated now. \xdef\thischapter{\putwordChapter{} \the\chapno: \noexpand\thischaptername}% {\chapternofonts% \toks0 = {#1}% \edef\temp{{\realbackslash chapentry{\the\toks0}{\the\chapno}{\noexpand\folio}}}% \escapechar=`\\% \write \contentsfile \temp % \donoderef % \global\let\section = \numberedsec \global\let\subsection = \numberedsubsec \global\let\subsubsection = \numberedsubsubsec }} \outer\def\appendix{\parsearg\appendixyyy} \def\appendixyyy #1{\apphead0{#1}} % normally apphead0 calls appendixzzz \def\appendixzzz #1{\seccheck{appendix}% \secno=0 \subsecno=0 \subsubsecno=0 \global\advance \appendixno by 1 \message{Appendix \appendixletter}% \chapmacro {#1}{\putwordAppendix{} \appendixletter}% \gdef\thissection{#1}% \gdef\thischaptername{#1}% \xdef\thischapter{\putwordAppendix{} \appendixletter: \noexpand\thischaptername}% {\chapternofonts% \toks0 = {#1}% \edef\temp{{\realbackslash chapentry{\the\toks0}% {\putwordAppendix{} \appendixletter}{\noexpand\folio}}}% \escapechar=`\\% \write \contentsfile \temp % \appendixnoderef % \global\let\section = \appendixsec \global\let\subsection = \appendixsubsec \global\let\subsubsection = \appendixsubsubsec }} % @centerchap is like @unnumbered, but the heading is centered. 
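% For example (hypothetical input):
%   @centerchap Acknowledgements
% typesets like @unnumbered, but the title is centered by
% \centerchapmacro (see \centerchfplain below).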
\outer\def\centerchap{\parsearg\centerchapyyy}
\def\centerchapyyy #1{{\let\unnumbchapmacro=\centerchapmacro \unnumberedyyy{#1}}}

\outer\def\top{\parsearg\unnumberedyyy}
\outer\def\unnumbered{\parsearg\unnumberedyyy}
\def\unnumberedyyy #1{\unnmhead0{#1}} % normally unnmhead0 calls unnumberedzzz
\def\unnumberedzzz #1{\seccheck{unnumbered}%
\secno=0 \subsecno=0 \subsubsecno=0
%
% This used to be simply \message{#1}, but TeX fully expands the
% argument to \message.  Therefore, if #1 contained @-commands, TeX
% expanded them.  For example, in `@unnumbered The @cite{Book}', TeX
% expanded @cite (which turns out to cause errors because \cite is meant
% to be executed, not expanded).
%
% Anyway, we don't want the fully-expanded definition of @cite to appear
% as a result of the \message, we just want `@cite' itself.  We use
% \the to achieve this: TeX expands \the <token register> only once,
% simply yielding the contents of the <register>.
\toks0 = {#1}\message{(\the\toks0)}%
%
\unnumbchapmacro {#1}%
\gdef\thischapter{#1}\gdef\thissection{#1}%
{\chapternofonts%
\toks0 = {#1}%
\edef\temp{{\realbackslash unnumbchapentry{\the\toks0}{\noexpand\folio}}}%
\escapechar=`\\%
\write \contentsfile \temp %
\unnumbnoderef %
\global\let\section = \unnumberedsec
\global\let\subsection = \unnumberedsubsec
\global\let\subsubsection = \unnumberedsubsubsec
}}

\outer\def\numberedsec{\parsearg\secyyy}
\def\secyyy #1{\numhead1{#1}} % normally calls seczzz
\def\seczzz #1{\seccheck{section}%
\subsecno=0 \subsubsecno=0 \global\advance \secno by 1 %
\gdef\thissection{#1}\secheading {#1}{\the\chapno}{\the\secno}%
{\chapternofonts%
\toks0 = {#1}%
\edef\temp{{\realbackslash secentry %
{\the\toks0}{\the\chapno}{\the\secno}{\noexpand\folio}}}%
\escapechar=`\\%
\write \contentsfile \temp %
\donoderef %
\penalty 10000 %
}}

\outer\def\appendixsection{\parsearg\appendixsecyyy}
\outer\def\appendixsec{\parsearg\appendixsecyyy}
\def\appendixsecyyy #1{\apphead1{#1}} % normally calls appendixsectionzzz
\def\appendixsectionzzz #1{\seccheck{appendixsection}%
\subsecno=0 \subsubsecno=0 \global\advance \secno by 1 %
\gdef\thissection{#1}\secheading {#1}{\appendixletter}{\the\secno}%
{\chapternofonts%
\toks0 = {#1}%
\edef\temp{{\realbackslash secentry %
{\the\toks0}{\appendixletter}{\the\secno}{\noexpand\folio}}}%
\escapechar=`\\%
\write \contentsfile \temp %
\appendixnoderef %
\penalty 10000 %
}}

\outer\def\unnumberedsec{\parsearg\unnumberedsecyyy}
\def\unnumberedsecyyy #1{\unnmhead1{#1}} % normally calls unnumberedseczzz
\def\unnumberedseczzz #1{\seccheck{unnumberedsec}%
\plainsecheading {#1}\gdef\thissection{#1}%
{\chapternofonts%
\toks0 = {#1}%
\edef\temp{{\realbackslash unnumbsecentry{\the\toks0}{\noexpand\folio}}}%
\escapechar=`\\%
\write \contentsfile \temp %
\unnumbnoderef %
\penalty 10000 %
}}

\outer\def\numberedsubsec{\parsearg\numberedsubsecyyy}
\def\numberedsubsecyyy #1{\numhead2{#1}} % normally calls numberedsubseczzz
\def\numberedsubseczzz #1{\seccheck{subsection}%
\gdef\thissection{#1}\subsubsecno=0 \global\advance \subsecno by 1 %
\subsecheading {#1}{\the\chapno}{\the\secno}{\the\subsecno}%
{\chapternofonts%
\toks0 = {#1}%
\edef\temp{{\realbackslash subsecentry %
{\the\toks0}{\the\chapno}{\the\secno}{\the\subsecno}{\noexpand\folio}}}%
\escapechar=`\\%
\write \contentsfile \temp %
\donoderef %
\penalty 10000 %
}}

\outer\def\appendixsubsec{\parsearg\appendixsubsecyyy}
\def\appendixsubsecyyy #1{\apphead2{#1}} % normally calls appendixsubseczzz
\def\appendixsubseczzz #1{\seccheck{appendixsubsec}%
\gdef\thissection{#1}\subsubsecno=0 \global\advance \subsecno
by 1 % \subsecheading {#1}{\appendixletter}{\the\secno}{\the\subsecno}% {\chapternofonts% \toks0 = {#1}% \edef\temp{{\realbackslash subsecentry % {\the\toks0}{\appendixletter}{\the\secno}{\the\subsecno}{\noexpand\folio}}}% \escapechar=`\\% \write \contentsfile \temp % \appendixnoderef % \penalty 10000 % }} \outer\def\unnumberedsubsec{\parsearg\unnumberedsubsecyyy} \def\unnumberedsubsecyyy #1{\unnmhead2{#1}} %normally calls unnumberedsubseczzz \def\unnumberedsubseczzz #1{\seccheck{unnumberedsubsec}% \plainsubsecheading {#1}\gdef\thissection{#1}% {\chapternofonts% \toks0 = {#1}% \edef\temp{{\realbackslash unnumbsubsecentry{\the\toks0}{\noexpand\folio}}}% \escapechar=`\\% \write \contentsfile \temp % \unnumbnoderef % \penalty 10000 % }} \outer\def\numberedsubsubsec{\parsearg\numberedsubsubsecyyy} \def\numberedsubsubsecyyy #1{\numhead3{#1}} % normally numberedsubsubseczzz \def\numberedsubsubseczzz #1{\seccheck{subsubsection}% \gdef\thissection{#1}\global\advance \subsubsecno by 1 % \subsubsecheading {#1} {\the\chapno}{\the\secno}{\the\subsecno}{\the\subsubsecno}% {\chapternofonts% \toks0 = {#1}% \edef\temp{{\realbackslash subsubsecentry{\the\toks0} {\the\chapno}{\the\secno}{\the\subsecno}{\the\subsubsecno} {\noexpand\folio}}}% \escapechar=`\\% \write \contentsfile \temp % \donoderef % \penalty 10000 % }} \outer\def\appendixsubsubsec{\parsearg\appendixsubsubsecyyy} \def\appendixsubsubsecyyy #1{\apphead3{#1}} % normally appendixsubsubseczzz \def\appendixsubsubseczzz #1{\seccheck{appendixsubsubsec}% \gdef\thissection{#1}\global\advance \subsubsecno by 1 % \subsubsecheading {#1} {\appendixletter}{\the\secno}{\the\subsecno}{\the\subsubsecno}% {\chapternofonts% \toks0 = {#1}% \edef\temp{{\realbackslash subsubsecentry{\the\toks0}% {\appendixletter} {\the\secno}{\the\subsecno}{\the\subsubsecno}{\noexpand\folio}}}% \escapechar=`\\% \write \contentsfile \temp % \appendixnoderef % \penalty 10000 % }} \outer\def\unnumberedsubsubsec{\parsearg\unnumberedsubsubsecyyy} \def\unnumberedsubsubsecyyy #1{\unnmhead3{#1}} %normally unnumberedsubsubseczzz \def\unnumberedsubsubseczzz #1{\seccheck{unnumberedsubsubsec}% \plainsubsubsecheading {#1}\gdef\thissection{#1}% {\chapternofonts% \toks0 = {#1}% \edef\temp{{\realbackslash unnumbsubsubsecentry{\the\toks0}{\noexpand\folio}}}% \escapechar=`\\% \write \contentsfile \temp % \unnumbnoderef % \penalty 10000 % }} % These are variants which are not "outer", so they can appear in @ifinfo. % Actually, they should now be obsolete; ordinary section commands should work. \def\infotop{\parsearg\unnumberedzzz} \def\infounnumbered{\parsearg\unnumberedzzz} \def\infounnumberedsec{\parsearg\unnumberedseczzz} \def\infounnumberedsubsec{\parsearg\unnumberedsubseczzz} \def\infounnumberedsubsubsec{\parsearg\unnumberedsubsubseczzz} \def\infoappendix{\parsearg\appendixzzz} \def\infoappendixsec{\parsearg\appendixseczzz} \def\infoappendixsubsec{\parsearg\appendixsubseczzz} \def\infoappendixsubsubsec{\parsearg\appendixsubsubseczzz} \def\infochapter{\parsearg\chapterzzz} \def\infosection{\parsearg\sectionzzz} \def\infosubsection{\parsearg\subsectionzzz} \def\infosubsubsection{\parsearg\subsubsectionzzz} % These macros control what the section commands do, according % to what kind of chapter we are in (ordinary, appendix, or unnumbered). % Define them by default for a numbered chapter. 
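% For instance (hypothetical input), after
%   @appendix Invoking
% a following
%   @section Options
% is typeset by \appendixsec as `A.1 Options', because \appendixzzz
% above did \global\let\section = \appendixsec.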
\global\let\section = \numberedsec \global\let\subsection = \numberedsubsec \global\let\subsubsection = \numberedsubsubsec % Define @majorheading, @heading and @subheading % NOTE on use of \vbox for chapter headings, section headings, and % such: % 1) We use \vbox rather than the earlier \line to permit % overlong headings to fold. % 2) \hyphenpenalty is set to 10000 because hyphenation in a % heading is obnoxious; this forbids it. % 3) Likewise, headings look best if no \parindent is used, and % if justification is not attempted. Hence \raggedright. \def\majorheading{\parsearg\majorheadingzzz} \def\majorheadingzzz #1{% {\advance\chapheadingskip by 10pt \chapbreak }% {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000 \parindent=0pt\raggedright \rm #1\hfill}}\bigskip \par\penalty 200} \def\chapheading{\parsearg\chapheadingzzz} \def\chapheadingzzz #1{\chapbreak % {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000 \parindent=0pt\raggedright \rm #1\hfill}}\bigskip \par\penalty 200} % @heading, @subheading, @subsubheading. \def\heading{\parsearg\plainsecheading} \def\subheading{\parsearg\plainsubsecheading} \def\subsubheading{\parsearg\plainsubsubsecheading} % These macros generate a chapter, section, etc. heading only % (including whitespace, linebreaking, etc. around it), % given all the information in convenient, parsed form. %%% Args are the skip and penalty (usually negative) \def\dobreak#1#2{\par\ifdim\lastskip<#1\removelastskip\penalty#2\vskip#1\fi} \def\setchapterstyle #1 {\csname CHAPF#1\endcsname} %%% Define plain chapter starts, and page on/off switching for it % Parameter controlling skip before chapter headings (if needed) \newskip\chapheadingskip \def\chapbreak{\dobreak \chapheadingskip {-4000}} \def\chappager{\par\vfill\supereject} \def\chapoddpage{\chappager \ifodd\pageno \else \hbox to 0pt{} \chappager\fi} \def\setchapternewpage #1 {\csname CHAPPAG#1\endcsname} \def\CHAPPAGoff{ \global\let\contentsalignmacro = \chappager \global\let\pchapsepmacro=\chapbreak \global\let\pagealignmacro=\chappager} \def\CHAPPAGon{ \global\let\contentsalignmacro = \chappager \global\let\pchapsepmacro=\chappager \global\let\pagealignmacro=\chappager \global\def\HEADINGSon{\HEADINGSsingle}} \def\CHAPPAGodd{ \global\let\contentsalignmacro = \chapoddpage \global\let\pchapsepmacro=\chapoddpage \global\let\pagealignmacro=\chapoddpage \global\def\HEADINGSon{\HEADINGSdouble}} \CHAPPAGon \def\CHAPFplain{ \global\let\chapmacro=\chfplain \global\let\unnumbchapmacro=\unnchfplain \global\let\centerchapmacro=\centerchfplain} % Plain chapter opening. % #1 is the text, #2 the chapter number or empty if unnumbered. \def\chfplain#1#2{% \pchapsepmacro {% \chapfonts \rm \def\chapnum{#2}% \setbox0 = \hbox{#2\ifx\chapnum\empty\else\enspace\fi}% \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \raggedright \hangindent = \wd0 \centerparametersmaybe \unhbox0 #1\par}% }% \nobreak\bigskip % no page break after a chapter title \nobreak } % Plain opening for unnumbered. \def\unnchfplain#1{\chfplain{#1}{}} % @centerchap -- centered and unnumbered. 
\let\centerparametersmaybe = \relax \def\centerchfplain#1{{% \def\centerparametersmaybe{% \advance\rightskip by 3\rightskip \leftskip = \rightskip \parfillskip = 0pt }% \chfplain{#1}{}% }} \CHAPFplain % The default \def\unnchfopen #1{% \chapoddpage {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000 \parindent=0pt\raggedright \rm #1\hfill}}\bigskip \par\penalty 10000 % } \def\chfopen #1#2{\chapoddpage {\chapfonts \vbox to 3in{\vfil \hbox to\hsize{\hfil #2} \hbox to\hsize{\hfil #1} \vfil}}% \par\penalty 5000 % } \def\centerchfopen #1{% \chapoddpage {\chapfonts \vbox{\hyphenpenalty=10000\tolerance=5000 \parindent=0pt \hfill {\rm #1}\hfill}}\bigskip \par\penalty 10000 % } \def\CHAPFopen{ \global\let\chapmacro=\chfopen \global\let\unnumbchapmacro=\unnchfopen \global\let\centerchapmacro=\centerchfopen} % Section titles. \newskip\secheadingskip \def\secheadingbreak{\dobreak \secheadingskip {-1000}} \def\secheading#1#2#3{\sectionheading{sec}{#2.#3}{#1}} \def\plainsecheading#1{\sectionheading{sec}{}{#1}} % Subsection titles. \newskip \subsecheadingskip \def\subsecheadingbreak{\dobreak \subsecheadingskip {-500}} \def\subsecheading#1#2#3#4{\sectionheading{subsec}{#2.#3.#4}{#1}} \def\plainsubsecheading#1{\sectionheading{subsec}{}{#1}} % Subsubsection titles. \let\subsubsecheadingskip = \subsecheadingskip \let\subsubsecheadingbreak = \subsecheadingbreak \def\subsubsecheading#1#2#3#4#5{\sectionheading{subsubsec}{#2.#3.#4.#5}{#1}} \def\plainsubsubsecheading#1{\sectionheading{subsubsec}{}{#1}} % Print any size section title. % % #1 is the section type (sec/subsec/subsubsec), #2 is the section % number (maybe empty), #3 the text. \def\sectionheading#1#2#3{% {% \expandafter\advance\csname #1headingskip\endcsname by \parskip \csname #1headingbreak\endcsname }% {% % Switch to the right set of fonts. \csname #1fonts\endcsname \rm % % Only insert the separating space if we have a section number. \def\secnum{#2}% \setbox0 = \hbox{#2\ifx\secnum\empty\else\enspace\fi}% % \vbox{\hyphenpenalty=10000 \tolerance=5000 \parindent=0pt \raggedright \hangindent = \wd0 % zero if no section number \unhbox0 #3}% }% \ifdim\parskip<10pt \nobreak\kern10pt\nobreak\kern-\parskip\fi \nobreak } \message{toc printing,} % Finish up the main text and prepare to read what we've written % to \contentsfile. \newskip\contentsrightmargin \contentsrightmargin=1in \def\startcontents#1{% % If @setchapternewpage on, and @headings double, the contents should % start on an odd page, unlike chapters. Thus, we maintain % \contentsalignmacro in parallel with \pagealignmacro. % From: Torbjorn Granlund \contentsalignmacro \immediate\closeout \contentsfile \ifnum \pageno>0 \pageno = -1 % Request roman numbered pages. \fi % Don't need to put `Contents' or `Short Contents' in the headline. % It is abundantly clear what they are. \unnumbchapmacro{#1}\def\thischapter{}% \begingroup % Set up to handle contents files properly. \catcode`\\=0 \catcode`\{=1 \catcode`\}=2 \catcode`\@=11 % We can't do this, because then an actual ^ in a section % title fails, e.g., @chapter ^ -- exponentiation. --karl, 9jul97. %\catcode`\^=7 % to see ^^e4 as \"a etc. juha@piuha.ydi.vtt.fi \raggedbottom % Worry more about breakpoints than the bottom. \advance\hsize by -\contentsrightmargin % Don't use the full line length. } % Normal (long) toc. \outer\def\contents{% \startcontents{\putwordTableofContents}% \input \jobname.toc \endgroup \vfill \eject } % And just the chapters. 
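% For example, a manual's outer file conventionally ends with
% (hypothetical input):
%   @shortcontents
%   @contents
%   @bye
% @shortcontents (\summarycontents below) keeps only the chapter-level
% lines of \jobname.toc, while @contents typesets the whole file.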
\outer\def\summarycontents{% \startcontents{\putwordShortContents}% % \let\chapentry = \shortchapentry \let\unnumbchapentry = \shortunnumberedentry % We want a true roman here for the page numbers. \secfonts \let\rm=\shortcontrm \let\bf=\shortcontbf \let\sl=\shortcontsl \rm \hyphenpenalty = 10000 \advance\baselineskip by 1pt % Open it up a little. \def\secentry ##1##2##3##4{} \def\unnumbsecentry ##1##2{} \def\subsecentry ##1##2##3##4##5{} \def\unnumbsubsecentry ##1##2{} \def\subsubsecentry ##1##2##3##4##5##6{} \def\unnumbsubsubsecentry ##1##2{} \input \jobname.toc \endgroup \vfill \eject } \let\shortcontents = \summarycontents % These macros generate individual entries in the table of contents. % The first argument is the chapter or section name. % The last argument is the page number. % The arguments in between are the chapter number, section number, ... % Chapter-level things, for both the long and short contents. \def\chapentry#1#2#3{\dochapentry{#2\labelspace#1}{#3}} % See comments in \dochapentry re vbox and related settings \def\shortchapentry#1#2#3{% \tocentry{\shortchaplabel{#2}\labelspace #1}{\doshortpageno{#3}}% } % Typeset the label for a chapter or appendix for the short contents. % The arg is, e.g. `Appendix A' for an appendix, or `3' for a chapter. % We could simplify the code here by writing out an \appendixentry % command in the toc file for appendices, instead of using \chapentry % for both, but it doesn't seem worth it. \setbox0 = \hbox{\shortcontrm \putwordAppendix } \newdimen\shortappendixwidth \shortappendixwidth = \wd0 \def\shortchaplabel#1{% % We typeset #1 in a box of constant width, regardless of the text of % #1, so the chapter titles will come out aligned. \setbox0 = \hbox{#1}% \dimen0 = \ifdim\wd0 > \shortappendixwidth \shortappendixwidth \else 0pt \fi % % This space should be plenty, since a single number is .5em, and the % widest letter (M) is 1em, at least in the Computer Modern fonts. % (This space doesn't include the extra space that gets added after % the label; that gets put in by \shortchapentry above.) \advance\dimen0 by 1.1em \hbox to \dimen0{#1\hfil}% } \def\unnumbchapentry#1#2{\dochapentry{#1}{#2}} \def\shortunnumberedentry#1#2{\tocentry{#1}{\doshortpageno{#2}}} % Sections. \def\secentry#1#2#3#4{\dosecentry{#2.#3\labelspace#1}{#4}} \def\unnumbsecentry#1#2{\dosecentry{#1}{#2}} % Subsections. \def\subsecentry#1#2#3#4#5{\dosubsecentry{#2.#3.#4\labelspace#1}{#5}} \def\unnumbsubsecentry#1#2{\dosubsecentry{#1}{#2}} % And subsubsections. \def\subsubsecentry#1#2#3#4#5#6{% \dosubsubsecentry{#2.#3.#4.#5\labelspace#1}{#6}} \def\unnumbsubsubsecentry#1#2{\dosubsubsecentry{#1}{#2}} % This parameter controls the indentation of the various levels. \newdimen\tocindent \tocindent = 3pc % Now for the actual typesetting. In all these, #1 is the text and #2 is the % page number. % % If the toc has to be broken over pages, we want it to be at chapters % if at all possible; hence the \penalty. 
\def\dochapentry#1#2{%
   \penalty-300 \vskip1\baselineskip plus.33\baselineskip minus.25\baselineskip
   \begingroup
     \chapentryfonts
     \tocentry{#1}{\dopageno{#2}}%
   \endgroup
   \nobreak\vskip .25\baselineskip plus.1\baselineskip
}

\def\dosecentry#1#2{\begingroup
  \secentryfonts \leftskip=\tocindent
  \tocentry{#1}{\dopageno{#2}}%
\endgroup}

\def\dosubsecentry#1#2{\begingroup
  \subsecentryfonts \leftskip=2\tocindent
  \tocentry{#1}{\dopageno{#2}}%
\endgroup}

\def\dosubsubsecentry#1#2{\begingroup
  \subsubsecentryfonts \leftskip=3\tocindent
  \tocentry{#1}{\dopageno{#2}}%
\endgroup}

% Final typesetting of a toc entry; we use the same \entry macro as for
% the index entries, but we want to suppress hyphenation here.  (We
% can't do that in the \entry macro, since index entries might consist
% of hyphenated-identifiers-that-do-not-fit-on-a-line-and-nothing-else.)
\def\tocentry#1#2{\begingroup
  \vskip 0pt plus1pt % allow a little stretch for the sake of nice page breaks
  % Do not use \turnoffactive in these arguments.  Since the toc is
  % typeset in cmr, so characters such as _ would come out wrong; we
  % have to do the usual translation tricks.
  \entry{#1}{#2}%
\endgroup}

% Space between chapter (or whatever) number and the title.
\def\labelspace{\hskip1em \relax}

\def\dopageno#1{{\rm #1}}
\def\doshortpageno#1{{\rm #1}}

\def\chapentryfonts{\secfonts \rm}
\def\secentryfonts{\textfonts}
\let\subsecentryfonts = \textfonts
\let\subsubsecentryfonts = \textfonts

\message{environments,}

% Since these characters are used in examples, it should be an even number of
% \tt widths. Each \tt character is 1en, so two makes it 1em.
% Furthermore, these definitions must come after we define our fonts.
\newbox\dblarrowbox    \newbox\longdblarrowbox
\newbox\pushcharbox    \newbox\bullbox
\newbox\equivbox       \newbox\errorbox

%{\tentt
%\global\setbox\dblarrowbox = \hbox to 1em{\hfil$\Rightarrow$\hfil}
%\global\setbox\longdblarrowbox = \hbox to 1em{\hfil$\mapsto$\hfil}
%\global\setbox\pushcharbox = \hbox to 1em{\hfil$\dashv$\hfil}
%\global\setbox\equivbox = \hbox to 1em{\hfil$\ptexequiv$\hfil}
% Adapted from the manmac format (p.420 of TeXbook)
%\global\setbox\bullbox = \hbox to 1em{\kern.15em\vrule height .75ex width .85ex
% depth .1ex\hfil}
%}

% @point{}, @result{}, @expansion{}, @print{}, @equiv{}.
\def\point{$\star$}
\def\result{\leavevmode\raise.15ex\hbox to 1em{\hfil$\Rightarrow$\hfil}}
\def\expansion{\leavevmode\raise.1ex\hbox to 1em{\hfil$\mapsto$\hfil}}
\def\print{\leavevmode\lower.1ex\hbox to 1em{\hfil$\dashv$\hfil}}
\def\equiv{\leavevmode\lower.1ex\hbox to 1em{\hfil$\ptexequiv$\hfil}}

% Adapted from the TeXbook's \boxit.
{\tentt \global\dimen0 = 3em}% Width of the box.
\dimen2 = .55pt % Thickness of rules
% The text. (`r' is open on the right, `e' somewhat less so on the left.)
\setbox0 = \hbox{\kern-.75pt \tensf error\kern-1.5pt}

\global\setbox\errorbox=\hbox to \dimen0{\hfil
   \hsize = \dimen0 \advance\hsize by -5.8pt % Space to left+right.
   \advance\hsize by -2\dimen2 % Rules.
   \vbox{
      \hrule height\dimen2
      \hbox{\vrule width\dimen2 \kern3pt % Space to left of text.
         \vtop{\kern2.4pt \box0 \kern2.4pt}% Space above/below.
         \kern3pt\vrule width\dimen2}% Space to right.
      \hrule height\dimen2}
    \hfil}

% The @error{} command.
\def\error{\leavevmode\lower.7ex\copy\errorbox}

% @tex ... @end tex escapes into raw TeX temporarily.
% One exception: @ is still an escape character, so that @end tex works.
% But \@ or @@ will get a plain tex @ character.
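% For example (hypothetical input):
%   @tex
%   $$e^{i\pi} + 1 = 0$$
%   @end tex
% Inside the group, \ is the escape character again, so the plain TeX
% display math above passes through unchanged.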
\def\tex{\begingroup
\catcode `\\=0 \catcode `\{=1 \catcode `\}=2
\catcode `\$=3 \catcode `\&=4 \catcode `\#=6
\catcode `\^=7 \catcode `\_=8 \catcode `\~=13 \let~=\tie
\catcode `\%=14
\catcode 43=12 % plus
\catcode`\"=12
\catcode`\==12
\catcode`\|=12
\catcode`\<=12
\catcode`\>=12
\escapechar=`\\
%
\let\b=\ptexb
\let\bullet=\ptexbullet
\let\c=\ptexc
\let\,=\ptexcomma
\let\.=\ptexdot
\let\dots=\ptexdots
\let\equiv=\ptexequiv
\let\!=\ptexexclam
\let\i=\ptexi
\let\{=\ptexlbrace
\let\+=\tabalign
\let\}=\ptexrbrace
\let\*=\ptexstar
\let\t=\ptext
%
\def\endldots{\mathinner{\ldots\ldots\ldots\ldots}}%
\def\enddots{\relax\ifmmode\endldots\else$\mathsurround=0pt \endldots\,$\fi}%
\def\@{@}%
\let\Etex=\endgroup}

% Define @lisp ... @endlisp.
% @lisp does a \begingroup so it can rebind things,
% including the definition of @endlisp (which normally is erroneous).

% Amount to narrow the margins by for @lisp.
\newskip\lispnarrowing \lispnarrowing=0.4in

% This is the definition that ^^M gets inside @lisp, @example, and other
% such environments.  \null is better than a space, since it doesn't
% have any width.
\def\lisppar{\null\endgraf}

% Make each space character in the input produce a normal interword
% space in the output.  Don't allow a line break at this space, as this
% is used only in environments like @example, where each line of input
% should produce a line of output anyway.
%
{\obeyspaces %
\gdef\sepspaces{\obeyspaces\let =\tie}}

% Define \obeyedspace to be our active space, whatever it is.  This is
% for use in \parsearg.
{\sepspaces%
\global\let\obeyedspace= }

% This space is always present above and below environments.
\newskip\envskipamount \envskipamount = 0pt

% Make spacing above and below environment symmetrical.  We use \parskip here
% to help in doing that, since in @example-like environments \parskip
% is reset to zero; thus the \afterenvbreak inserts no space -- but the
% start of the next paragraph will insert \parskip
%
\def\aboveenvbreak{{\advance\envskipamount by \parskip
\endgraf \ifdim\lastskip<\envskipamount
\removelastskip \penalty-50 \vskip\envskipamount \fi}}

\let\afterenvbreak = \aboveenvbreak

% \nonarrowing is a flag.  If "set", @lisp etc don't narrow margins.
\let\nonarrowing=\relax

% @cartouche ... @end cartouche: draw rectangle w/rounded corners around
% environment contents.
\font\circle=lcircle10
\newdimen\circthick
\newdimen\cartouter\newdimen\cartinner
\newskip\normbskip\newskip\normpskip\newskip\normlskip
\circthick=\fontdimen8\circle
%
\def\ctl{{\circle\char'013\hskip -6pt}}% 6pt from pl file: 1/2charwidth
\def\ctr{{\hskip 6pt\circle\char'010}}
\def\cbl{{\circle\char'012\hskip -6pt}}
\def\cbr{{\hskip 6pt\circle\char'011}}
\def\carttop{\hbox to \cartouter{\hskip\lskip
        \ctl\leaders\hrule height\circthick\hfil\ctr
        \hskip\rskip}}
\def\cartbot{\hbox to \cartouter{\hskip\lskip
        \cbl\leaders\hrule height\circthick\hfil\cbr
        \hskip\rskip}}
%
\newskip\lskip\newskip\rskip

\long\def\cartouche{%
\begingroup
        \lskip=\leftskip \rskip=\rightskip
        \leftskip=0pt\rightskip=0pt %we want these *outside*.
        \cartinner=\hsize \advance\cartinner by-\lskip
        \advance\cartinner by-\rskip
        \cartouter=\hsize \advance\cartouter by 18.4pt % allow for 3pt kerns on either
%                                                        side, and for 6pt waste from
%                                                        each corner char, and rule thickness
        \normbskip=\baselineskip \normpskip=\parskip \normlskip=\lineskip
        % Flag to tell @lisp, etc., not to narrow margin.
        \let\nonarrowing=\comment
        \vbox\bgroup
        \baselineskip=0pt\parskip=0pt\lineskip=0pt
        \carttop
        \hbox\bgroup
                \hskip\lskip
                \vrule\kern3pt
                \vbox\bgroup
                        \hsize=\cartinner
                        \kern3pt
                        \begingroup
                                \baselineskip=\normbskip
                                \lineskip=\normlskip
                                \parskip=\normpskip
                                \vskip -\parskip
\def\Ecartouche{%
                        \endgroup
                        \kern3pt
                \egroup
                \kern3pt\vrule
                \hskip\rskip
        \egroup
        \cartbot
\egroup
\endgroup
}}

% This macro is called at the beginning of all the @example variants,
% inside a group.
\def\nonfillstart{%
\aboveenvbreak
\inENV % This group ends at the end of the body
\hfuzz = 12pt % Don't be fussy
\sepspaces % Make spaces be word-separators rather than space tokens.
\singlespace
\let\par = \lisppar % don't ignore blank lines
\obeylines % each line of input is a line of output
\parskip = 0pt
\parindent = 0pt
\emergencystretch = 0pt % don't try to avoid overfull boxes
% @cartouche defines \nonarrowing to inhibit narrowing
% at next level down.
\ifx\nonarrowing\relax
\advance \leftskip by \lispnarrowing
\exdentamount=\lispnarrowing
\let\exdent=\nofillexdent
\let\nonarrowing=\relax
\fi
}

% To end an @example-like environment, we first end the paragraph
% (via \afterenvbreak's vertical glue), and then the group.  That way we
% keep the zero \parskip that the environments set -- \parskip glue
% will be inserted at the beginning of the next paragraph in the
% document, after the environment.
%
\def\nonfillfinish{\afterenvbreak\endgroup}%

\def\lisp{\begingroup
\nonfillstart
\let\Elisp = \nonfillfinish
\tt
% Make @kbd do something special, if requested.
\let\kbdfont\kbdexamplefont
\rawbackslash % have \ input char produce \ char from current font
\gobble
}

% Define the \E... control sequence only if we are inside the
% environment, so the error checking in \end will work.
%
% We must call \lisp last in the definition, since it reads the
% return following the @example (or whatever) command.
%
\def\example{\begingroup \def\Eexample{\nonfillfinish\endgroup}\lisp}
\def\smallexample{\begingroup \def\Esmallexample{\nonfillfinish\endgroup}\lisp}
\def\smalllisp{\begingroup \def\Esmalllisp{\nonfillfinish\endgroup}\lisp}

% @smallexample and @smalllisp.  This is not used unless the @smallbook
% command is given.  Originally contributed by Pavel@xerox.
%
\def\smalllispx{\begingroup
\nonfillstart
\let\Esmalllisp = \nonfillfinish
\let\Esmallexample = \nonfillfinish
%
% Smaller fonts for small examples.
\indexfonts \tt
\rawbackslash % make \ output the \ character from the current font (tt)
\gobble
}

% This is @display; same as @lisp except use roman font.
%
\def\display{\begingroup
\nonfillstart
\let\Edisplay = \nonfillfinish
\gobble
}

% This is @format; same as @display except don't narrow margins.
%
\def\format{\begingroup
\let\nonarrowing = t
\nonfillstart
\let\Eformat = \nonfillfinish
\gobble
}

% @flushleft (same as @format) and @flushright.
%
\def\flushleft{\begingroup
\let\nonarrowing = t
\nonfillstart
\let\Eflushleft = \nonfillfinish
\gobble
}
\def\flushright{\begingroup
\let\nonarrowing = t
\nonfillstart
\let\Eflushright = \nonfillfinish
\advance\leftskip by 0pt plus 1fill
\gobble}

% @quotation does normal linebreaking (hence we can't use \nonfillstart)
% and narrows the margins.
%
\def\quotation{%
\begingroup\inENV %This group ends at the end of the @quotation body
{\parskip=0pt \aboveenvbreak}% because \aboveenvbreak inserts \parskip
\singlespace
\parindent=0pt
% We have retained a nonzero parskip for the environment, since we're
% doing normal filling.  So to avoid extra space below the environment...
\def\Equotation{\parskip = 0pt \nonfillfinish}%
%
% @cartouche defines \nonarrowing to inhibit narrowing at next level down.
\ifx\nonarrowing\relax
\advance\leftskip by \lispnarrowing
\advance\rightskip by \lispnarrowing
\exdentamount = \lispnarrowing
\let\nonarrowing = \relax
\fi
}

\message{defuns,}
% Define formatter for defuns
% First, allow user to change definition object font (\df) internally
\def\setdeffont #1 {\csname DEF#1\endcsname}

\newskip\defbodyindent \defbodyindent=.4in
\newskip\defargsindent \defargsindent=50pt
\newskip\deftypemargin \deftypemargin=12pt
\newskip\deflastargmargin \deflastargmargin=18pt

\newcount\parencount
% define \functionparens, which makes ( and ) and & do special things.
% \functionparens affects the group it is contained in.
\def\activeparens{%
\catcode`\(=\active \catcode`\)=\active \catcode`\&=\active
\catcode`\[=\active \catcode`\]=\active}

% Make control sequences which act like normal parenthesis chars.
\let\lparen = ( \let\rparen = )

{\activeparens % Now, smart parens don't turn on until &foo (see \amprm)

% Be sure that we always have a definition for `(', etc.  For example,
% if the fn name has parens in it, \boldbrax will not be in effect yet,
% so TeX would otherwise complain about undefined control sequence.
\global\let(=\lparen \global\let)=\rparen
\global\let[=\lbrack \global\let]=\rbrack

\gdef\functionparens{\boldbrax\let&=\amprm\parencount=0 }
\gdef\boldbrax{\let(=\opnr\let)=\clnr\let[=\lbrb\let]=\rbrb}
% This is used to turn on special parens
% but make & act ordinary (given that it's active).
\gdef\boldbraxnoamp{\let(=\opnr\let)=\clnr\let[=\lbrb\let]=\rbrb\let&=\ampnr}

% Definitions of (, ) and & used in args for functions.
% This is the definition of ( outside of all parentheses.
\gdef\oprm#1 {{\rm\char`\(}#1 \bf \let(=\opnested
\global\advance\parencount by 1 }
%
% This is the definition of ( when already inside a level of parens.
\gdef\opnested{\char`\(\global\advance\parencount by 1 }
%
\gdef\clrm{% Print a paren in roman if it is taking us back to depth of 0.
% also in that case restore the outer-level definition of (.
\ifnum \parencount=1 {\rm \char `\)}\sl \let(=\oprm \else \char `\) \fi
\global\advance \parencount by -1 }
% If we encounter &foo, then turn on ()-hacking afterwards
\gdef\amprm#1 {{\rm\&#1}\let(=\oprm \let)=\clrm\ }
%
\gdef\normalparens{\boldbrax\let&=\ampnr}
} % End of definition inside \activeparens
%% These parens (in \boldbrax) actually are a little bolder than the
%% contained text.  This is especially needed for [ and ]
\def\opnr{{\sf\char`\(}\global\advance\parencount by 1 }
\def\clnr{{\sf\char`\)}\global\advance\parencount by -1 }
\def\ampnr{\&}
\def\lbrb{{\bf\char`\[}}
\def\rbrb{{\bf\char`\]}}

% First, defname, which formats the header line itself.
% #1 should be the function name.
% #2 should be the type of definition, such as "Function".

\def\defname #1#2{%
% Get the values of \leftskip and \rightskip as they were
% outside the @def...
\dimen2=\leftskip \advance\dimen2 by -\defbodyindent \dimen3=\rightskip \advance\dimen3 by -\defbodyindent \noindent % \setbox0=\hbox{\hskip \deflastargmargin{\rm #2}\hskip \deftypemargin}% \dimen0=\hsize \advance \dimen0 by -\wd0 % compute size for first line \dimen1=\hsize \advance \dimen1 by -\defargsindent %size for continuations \parshape 2 0in \dimen0 \defargsindent \dimen1 % % Now output arg 2 ("Function" or some such) % ending at \deftypemargin from the right margin, % but stuck inside a box of width 0 so it does not interfere with linebreaking {% Adjust \hsize to exclude the ambient margins, % so that \rightline will obey them. \advance \hsize by -\dimen2 \advance \hsize by -\dimen3 \rlap{\rightline{{\rm #2}\hskip \deftypemargin}}}% % Make all lines underfull and no complaints: \tolerance=10000 \hbadness=10000 \advance\leftskip by -\defbodyindent \exdentamount=\defbodyindent {\df #1}\enskip % Generate function name } % Actually process the body of a definition % #1 should be the terminating control sequence, such as \Edefun. % #2 should be the "another name" control sequence, such as \defunx. % #3 should be the control sequence that actually processes the header, % such as \defunheader. \def\defparsebody #1#2#3{\begingroup\inENV% Environment for definitionbody \medbreak % % Define the end token that this defining construct specifies % so that it will exit this group. \def#1{\endgraf\endgroup\medbreak}% \def#2{\begingroup\obeylines\activeparens\spacesplit#3}% \parindent=0in \advance\leftskip by \defbodyindent \advance \rightskip by \defbodyindent \exdentamount=\defbodyindent \begingroup % \catcode 61=\active % 61 is `=' \obeylines\activeparens\spacesplit#3} % #1 is the \E... control sequence to end the definition (which we define). % #2 is the \...x control sequence for consecutive fns (which we define). % #3 is the control sequence to call to resume processing. % #4, delimited by the space, is the class name. % \def\defmethparsebody#1#2#3#4 {\begingroup\inENV % \medbreak % % Define the end token that this defining construct specifies % so that it will exit this group. \def#1{\endgraf\endgroup\medbreak}% \def#2##1 {\begingroup\obeylines\activeparens\spacesplit{#3{##1}}}% \parindent=0in \advance\leftskip by \defbodyindent \advance \rightskip by \defbodyindent \exdentamount=\defbodyindent \begingroup\obeylines\activeparens\spacesplit{#3{#4}}} % @deftypemethod has an extra argument that nothing else does. Sigh. % \def\deftypemethparsebody#1#2#3#4 #5 {\begingroup\inENV % \medbreak % % Define the end token that this defining construct specifies % so that it will exit this group. \def#1{\endgraf\endgroup\medbreak}% \def#2##1 {\begingroup\obeylines\activeparens\spacesplit{#3{##1}}}% \parindent=0in \advance\leftskip by \defbodyindent \advance \rightskip by \defbodyindent \exdentamount=\defbodyindent \begingroup\obeylines\activeparens\spacesplit{#3{#4}{#5}}} \def\defopparsebody #1#2#3#4#5 {\begingroup\inENV % \medbreak % % Define the end token that this defining construct specifies % so that it will exit this group. \def#1{\endgraf\endgroup\medbreak}% \def#2##1 ##2 {\def#4{##1}% \begingroup\obeylines\activeparens\spacesplit{#3{##2}}}% \parindent=0in \advance\leftskip by \defbodyindent \advance \rightskip by \defbodyindent \exdentamount=\defbodyindent \begingroup\obeylines\activeparens\spacesplit{#3{#5}}} % These parsing functions are similar to the preceding ones % except that they do not make parens into active characters. % These are used for "variables" since they have no arguments. 
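% For example (hypothetical input):
%   @defvar fill-column
%   This buffer-local variable controls the fill width.
%   @end defvar
% \defvarparsebody grabs `fill-column' with \spacesplit; parens stay
% inactive here, since a variable has no argument list to typeset.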
\def\defvarparsebody #1#2#3{\begingroup\inENV% Environment for definitionbody \medbreak % % Define the end token that this defining construct specifies % so that it will exit this group. \def#1{\endgraf\endgroup\medbreak}% \def#2{\begingroup\obeylines\spacesplit#3}% \parindent=0in \advance\leftskip by \defbodyindent \advance \rightskip by \defbodyindent \exdentamount=\defbodyindent \begingroup % \catcode 61=\active % \obeylines\spacesplit#3} % This is used for \def{tp,vr}parsebody. It could probably be used for % some of the others, too, with some judicious conditionals. % \def\parsebodycommon#1#2#3{% \begingroup\inENV % \medbreak % % Define the end token that this defining construct specifies % so that it will exit this group. \def#1{\endgraf\endgroup\medbreak}% \def#2##1 {\begingroup\obeylines\spacesplit{#3{##1}}}% \parindent=0in \advance\leftskip by \defbodyindent \advance \rightskip by \defbodyindent \exdentamount=\defbodyindent \begingroup\obeylines } \def\defvrparsebody#1#2#3#4 {% \parsebodycommon{#1}{#2}{#3}% \spacesplit{#3{#4}}% } % This loses on `@deftp {Data Type} {struct termios}' -- it thinks the % type is just `struct', because we lose the braces in `{struct % termios}' when \spacesplit reads its undelimited argument. Sigh. % \let\deftpparsebody=\defvrparsebody % % So, to get around this, we put \empty in with the type name. That % way, TeX won't find exactly `{...}' as an undelimited argument, and % won't strip off the braces. % \def\deftpparsebody #1#2#3#4 {% \parsebodycommon{#1}{#2}{#3}% \spacesplit{\parsetpheaderline{#3{#4}}}\empty } % Fine, but then we have to eventually remove the \empty *and* the % braces (if any). That's what this does. % \def\removeemptybraces\empty#1\relax{#1} % After \spacesplit has done its work, this is called -- #1 is the final % thing to call, #2 the type name (which starts with \empty), and #3 % (which might be empty) the arguments. % \def\parsetpheaderline#1#2#3{% #1{\removeemptybraces#2\relax}{#3}% }% \def\defopvarparsebody #1#2#3#4#5 {\begingroup\inENV % \medbreak % % Define the end token that this defining construct specifies % so that it will exit this group. \def#1{\endgraf\endgroup\medbreak}% \def#2##1 ##2 {\def#4{##1}% \begingroup\obeylines\spacesplit{#3{##2}}}% \parindent=0in \advance\leftskip by \defbodyindent \advance \rightskip by \defbodyindent \exdentamount=\defbodyindent \begingroup\obeylines\spacesplit{#3{#5}}} % Split up #2 at the first space token. % call #1 with two arguments: % the first is all of #2 before the space token, % the second is all of #2 after that space token. % If #2 contains no space token, all of it is passed as the first arg % and the second is passed as empty. {\obeylines \gdef\spacesplit#1#2^^M{\endgroup\spacesplitfoo{#1}#2 \relax\spacesplitfoo}% \long\gdef\spacesplitfoo#1#2 #3#4\spacesplitfoo{% \ifx\relax #3% #1{#2}{}\else #1{#2}{#3#4}\fi}} % So much for the things common to all kinds of definitions. % Define @defun. % First, define the processing that is wanted for arguments of \defun % Use this to expand the args and terminate the paragraph they make up \def\defunargs #1{\functionparens \sl % Expand, preventing hyphenation at `-' chars. % Note that groups don't affect changes in \hyphenchar. \hyphenchar\tensl=0 #1% \hyphenchar\tensl=45 \ifnum\parencount=0 \else \errmessage{Unbalanced parentheses in @def}\fi% \interlinepenalty=10000 \advance\rightskip by 0pt plus 1fil \endgraf\penalty 10000\vskip -\parskip\penalty 10000% } \def\deftypefunargs #1{% % Expand, preventing hyphenation at `-' chars. 
% Note that groups don't affect changes in \hyphenchar. % Use \boldbraxnoamp, not \functionparens, so that & is not special. \boldbraxnoamp \tclose{#1}% avoid \code because of side effects on active chars \interlinepenalty=10000 \advance\rightskip by 0pt plus 1fil \endgraf\penalty 10000\vskip -\parskip\penalty 10000% } % Do complete processing of one @defun or @defunx line already parsed. % @deffn Command forward-char nchars \def\deffn{\defmethparsebody\Edeffn\deffnx\deffnheader} \def\deffnheader #1#2#3{\doind {fn}{\code{#2}}% \begingroup\defname {#2}{#1}\defunargs{#3}\endgroup % \catcode 61=\other % Turn off change made in \defparsebody } % @defun == @deffn Function \def\defun{\defparsebody\Edefun\defunx\defunheader} \def\defunheader #1#2{\doind {fn}{\code{#1}}% Make entry in function index \begingroup\defname {#1}{Function}% \defunargs {#2}\endgroup % \catcode 61=\other % Turn off change made in \defparsebody } % @deftypefun int foobar (int @var{foo}, float @var{bar}) \def\deftypefun{\defparsebody\Edeftypefun\deftypefunx\deftypefunheader} % #1 is the data type. #2 is the name and args. \def\deftypefunheader #1#2{\deftypefunheaderx{#1}#2 \relax} % #1 is the data type, #2 the name, #3 the args. \def\deftypefunheaderx #1#2 #3\relax{% \doind {fn}{\code{#2}}% Make entry in function index \begingroup\defname {\defheaderxcond#1\relax$$$#2}{Function}% \deftypefunargs {#3}\endgroup % \catcode 61=\other % Turn off change made in \defparsebody } % @deftypefn {Library Function} int foobar (int @var{foo}, float @var{bar}) \def\deftypefn{\defmethparsebody\Edeftypefn\deftypefnx\deftypefnheader} % \defheaderxcond#1\relax$$$ % puts #1 in @code, followed by a space, but does nothing if #1 is null. \def\defheaderxcond#1#2$$${\ifx#1\relax\else\code{#1#2} \fi} % #1 is the classification. #2 is the data type. #3 is the name and args. \def\deftypefnheader #1#2#3{\deftypefnheaderx{#1}{#2}#3 \relax} % #1 is the classification, #2 the data type, #3 the name, #4 the args. \def\deftypefnheaderx #1#2#3 #4\relax{% \doind {fn}{\code{#3}}% Make entry in function index \begingroup \normalparens % notably, turn off `&' magic, which prevents % at least some C++ text from working \defname {\defheaderxcond#2\relax$$$#3}{#1}% \deftypefunargs {#4}\endgroup % \catcode 61=\other % Turn off change made in \defparsebody } % @defmac == @deffn Macro \def\defmac{\defparsebody\Edefmac\defmacx\defmacheader} \def\defmacheader #1#2{\doind {fn}{\code{#1}}% Make entry in function index \begingroup\defname {#1}{Macro}% \defunargs {#2}\endgroup % \catcode 61=\other % Turn off change made in \defparsebody } % @defspec == @deffn Special Form \def\defspec{\defparsebody\Edefspec\defspecx\defspecheader} \def\defspecheader #1#2{\doind {fn}{\code{#1}}% Make entry in function index \begingroup\defname {#1}{Special Form}% \defunargs {#2}\endgroup % \catcode 61=\other % Turn off change made in \defparsebody } % This definition is run if you use @defunx % anywhere other than immediately after a @defun or @defunx. \def\deffnx #1 {\errmessage{@deffnx in invalid context}} \def\defunx #1 {\errmessage{@defunx in invalid context}} \def\defmacx #1 {\errmessage{@defmacx in invalid context}} \def\defspecx #1 {\errmessage{@defspecx in invalid context}} \def\deftypefnx #1 {\errmessage{@deftypefnx in invalid context}} \def\deftypemethodx #1 {\errmessage{@deftypemethodx in invalid context}} \def\deftypeunx #1 {\errmessage{@deftypeunx in invalid context}} % @defmethod, and so on % @defop CATEGORY CLASS OPERATION ARG... 
\def\defop #1 {\def\defoptype{#1}% \defopparsebody\Edefop\defopx\defopheader\defoptype} \def\defopheader #1#2#3{% \dosubind {fn}{\code{#2}}{\putwordon\ #1}% Make entry in function index \begingroup\defname {#2}{\defoptype{} on #1}% \defunargs {#3}\endgroup % } % @deftypemethod CLASS RETURN-TYPE METHOD ARG... % \def\deftypemethod{% \deftypemethparsebody\Edeftypemethod\deftypemethodx\deftypemethodheader} % % #1 is the class name, #2 the data type, #3 the method name, #4 the args. \def\deftypemethodheader#1#2#3#4{% \dosubind{fn}{\code{#3}}{\putwordon\ \code{#1}}% entry in function index \begingroup \defname{\defheaderxcond#2\relax$$$#3}{\putwordMethodon\ \code{#1}}% \deftypefunargs{#4}% \endgroup } % @defmethod == @defop Method % \def\defmethod{\defmethparsebody\Edefmethod\defmethodx\defmethodheader} % % #1 is the class name, #2 the method name, #3 the args. \def\defmethodheader#1#2#3{% \dosubind{fn}{\code{#2}}{\putwordon\ \code{#1}}% entry in function index \begingroup \defname{#2}{\putwordMethodon\ \code{#1}}% \defunargs{#3}% \endgroup } % @defcv {Class Option} foo-class foo-flag \def\defcv #1 {\def\defcvtype{#1}% \defopvarparsebody\Edefcv\defcvx\defcvarheader\defcvtype} \def\defcvarheader #1#2#3{% \dosubind {vr}{\code{#2}}{of #1}% Make entry in var index \begingroup\defname {#2}{\defcvtype{} of #1}% \defvarargs {#3}\endgroup % } % @defivar == @defcv {Instance Variable} \def\defivar{\defvrparsebody\Edefivar\defivarx\defivarheader} \def\defivarheader #1#2#3{% \dosubind {vr}{\code{#2}}{of #1}% Make entry in var index \begingroup\defname {#2}{Instance Variable of #1}% \defvarargs {#3}\endgroup % } % These definitions are run if you use @defmethodx, etc., % anywhere other than immediately after a @defmethod, etc. \def\defopx #1 {\errmessage{@defopx in invalid context}} \def\defmethodx #1 {\errmessage{@defmethodx in invalid context}} \def\defcvx #1 {\errmessage{@defcvx in invalid context}} \def\defivarx #1 {\errmessage{@defivarx in invalid context}} % Now @defvar % First, define the processing that is wanted for arguments of @defvar. % This is actually simple: just print them in roman. % This must expand the args and terminate the paragraph they make up \def\defvarargs #1{\normalparens #1% \interlinepenalty=10000 \endgraf\penalty 10000\vskip -\parskip\penalty 10000} % @defvr Counter foo-count \def\defvr{\defvrparsebody\Edefvr\defvrx\defvrheader} \def\defvrheader #1#2#3{\doind {vr}{\code{#2}}% \begingroup\defname {#2}{#1}\defvarargs{#3}\endgroup} % @defvar == @defvr Variable \def\defvar{\defvarparsebody\Edefvar\defvarx\defvarheader} \def\defvarheader #1#2{\doind {vr}{\code{#1}}% Make entry in var index \begingroup\defname {#1}{Variable}% \defvarargs {#2}\endgroup % } % @defopt == @defvr {User Option} \def\defopt{\defvarparsebody\Edefopt\defoptx\defoptheader} \def\defoptheader #1#2{\doind {vr}{\code{#1}}% Make entry in var index \begingroup\defname {#1}{User Option}% \defvarargs {#2}\endgroup % } % @deftypevar int foobar \def\deftypevar{\defvarparsebody\Edeftypevar\deftypevarx\deftypevarheader} % #1 is the data type. #2 is the name, perhaps followed by text that % is actually part of the data type, which should not be put into the index. 
\def\deftypevarheader #1#2{% \dovarind#2 \relax% Make entry in variables index \begingroup\defname {\defheaderxcond#1\relax$$$#2}{Variable}% \interlinepenalty=10000 \endgraf\penalty 10000\vskip -\parskip\penalty 10000 \endgroup} \def\dovarind#1 #2\relax{\doind{vr}{\code{#1}}} % @deftypevr {Global Flag} int enable \def\deftypevr{\defvrparsebody\Edeftypevr\deftypevrx\deftypevrheader} \def\deftypevrheader #1#2#3{\dovarind#3 \relax% \begingroup\defname {\defheaderxcond#2\relax$$$#3}{#1} \interlinepenalty=10000 \endgraf\penalty 10000\vskip -\parskip\penalty 10000 \endgroup} % This definition is run if you use @defvarx % anywhere other than immediately after a @defvar or @defvarx. \def\defvrx #1 {\errmessage{@defvrx in invalid context}} \def\defvarx #1 {\errmessage{@defvarx in invalid context}} \def\defoptx #1 {\errmessage{@defoptx in invalid context}} \def\deftypevarx #1 {\errmessage{@deftypevarx in invalid context}} \def\deftypevrx #1 {\errmessage{@deftypevrx in invalid context}} % Now define @deftp % Args are printed in bold, a slight difference from @defvar. \def\deftpargs #1{\bf \defvarargs{#1}} % @deftp Class window height width ... \def\deftp{\deftpparsebody\Edeftp\deftpx\deftpheader} \def\deftpheader #1#2#3{\doind {tp}{\code{#2}}% \begingroup\defname {#2}{#1}\deftpargs{#3}\endgroup} % This definition is run if you use @deftpx, etc % anywhere other than immediately after a @deftp, etc. \def\deftpx #1 {\errmessage{@deftpx in invalid context}} \message{cross reference,} \newwrite\auxfile \newif\ifhavexrefs % True if xref values are known. \newif\ifwarnedxrefs % True if we warned once that they aren't known. % @inforef is relatively simple. \def\inforef #1{\inforefzzz #1,,,,**} \def\inforefzzz #1,#2,#3,#4**{\putwordSee{} \putwordInfo{} \putwordfile{} \file{\ignorespaces #3{}}, node \samp{\ignorespaces#1{}}} % @setref{foo} defines a cross-reference point named foo. \def\setref#1{% \dosetq{#1-title}{Ytitle}% \dosetq{#1-pg}{Ypagenumber}% \dosetq{#1-snt}{Ysectionnumberandtype}} \def\unnumbsetref#1{% \dosetq{#1-title}{Ytitle}% \dosetq{#1-pg}{Ypagenumber}% \dosetq{#1-snt}{Ynothing}} \def\appendixsetref#1{% \dosetq{#1-title}{Ytitle}% \dosetq{#1-pg}{Ypagenumber}% \dosetq{#1-snt}{Yappendixletterandtype}} % \xref, \pxref, and \ref generate cross-references to specified points. % For \xrefX, #1 is the node name, #2 the name of the Info % cross-reference, #3 the printed node name, #4 the name of the Info % file, #5 the name of the printed manual. All but the node name can be % omitted. % \def\pxref#1{\putwordsee{} \xrefX[#1,,,,,,,]} \def\xref#1{\putwordSee{} \xrefX[#1,,,,,,,]} \def\ref#1{\xrefX[#1,,,,,,,]} \def\xrefX[#1,#2,#3,#4,#5,#6]{\begingroup \def\printedmanual{\ignorespaces #5}% \def\printednodename{\ignorespaces #3}% \setbox1=\hbox{\printedmanual}% \setbox0=\hbox{\printednodename}% \ifdim \wd0 = 0pt % No printed node name was explicitly given. \expandafter\ifx\csname SETxref-automatic-section-title\endcsname\relax % Use the node name inside the square brackets. \def\printednodename{\ignorespaces #1}% \else % Use the actual chapter/section title appear inside % the square brackets. Use the real section title if we have it. \ifdim \wd1>0pt% % It is in another manual, so we don't have it. \def\printednodename{\ignorespaces #1}% \else \ifhavexrefs % We know the real title if we have the xref values. \def\printednodename{\refx{#1-title}{}}% \else % Otherwise just copy the Info node name. 
\def\printednodename{\ignorespaces #1}% \fi% \fi \fi \fi % % If we use \unhbox0 and \unhbox1 to print the node names, TeX does not % insert empty discretionaries after hyphens, which means that it will % not find a line break at a hyphen in a node names. Since some manuals % are best written with fairly long node names, containing hyphens, this % is a loss. Therefore, we give the text of the node name again, so it % is as if TeX is seeing it for the first time. \ifdim \wd1 > 0pt \putwordsection{} ``\printednodename'' in \cite{\printedmanual}% \else % _ (for example) has to be the character _ for the purposes of the % control sequence corresponding to the node, but it has to expand % into the usual \leavevmode...\vrule stuff for purposes of % printing. So we \turnoffactive for the \refx-snt, back on for the % printing, back off for the \refx-pg. {\normalturnoffactive \refx{#1-snt}{}}% \space [\printednodename],\space \turnoffactive \putwordpage\tie\refx{#1-pg}{}% \fi \endgroup} % \dosetq is the interface for calls from other macros % Use \normalturnoffactive so that punctuation chars such as underscore % and backslash work in node names. (\turnoffactive doesn't do \.) \def\dosetq#1#2{% {\let\folio=0 \normalturnoffactive \edef\next{\write\auxfile{\internalsetq{#1}{#2}}}% \next }% } % \internalsetq {foo}{page} expands into % CHARACTERS 'xrdef {foo}{...expansion of \Ypage...} % When the aux file is read, ' is the escape character \def\internalsetq #1#2{'xrdef {#1}{\csname #2\endcsname}} % Things to be expanded by \internalsetq \def\Ypagenumber{\folio} \def\Ytitle{\thissection} \def\Ynothing{} \def\Ysectionnumberandtype{% \ifnum\secno=0 \putwordChapter\xreftie\the\chapno % \else \ifnum \subsecno=0 \putwordSection\xreftie\the\chapno.\the\secno % \else \ifnum \subsubsecno=0 % \putwordSection\xreftie\the\chapno.\the\secno.\the\subsecno % \else % \putwordSection\xreftie\the\chapno.\the\secno.\the\subsecno.\the\subsubsecno % \fi \fi \fi } \def\Yappendixletterandtype{% \ifnum\secno=0 \putwordAppendix\xreftie'char\the\appendixno{}% \else \ifnum \subsecno=0 \putwordSection\xreftie'char\the\appendixno.\the\secno % \else \ifnum \subsubsecno=0 % \putwordSection\xreftie'char\the\appendixno.\the\secno.\the\subsecno % \else % \putwordSection\xreftie'char\the\appendixno.\the\secno.\the\subsecno.\the\subsubsecno % \fi \fi \fi } \gdef\xreftie{'tie} % Use TeX 3.0's \inputlineno to get the line number, for better error % messages, but if we're using an old version of TeX, don't do anything. % \ifx\inputlineno\thisisundefined \let\linenumber = \empty % Non-3.0. \else \def\linenumber{\the\inputlineno:\space} \fi % Define \refx{NAME}{SUFFIX} to reference a cross-reference string named NAME. % If its value is nonempty, SUFFIX is output afterward. \def\refx#1#2{% \expandafter\ifx\csname X#1\endcsname\relax % If not defined, say something at least. \angleleft un\-de\-fined\angleright \ifhavexrefs \message{\linenumber Undefined cross reference `#1'.}% \else \ifwarnedxrefs\else \global\warnedxrefstrue \message{Cross reference values unknown; you must run TeX again.}% \fi \fi \else % It's defined, so just use it. \csname X#1\endcsname \fi #2% Output the suffix in any case. } % This is the macro invoked by entries in the aux file. % \def\xrdef#1{\begingroup % Reenable \ as an escape while reading the second argument. \catcode`\\ = 0 \afterassignment\endgroup \expandafter\gdef\csname X#1\endcsname } % Read the last existing aux file, if any. No error if none exists. 
\def\readauxfile{\begingroup \catcode`\^^@=\other \catcode`\^^A=\other \catcode`\^^B=\other \catcode`\^^C=\other \catcode`\^^D=\other \catcode`\^^E=\other \catcode`\^^F=\other \catcode`\^^G=\other \catcode`\^^H=\other \catcode`\^^K=\other \catcode`\^^L=\other \catcode`\^^N=\other \catcode`\^^P=\other \catcode`\^^Q=\other \catcode`\^^R=\other \catcode`\^^S=\other \catcode`\^^T=\other \catcode`\^^U=\other \catcode`\^^V=\other \catcode`\^^W=\other \catcode`\^^X=\other \catcode`\^^Z=\other \catcode`\^^[=\other \catcode`\^^\=\other \catcode`\^^]=\other \catcode`\^^^=\other \catcode`\^^_=\other \catcode`\@=\other \catcode`\^=\other % It was suggested to define this as 7, which would allow ^^e4 etc. % in xref tags, i.e., node names. But since ^^e4 notation isn't % supported in the main text, it doesn't seem desirable. Furthermore, % that is not enough: for node names that actually contain a ^ % character, we would end up writing a line like this: 'xrdef {'hat % b-title}{'hat b} and \xrdef does a \csname...\endcsname on the first % argument, and \hat is not an expandable control sequence. It could % all be worked out, but why? Either we support ^^ or we don't. % % The other change necessary for this was to define \auxhat: % \def\auxhat{\def^{'hat }}% extra space so ok if followed by letter % and then to call \auxhat in \setq. % \catcode`\~=\other \catcode`\[=\other \catcode`\]=\other \catcode`\"=\other \catcode`\_=\other \catcode`\|=\other \catcode`\<=\other \catcode`\>=\other \catcode`\$=\other \catcode`\#=\other \catcode`\&=\other \catcode`+=\other % avoid \+ for paranoia even though we've turned it off % Make the characters 128-255 be printing characters {% \count 1=128 \def\loop{% \catcode\count 1=\other \advance\count 1 by 1 \ifnum \count 1<256 \loop \fi }% }% % The aux file uses ' as the escape (for now). % Turn off \ as an escape so we do not lose on % entries which were dumped with control sequences in their names. % For example, 'xrdef {$\leq $-fun}{page ...} made by @defun ^^ % Reference to such entries still does not work the way one would wish, % but at least they do not bomb out when the aux file is read in. \catcode`\{=1 \catcode`\}=2 \catcode`\%=\other \catcode`\'=0 \catcode`\\=\other % \openin 1 \jobname.aux \ifeof 1 \else \closein 1 \input \jobname.aux \global\havexrefstrue \global\warnedobstrue \fi % Open the new aux file. TeX will close it automatically at exit. \openout\auxfile=\jobname.aux \endgroup} % Footnotes. \newcount \footnoteno % The trailing space in the following definition for supereject is % vital for proper filling; pages come out unaligned when you do a % pagealignmacro call if that space before the closing brace is % removed. (Generally, numeric constants should always be followed by a % space to prevent strange expansion errors.) \def\supereject{\par\penalty -20000\footnoteno =0 } % @footnotestyle is meaningful for info output only. \let\footnotestyle=\comment \let\ptexfootnote=\footnote {\catcode `\@=11 % % Auto-number footnotes. Otherwise like plain. \gdef\footnote{% \global\advance\footnoteno by \@ne \edef\thisfootno{$^{\the\footnoteno}$}% % % In case the footnote comes at the end of a sentence, preserve the % extra spacing after we do the footnote number. \let\@sf\empty \ifhmode\edef\@sf{\spacefactor\the\spacefactor}\/\fi % % Remove inadvertent blank space before typesetting the footnote number. \unskip \thisfootno\@sf \footnotezzz }% % Don't bother with the trickery in plain.tex to not require the % footnote text as a parameter. 
Our footnotes don't need to be so general. % % Oh yes, they do; otherwise, @ifset and anything else that uses % \parseargline fail inside footnotes because the tokens are fixed when % the footnote is read. --karl, 16nov96. % \long\gdef\footnotezzz{\insert\footins\bgroup % We want to typeset this text as a normal paragraph, even if the % footnote reference occurs in (for example) a display environment. % So reset some parameters. \interlinepenalty\interfootnotelinepenalty \splittopskip\ht\strutbox % top baseline for broken footnotes \splitmaxdepth\dp\strutbox \floatingpenalty\@MM \leftskip\z@skip \rightskip\z@skip \spaceskip\z@skip \xspaceskip\z@skip \parindent\defaultparindent % % Hang the footnote text off the number. \hang \textindent{\thisfootno}% % % Don't crash into the line above the footnote text. Since this % expands into a box, it must come within the paragraph, lest it % provide a place where TeX can split the footnote. \footstrut \futurelet\next\fo@t } \def\fo@t{\ifcat\bgroup\noexpand\next \let\next\f@@t \else\let\next\f@t\fi \next} \def\f@@t{\bgroup\aftergroup\@foot\let\next} \def\f@t#1{#1\@foot} \def\@foot{\strut\egroup} }%end \catcode `\@=11 % Set the baselineskip to #1, and the lineskip and strut size % correspondingly. There is no deep meaning behind these magic numbers % used as factors; they just match (closely enough) what Knuth defined. % \def\lineskipfactor{.08333} \def\strutheightpercent{.70833} \def\strutdepthpercent {.29167} % \def\setleading#1{% \normalbaselineskip = #1\relax \normallineskip = \lineskipfactor\normalbaselineskip \normalbaselines \setbox\strutbox =\hbox{% \vrule width0pt height\strutheightpercent\baselineskip depth \strutdepthpercent \baselineskip }% } % @| inserts a changebar to the left of the current line. It should % surround any changed text. This approach does *not* work if the % change spans more than two lines of output. To handle that, we would % have adopt a much more difficult approach (putting marks into the main % vertical list for the beginning and end of each change). % \def\|{% % \vadjust can only be used in horizontal mode. \leavevmode % % Append this vertical mode material after the current line in the output. \vadjust{% % We want to insert a rule with the height and depth of the current % leading; that is exactly what \strutbox is supposed to record. \vskip-\baselineskip % % \vadjust-items are inserted at the left edge of the type. So % the \llap here moves out into the left-hand margin. \llap{% % % For a thicker or thinner bar, change the `1pt'. \vrule height\baselineskip width1pt % % This is the space between the bar and the text. \hskip 12pt }% }% } % For a final copy, take out the rectangles % that mark overfull boxes (in case you have decided % that the text looks ok even though it passes the margin). % \def\finalout{\overfullrule=0pt} % @image. We use the macros from epsf.tex to support this. % If epsf.tex is not installed and @image is used, we complain. % % Check for and read epsf.tex up front. If we read it only at @image % time, we might be inside a group, and then its definitions would get % undone and the next image would fail. \openin 1 = epsf.tex \ifeof 1 \else \closein 1 \def\epsfannounce{\toks0 = }% do not bother showing banner \input epsf.tex \fi % \newif\ifwarnednoepsf \newhelp\noepsfhelp{epsf.tex must be installed for images to work. It is also included in the Texinfo distribution, or you can get it from ftp://ftp.tug.org/tex/epsf.tex.} % % Only complain once about lack of epsf.tex. 
\def\image#1{% \ifx\epsfbox\undefined \ifwarnednoepsf \else \errhelp = \noepsfhelp \errmessage{epsf.tex not found, images will be ignored}% \global\warnednoepsftrue \fi \else \imagexxx #1,,,\finish \fi } % % Arguments to @image: % #1 is (mandatory) image filename; we tack on .eps extension. % #2 is (optional) width, #3 is (optional) height. % #4 is just the usual extra ignored arg for parsing this stuff. \def\imagexxx#1,#2,#3,#4\finish{% % \epsfbox itself resets \epsf?size at each figure. \setbox0 = \hbox{\ignorespaces #2}\ifdim\wd0 > 0pt \epsfxsize=#2\relax \fi \setbox0 = \hbox{\ignorespaces #3}\ifdim\wd0 > 0pt \epsfysize=#3\relax \fi \epsfbox{#1.eps}% } % End of control word definitions. \message{and turning on texinfo input format.} \def\openindices{% \newindex{cp}% \newcodeindex{fn}% \newcodeindex{vr}% \newcodeindex{tp}% \newcodeindex{ky}% \newcodeindex{pg}% } % Set some numeric style parameters, for 8.5 x 11 format. \hsize = 6in \hoffset = .25in \newdimen\defaultparindent \defaultparindent = 15pt \parindent = \defaultparindent \parskip 3pt plus 2pt minus 1pt \setleading{13.2pt} \advance\topskip by 1.2cm \chapheadingskip = 15pt plus 4pt minus 2pt \secheadingskip = 12pt plus 3pt minus 2pt \subsecheadingskip = 9pt plus 2pt minus 2pt % Prevent underfull vbox error messages. \vbadness=10000 % Following George Bush, just get rid of widows and orphans. \widowpenalty=10000 \clubpenalty=10000 % Use TeX 3.0's \emergencystretch to help line breaking, but if we're % using an old version of TeX, don't do anything. We want the amount of % stretch added to depend on the line length, hence the dependence on % \hsize. This makes it come to about 9pt for the 8.5x11 format. % \ifx\emergencystretch\thisisundefined % Allow us to assign to \emergencystretch anyway. \def\emergencystretch{\dimen0}% \else \emergencystretch = \hsize \divide\emergencystretch by 45 \fi % Use @smallbook to reset parameters for 7x9.5 format (or else 7x9.25) \def\smallbook{ \global\chapheadingskip = 15pt plus 4pt minus 2pt \global\secheadingskip = 12pt plus 3pt minus 2pt \global\subsecheadingskip = 9pt plus 2pt minus 2pt % \global\lispnarrowing = 0.3in \setleading{12pt} \advance\topskip by -1cm \global\parskip 2pt plus 1pt \global\hsize = 5in \global\vsize=7.5in \global\tolerance=700 \global\hfuzz=1pt \global\contentsrightmargin=0pt \global\deftypemargin=0pt \global\defbodyindent=.5cm % \global\pagewidth=\hsize \global\pageheight=\vsize % \global\let\smalllisp=\smalllispx \global\let\smallexample=\smalllispx \global\def\Esmallexample{\Esmalllisp} } % Use @afourpaper to print on European A4 paper. \def\afourpaper{ \global\tolerance=700 \global\hfuzz=1pt \setleading{12pt} \global\parskip 15pt plus 1pt \global\vsize= 53\baselineskip \advance\vsize by \topskip %\global\hsize= 5.85in % A4 wide 10pt \global\hsize= 6.5in \global\outerhsize=\hsize \global\advance\outerhsize by 0.5in \global\outervsize=\vsize \global\advance\outervsize by 0.6in \global\pagewidth=\hsize \global\pageheight=\vsize } \bindingoffset=0pt \normaloffset=\hoffset \pagewidth=\hsize \pageheight=\vsize % Allow control of the text dimensions. Parameters in order: textheight; % textwidth; voffset; hoffset; binding offset; topskip. % All require a dimension; % header is additional; added length extends the bottom of the page. 
\def\changepagesizes#1#2#3#4#5#6{ \global\vsize= #1 \global\topskip= #6 \advance\vsize by \topskip \global\voffset= #3 \global\hsize= #2 \global\outerhsize=\hsize \global\advance\outerhsize by 0.5in \global\outervsize=\vsize \global\advance\outervsize by 0.6in \global\pagewidth=\hsize \global\pageheight=\vsize \global\normaloffset= #4 \global\bindingoffset= #5} % A specific text layout, 24x15cm overall, intended for A4 paper. Top margin % 29mm, hence bottom margin 28mm, nominal side margin 3cm. \def\afourlatex {\global\tolerance=700 \global\hfuzz=1pt \setleading{12pt} \global\parskip 15pt plus 1pt \advance\baselineskip by 1.6pt \changepagesizes{237mm}{150mm}{3.6mm}{3.6mm}{3mm}{7mm} } % Use @afourwide to print on European A4 paper in wide format. \def\afourwide{\afourpaper \changepagesizes{9.5in}{6.5in}{\hoffset}{\normaloffset}{\bindingoffset}{7mm}} % Define macros to output various characters with catcode for normal text. \catcode`\"=\other \catcode`\~=\other \catcode`\^=\other \catcode`\_=\other \catcode`\|=\other \catcode`\<=\other \catcode`\>=\other \catcode`\+=\other \def\normaldoublequote{"} \def\normaltilde{~} \def\normalcaret{^} \def\normalunderscore{_} \def\normalverticalbar{|} \def\normalless{<} \def\normalgreater{>} \def\normalplus{+} % This macro is used to make a character print one way in ttfont % where it can probably just be output, and another way in other fonts, % where something hairier probably needs to be done. % % #1 is what to print if we are indeed using \tt; #2 is what to print % otherwise. Since all the Computer Modern typewriter fonts have zero % interword stretch (and shrink), and it is reasonable to expect all % typewriter fonts to have this, we can check that font parameter. % \def\ifusingtt#1#2{\ifdim \fontdimen3\the\font=0pt #1\else #2\fi} % Turn off all special characters except @ % (and those which the user can use as if they were ordinary). % Most of these we simply print from the \tt font, but for some, we can % use math or other variants that look better in normal text. \catcode`\"=\active \def\activedoublequote{{\tt\char34}} \let"=\activedoublequote \catcode`\~=\active \def~{{\tt\char126}} \chardef\hat=`\^ \catcode`\^=\active \def^{{\tt \hat}} \catcode`\_=\active \def_{\ifusingtt\normalunderscore\_} % Subroutine for the previous macro. \def\_{\leavevmode \kern.06em \vbox{\hrule width.3em height.1ex}} \catcode`\|=\active \def|{{\tt\char124}} \chardef \less=`\< \catcode`\<=\active \def<{{\tt \less}} \chardef \gtr=`\> \catcode`\>=\active \def>{{\tt \gtr}} \catcode`\+=\active \def+{{\tt \char 43}} %\catcode 27=\active %\def^^[{$\diamondsuit$} % Set up an active definition for =, but don't enable it most of the time. {\catcode`\==\active \global\def={{\tt \char 61}}} \catcode`+=\active \catcode`\_=\active % If a .fmt file is being used, characters that might appear in a file % name cannot be active until we have parsed the command line. % So turn them off again, and have \everyjob (or @setfilename) turn them on. % \otherifyactive is called near the end of this file. \def\otherifyactive{\catcode`+=\other \catcode`\_=\other} \catcode`\@=0 % \rawbackslashxx output one backslash character in current font \global\chardef\rawbackslashxx=`\\ %{\catcode`\\=\other %@gdef@rawbackslashxx{\}} % \rawbackslash redefines \ as input to do \rawbackslashxx. {\catcode`\\=\active @gdef@rawbackslash{@let\=@rawbackslashxx }} % \normalbackslash outputs one backslash in fixed width font. \def\normalbackslash{{\tt\rawbackslashxx}} % Say @foo, not \foo, in error messages. 
\escapechar=`\@ % \catcode 17=0 % Define control-q \catcode`\\=\active % Used sometimes to turn off (effectively) the active characters % even after parsing them. @def@turnoffactive{@let"=@normaldoublequote @let\=@realbackslash @let~=@normaltilde @let^=@normalcaret @let_=@normalunderscore @let|=@normalverticalbar @let<=@normalless @let>=@normalgreater @let+=@normalplus} @def@normalturnoffactive{@let"=@normaldoublequote @let\=@normalbackslash @let~=@normaltilde @let^=@normalcaret @let_=@normalunderscore @let|=@normalverticalbar @let<=@normalless @let>=@normalgreater @let+=@normalplus} % Make _ and + \other characters, temporarily. % This is canceled by @fixbackslash. @otherifyactive % If a .fmt file is being used, we don't want the `\input texinfo' to show up. % That is what \eatinput is for; after that, the `\' should revert to printing % a backslash. % @gdef@eatinput input texinfo{@fixbackslash} @global@let\ = @eatinput % On the other hand, perhaps the file did not have a `\input texinfo'. Then % the first `\{ in the file would cause an error. This macro tries to fix % that, assuming it is called before the first `\' could plausibly occur. % Also turn back on active characters that might appear in the input % file name, in case not using a pre-dumped format. % @gdef@fixbackslash{@ifx\@eatinput @let\ = @normalbackslash @fi @catcode`+=@active @catcode`@_=@active} % These look ok in all fonts, so just make them not special. The @rm below % makes sure that the current font starts out as the newly loaded cmr10 @catcode`@$=@other @catcode`@%=@other @catcode`@&=@other @catcode`@#=@other @textfonts @rm @c Local variables: @c page-delimiter: "^\\\\message" @c End:
This is Info file wget.info, produced by Makeinfo version 1.67 from the input file ./wget.texi. INFO-DIR-SECTION Net Utilities INFO-DIR-SECTION World Wide Web START-INFO-DIR-ENTRY * Wget: (wget). The non-interactive network downloader. END-INFO-DIR-ENTRY This file documents the GNU Wget utility for downloading network data. Copyright (C) 1996, 1997, 1998 Free Software Foundation, Inc. Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided also that the sections entitled "Copying" and "GNU General Public License" are included exactly as in the original, and provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. 
File: wget.info, Node: Top, Next: Overview, Prev: (dir), Up: (dir) Wget 1.5.3 ********** This manual documents version 1.5.3 of GNU Wget, the freely available utility for network download. Copyright (C) 1996, 1997, 1998 Free Software Foundation, Inc. * Menu: * Overview:: Features of Wget. * Invoking:: Wget command-line arguments. * Recursive Retrieval:: Description of recursive retrieval. * Following Links:: The available methods of chasing links. * Time-Stamping:: Mirroring according to time-stamps. * Startup File:: Wget's initialization file. * Examples:: Examples of usage. * Various:: The stuff that doesn't fit anywhere else. * Appendices:: Some useful references. * Copying:: You may give out copies of Wget. * Concept Index:: Topics covered by this manual. 
File: wget.info, Node: Overview, Next: Invoking, Prev: Top, Up: Top Overview ******** GNU Wget is a freely available network utility to retrieve files from the World Wide Web, using HTTP (Hyper Text Transfer Protocol) and FTP (File Transfer Protocol), the two most widely used Internet protocols. It has many useful features to make downloading easier, some of them being: * Wget is non-interactive, meaning that it can work in the background, while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting Wget finish the work. By contrast, most Web browsers require the user's constant presence, which can be a great hindrance when transferring a lot of data. * Wget is capable of descending recursively through the structure of HTML documents and FTP directory trees, making a local copy of the directory hierarchy similar to the one on the remote server. This feature can be used to mirror archives and home pages, or traverse the web in search of data, like a WWW robot (*Note Robots::). In that spirit, Wget understands the `norobots' convention. * File name wildcard matching and recursive mirroring of directories are available when retrieving via FTP. Wget can read the time-stamp information given by both HTTP and FTP servers, and store it locally. Thus Wget can see if the remote file has changed since the last retrieval, and automatically retrieve the new version if it has. This makes Wget suitable for mirroring of FTP sites, as well as home pages. * Wget works exceedingly well on slow or unstable connections, retrying the document until it is fully retrieved, or until a user-specified retry count is surpassed. It will try to resume the download from the point of interruption, using `REST' with FTP and `Range' with HTTP servers that support them. * By default, Wget supports proxy servers, which can lighten the network load, speed up retrieval and provide access behind firewalls. However, if you are behind a firewall that requires that you use a SOCKS-style gateway, you can get the SOCKS library and build Wget with support for SOCKS. Wget also supports passive FTP downloading as an option. * Built-in features offer mechanisms to tune which links you wish to follow (*Note Following Links::). * The retrieval is conveniently traced by printing dots, each dot representing a fixed amount of data received (1KB by default). These representations can be customized to your preferences. * Most of the features are fully configurable, either through command line options, or via the initialization file `.wgetrc' (*Note Startup File::). Wget allows you to define "global" startup files (`/usr/local/etc/wgetrc' by default) for site settings. * Finally, GNU Wget is free software. This means that everyone may use it, redistribute it and/or modify it under the terms of the GNU General Public License, as published by the Free Software Foundation (*Note Copying::).  File: wget.info, Node: Invoking, Next: Recursive Retrieval, Prev: Overview, Up: Top Invoking ******** By default, Wget is very simple to invoke. The basic syntax is: wget [OPTION]... [URL]... Wget will simply download all the URLs specified on the command line. URL is a "Uniform Resource Locator", as defined below. However, you may wish to change some of the default parameters of Wget. You can do it in two ways: permanently, by adding the appropriate command to `.wgetrc' (*Note Startup File::), or by specifying it on the command line.
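For instance, either of the following has the same effect (an illustrative sketch: the host is the sample server used elsewhere in this manual, and `tries' is assumed to be the `.wgetrc' command corresponding to `-t'):

     wget --tries=10 http://fly.cc.fer.hr/

     # or, once and for all, in .wgetrc:
     tries = 10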
* Menu: * URL Format:: * Option Syntax:: * Basic Startup Options:: * Logging and Input File Options:: * Download Options:: * Directory Options:: * HTTP Options:: * FTP Options:: * Recursive Retrieval Options:: * Recursive Accept/Reject Options::  File: wget.info, Node: URL Format, Next: Option Syntax, Prev: Invoking, Up: Invoking URL Format ========== "URL" is an acronym for Uniform Resource Locator. A uniform resource locator is a compact string representation for a resource available via the Internet. Wget recognizes the URL syntax as per RFC1738. This is the most widely used form (square brackets denote optional parts): http://host[:port]/directory/file ftp://host[:port]/directory/file You can also encode your username and password within a URL: ftp://user:password@host/path http://user:password@host/path Either USER or PASSWORD, or both, may be left out. If you leave out either the HTTP username or password, no authentication will be sent. If you leave out the FTP username, `anonymous' will be used. If you leave out the FTP password, your email address will be supplied as a default password.(1) You can encode unsafe characters in a URL as `%xy', `xy' being the hexadecimal representation of the character's ASCII value. Some common unsafe characters include `%' (quoted as `%25'), `:' (quoted as `%3A'), and `@' (quoted as `%40'). Refer to RFC1738 for a comprehensive list of unsafe characters. Wget also supports the `type' feature for FTP URLs. By default, FTP documents are retrieved in the binary mode (type `i'), which means that they are downloaded unchanged. Another useful mode is the `a' ("ASCII") mode, which converts the line delimiters between the different operating systems, and is thus useful for text files. Here is an example: ftp://host/directory/file;type=a Two alternative variants of URL specification are also supported, for historical (hysterical?) reasons and because of their widespread use. FTP-only syntax (supported by `NcFTP'): host:/dir/file HTTP-only syntax (introduced by `Netscape'): host[:port]/dir/file These two alternative forms are deprecated, and may cease being supported in the future. If you do not understand the difference between these notations, or do not know which one to use, just use the plain ordinary format you use with your favorite browser, like `Lynx' or `Netscape'. ---------- Footnotes ---------- (1) If you have a `.netrc' file in your home directory, the password will also be searched for there.  File: wget.info, Node: Option Syntax, Next: Basic Startup Options, Prev: URL Format, Up: Invoking Option Syntax ============= Since Wget uses GNU getopt to process its arguments, every option has a short form and a long form. Long options are more convenient to remember, but take time to type. You may freely mix different option styles, or specify options after the command-line arguments. Thus you may write: wget -r --tries=10 http://fly.cc.fer.hr/ -o log The space between the option accepting an argument and the argument may be omitted. Instead of `-o log' you can write `-olog'. You may put several options that do not require arguments together, like: wget -drc URL This is completely equivalent to: wget -d -r -c URL Since the options can be specified after the arguments, you may terminate them with `--'. So the following will try to download URL `-x', reporting failure to `log': wget -o log -- -x The options that accept comma-separated lists all respect the convention that specifying an empty list clears its value.
This can be useful to clear the `.wgetrc' settings. For instance, if your `.wgetrc' sets `exclude_directories' to `/cgi-bin', the following example will first reset it, and then set it to exclude `/~nobody' and `/~somebody'. You can also clear the lists in `.wgetrc' (*Note Wgetrc Syntax::). wget -X '' -X /~nobody,/~somebody  File: wget.info, Node: Basic Startup Options, Next: Logging and Input File Options, Prev: Option Syntax, Up: Invoking Basic Startup Options ===================== `-V' `--version' Display the version of Wget. `-h' `--help' Print a help message describing all of Wget's command-line options. `-b' `--background' Go to background immediately after startup. If no output file is specified via the `-o' option, output is redirected to `wget-log'. `-e COMMAND' `--execute COMMAND' Execute COMMAND as if it were a part of `.wgetrc' (*Note Startup File::). A command thus invoked will be executed *after* the commands in `.wgetrc', thus taking precedence over them.  File: wget.info, Node: Logging and Input File Options, Next: Download Options, Prev: Basic Startup Options, Up: Invoking Logging and Input File Options ============================== `-o LOGFILE' `--output-file=LOGFILE' Log all messages to LOGFILE. The messages are normally reported to standard error. `-a LOGFILE' `--append-output=LOGFILE' Append to LOGFILE. This is the same as `-o', only it appends to LOGFILE instead of overwriting the old log file. If LOGFILE does not exist, a new file is created. `-d' `--debug' Turn on debug output, meaning various information important to the developers of Wget if it does not work properly. Your system administrator may have chosen to compile Wget without debug support, in which case `-d' will not work. Please note that compiling with debug support is always safe--Wget compiled with debug support will *not* print any debug info unless requested with `-d'. *Note Reporting Bugs:: for more information on how to use `-d' for sending bug reports. `-q' `--quiet' Turn off Wget's output. `-v' `--verbose' Turn on verbose output, with all the available data. The default output is verbose. `-nv' `--non-verbose' Non-verbose output--turn off verbose without being completely quiet (use `-q' for that), which means that error messages and basic information still get printed. `-i FILE' `--input-file=FILE' Read URLs from FILE, in which case no URLs need to be on the command line. If there are URLs both on the command line and in an input file, those on the command line will be the first ones to be retrieved. The FILE need not be an HTML document (but no harm if it is)--it is enough if the URLs are just listed sequentially. However, if you specify `--force-html', the document will be regarded as `html'. In that case you may have problems with relative links, which you can solve either by adding `<base href="URL">' to the documents or by specifying `--base=URL' on the command line. `-F' `--force-html' When input is read from a file, force it to be treated as an HTML file. This enables you to retrieve relative links from existing HTML files on your local disk, by adding `<base href="URL">' to HTML, or using the `--base' command-line option.  File: wget.info, Node: Download Options, Next: Directory Options, Prev: Logging and Input File Options, Up: Invoking Download Options ================ `-t NUMBER' `--tries=NUMBER' Set number of retries to NUMBER. Specify 0 or `inf' for infinite retrying.
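For example, to give a flaky link more attempts than the default (a sketch; the URL is a sample):

     wget --tries=45 http://fly.cc.fer.hr/jpg/flyweb.jpg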
`-O FILE' `--output-document=FILE' The documents will not be written to the appropriate files, but all will be concatenated together and written to FILE. If FILE already exists, it will be overwritten. If the FILE is `-', the documents will be written to standard output. Including this option automatically sets the number of tries to 1. `-nc' `--no-clobber' Do not clobber existing files when saving to a directory hierarchy within a recursive retrieval of several files. This option is *extremely* useful when you wish to continue where you left off with retrieval of many files. If the files have the `.html' or (yuck) `.htm' suffix, they will be loaded from the local disk, and parsed as if they had been retrieved from the Web. `-c' `--continue' Continue getting an existing file. This is useful when you want to finish up the download started by another program, or a previous instance of Wget. Thus you can write: wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z If there is a file named `ls-lR.Z' in the current directory, Wget will assume that it is the first portion of the remote file, and will ask the server to continue the retrieval from an offset equal to the length of the local file. Note that you need not specify this option if all you want is Wget to continue retrieving where it left off when the connection is lost--Wget does this by default. You need this option only when you want to continue retrieval of a file already halfway retrieved, saved by another FTP client, or left by Wget being killed. Without `-c', the previous example would just begin to download the remote file to `ls-lR.Z.1'. The `-c' option is also applicable for HTTP servers that support the `Range' header. `--dot-style=STYLE' Set the retrieval style to STYLE. Wget traces the retrieval of each document by printing dots on the screen, each dot representing a fixed amount of retrieved data. Dots are grouped in "clusters", to make counting easier. This option allows you to choose one of the pre-defined styles, determining the number of bytes represented by a dot, the number of dots in a cluster, and the number of dots on the line. With the `default' style each dot represents 1K, there are ten dots in a cluster and 50 dots in a line. The `binary' style has a more "computer"-like orientation--8K dots, 16-dot clusters and 48 dots per line (which makes for 384K per line). The `mega' style is suitable for downloading very large files--each dot represents 64K retrieved, there are eight dots in a cluster, and 48 dots on each line (so each line contains 3M). The `micro' style is exactly the reverse; it is suitable for downloading small files, with 128-byte dots, 8 dots per cluster, and 48 dots (6K) per line. `-N' `--timestamping' Turn on time-stamping. *Note Time-Stamping:: for details. `-S' `--server-response' Print the headers sent by HTTP servers and responses sent by FTP servers. `--spider' When invoked with this option, Wget will behave as a Web "spider", which means that it will not download the pages, just check that they are there. You can use it to check your bookmarks, e.g. with: wget --spider --force-html -i bookmarks.html This feature needs much more work for Wget to get close to the functionality of real WWW spiders. `-T SECONDS' `--timeout=SECONDS' Set the read timeout to SECONDS seconds. Whenever a network read is issued, the file descriptor is checked for a timeout, which could otherwise leave a pending connection (uninterrupted read). The default timeout is 900 seconds (fifteen minutes).
Setting the timeout to 0 disables checking for timeouts. Please do not lower the default timeout value with this option unless you know what you are doing. `-w SECONDS' `--wait=SECONDS' Wait the specified number of seconds between the retrievals. Use of this option is recommended, as it lightens the server load by making the requests less frequent. Instead of in seconds, the time can be specified in minutes using the `m' suffix, in hours using the `h' suffix, or in days using the `d' suffix. Specifying a large value for this option is useful if the network or the destination host is down, so that Wget can wait long enough to reasonably expect the network error to be fixed before the retry. `-Y on/off' `--proxy=on/off' Turn proxy support on or off. The proxy is on by default if the appropriate environment variable is defined. `-Q QUOTA' `--quota=QUOTA' Specify download quota for automatic retrievals. The value can be specified in bytes (default), kilobytes (with `k' suffix), or megabytes (with `m' suffix). Note that quota will never affect downloading a single file. So if you specify `wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz', all of the `ls-lR.gz' will be downloaded. The same goes even when several URLs are specified on the command line. However, quota is respected when retrieving either recursively, or from an input file. Thus you may safely type `wget -Q2m -i sites'--download will be aborted when the quota is exceeded. Setting the quota to 0 or to `inf' removes the download quota limit.  File: wget.info, Node: Directory Options, Next: HTTP Options, Prev: Download Options, Up: Invoking Directory Options ================= `-nd' `--no-directories' Do not create a hierarchy of directories when retrieving recursively. With this option turned on, all files will get saved to the current directory, without clobbering (if a name shows up more than once, the filenames will get extensions `.n'). `-x' `--force-directories' The opposite of `-nd'--create a hierarchy of directories, even if one would not have been created otherwise. E.g. `wget -x http://fly.cc.fer.hr/robots.txt' will save the downloaded file to `fly.cc.fer.hr/robots.txt'. `-nH' `--no-host-directories' Disable generation of host-prefixed directories. By default, invoking Wget with `-r http://fly.cc.fer.hr/' will create a structure of directories beginning with `fly.cc.fer.hr/'. This option disables such behavior. `--cut-dirs=NUMBER' Ignore NUMBER directory components. This is useful for getting fine-grained control over the directory where recursive retrieval will be saved. Take, for example, the directory at `ftp://ftp.xemacs.org/pub/xemacs/'. If you retrieve it with `-r', it will be saved locally under `ftp.xemacs.org/pub/xemacs/'. While the `-nH' option can remove the `ftp.xemacs.org/' part, you are still stuck with `pub/xemacs'. This is where `--cut-dirs' comes in handy; it makes Wget not "see" NUMBER remote directory components. Here are several examples of how the `--cut-dirs' option works. No options -> ftp.xemacs.org/pub/xemacs/ -nH -> pub/xemacs/ -nH --cut-dirs=1 -> xemacs/ -nH --cut-dirs=2 -> . --cut-dirs=1 -> ftp.xemacs.org/xemacs/ ... If you just want to get rid of the directory structure, this option is similar to a combination of `-nd' and `-P'. However, unlike `-nd', `--cut-dirs' does not lose subdirectories--for instance, with `-nH --cut-dirs=1', a `beta/' subdirectory will be placed in `xemacs/beta', as one would expect. `-P PREFIX' `--directory-prefix=PREFIX' Set directory prefix to PREFIX. The "directory prefix" is the directory where all other files and subdirectories will be saved, i.e. the top of the retrieval tree. The default is `.' (the current directory).
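Combining these directory options (a sketch building on the `ftp.xemacs.org' example above; the `download' prefix directory is hypothetical):

     wget -r -nH --cut-dirs=2 -P download ftp://ftp.xemacs.org/pub/xemacs/

would save the retrieved tree under `download/' instead of `ftp.xemacs.org/pub/xemacs/'.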
The "directory prefix" is the directory where all other files and subdirectories will be saved to, i.e. the top of the retrieval tree. The default is `.' (the current directory).  File: wget.info, Node: HTTP Options, Next: FTP Options, Prev: Directory Options, Up: Invoking HTTP Options ============ `--http-user=USER' `--http-passwd=PASSWORD' Specify the username USER and password PASSWORD on an HTTP server. According to the type of the challenge, Wget will encode them using either the `basic' (insecure) or the `digest' authentication scheme. Another way to specify username and password is in the URL itself (*Note URL Format::). For more information about security issues with Wget, *Note Security Considerations::. `-C on/off' `--cache=on/off' When set to off, disable server-side cache. In this case, Wget will send the remote server an appropriate directive (`Pragma: no-cache') to get the file from the remote service, rather than returning the cached version. This is especially useful for retrieving and flushing out-of-date documents on proxy servers. Caching is allowed by default. `--ignore-length' Unfortunately, some HTTP servers (CGI programs, to be more precise) send out bogus `Content-Length' headers, which makes Wget go wild, as it thinks not all the document was retrieved. You can spot this syndrome if Wget retries getting the same document again and again, each time claiming that the (otherwise normal) connection has closed on the very same byte. With this option, Wget will ignore the `Content-Length' header--as if it never existed. `--header=ADDITIONAL-HEADER' Define an ADDITIONAL-HEADER to be passed to the HTTP servers. Headers must contain a `:' preceded by one or more non-blank characters, and must not contain newlines. You may define more than one additional header by specifying `--header' more than once. wget --header='Accept-Charset: iso-8859-2' \ --header='Accept-Language: hr' \ http://fly.cc.fer.hr/ Specification of an empty string as the header value will clear all previous user-defined headers. `--proxy-user=USER' `--proxy-passwd=PASSWORD' Specify the username USER and password PASSWORD for authentication on a proxy server. Wget will encode them using the `basic' authentication scheme. `-s' `--save-headers' Save the headers sent by the HTTP server to the file, preceding the actual contents, with an empty line as the separator. `-U AGENT-STRING' `--user-agent=AGENT-STRING' Identify as AGENT-STRING to the HTTP server. The HTTP protocol allows the clients to identify themselves using a `User-Agent' header field. This enables distinguishing the WWW software, usually for statistical purposes or for tracing of protocol violations. Wget normally identifies as `Wget/VERSION', VERSION being the current version number of Wget. However, some sites have been known to impose the policy of tailoring the output according to the `User-Agent'-supplied information. While conceptually this is not such a bad idea, it has been abused by servers denying information to clients other than `Mozilla' or Microsoft `Internet Explorer'. This option allows you to change the `User-Agent' line issued by Wget. Use of this option is discouraged, unless you really know what you are doing. *NOTE* that Netscape Communications Corp. has claimed that false transmissions of `Mozilla' as the `User-Agent' are a copyright infringement, which will be prosecuted. *DO NOT* misrepresent Wget as Mozilla.  

File: wget.info,  Node: FTP Options,  Next: Recursive Retrieval Options,  Prev: HTTP Options,  Up: Invoking

FTP Options
===========

`--retr-symlinks'
     Retrieve symbolic links on FTP sites as if they were plain files,
     i.e. don't just create links locally.

`-g on/off'
`--glob=on/off'
     Turn FTP globbing on or off.  Globbing means you may use the
     shell-like special characters ("wildcards"), like `*', `?', `['
     and `]' to retrieve more than one file from the same directory at
     once, like:

          wget ftp://gnjilux.cc.fer.hr/*.msg

     By default, globbing will be turned on if the URL contains a
     globbing character.  This option may be used to turn globbing on
     or off permanently.

     You may have to quote the URL to protect it from being expanded by
     your shell.  Globbing makes Wget look for a directory listing,
     which is system-specific.  This is why it currently works only
     with Unix FTP servers (and the ones emulating Unix `ls' output).

`--passive-ftp'
     Use the "passive" FTP retrieval scheme, in which the client
     initiates the data connection.  This is sometimes required for FTP
     to work behind firewalls.


File: wget.info,  Node: Recursive Retrieval Options,  Next: Recursive Accept/Reject Options,  Prev: FTP Options,  Up: Invoking

Recursive Retrieval Options
===========================

`-r'
`--recursive'
     Turn on recursive retrieving.  *Note Recursive Retrieval:: for
     more details.

`-l DEPTH'
`--level=DEPTH'
     Specify recursion maximum depth level DEPTH (*Note Recursive
     Retrieval::).  The default maximum depth is 5.

`--delete-after'
     This option tells Wget to delete every single file it downloads,
     *after* having done so.  It is useful for pre-fetching popular
     pages through a proxy, e.g.:

          wget -r -nd --delete-after http://whatever.com/~popular/page/

     The `-r' option is to retrieve recursively, and `-nd' not to
     create directories.

`-k'
`--convert-links'
     Convert the non-relative links to relative ones locally.  Only the
     references to the documents actually downloaded will be converted;
     the rest will be left unchanged.

     Note that only at the end of the download can Wget know which
     links have been downloaded.  Because of that, much of the work
     done by `-k' will be performed at the end of the downloads.

`-m'
`--mirror'
     Turn on options suitable for mirroring.  This option turns on
     recursion and time-stamping, sets infinite recursion depth and
     keeps FTP directory listings.  It is currently equivalent to
     `-r -N -l inf -nr'.

`-nr'
`--dont-remove-listing'
     Don't remove the temporary `.listing' files generated by FTP
     retrievals.  Normally, these files contain the raw directory
     listings received from FTP servers.  Not removing them can be
     useful to access the full remote file list when running a mirror,
     or for debugging purposes.


File: wget.info,  Node: Recursive Accept/Reject Options,  Prev: Recursive Retrieval Options,  Up: Invoking

Recursive Accept/Reject Options
===============================

`-A ACCLIST --accept ACCLIST'
`-R REJLIST --reject REJLIST'
     Specify comma-separated lists of file name suffixes or patterns to
     accept or reject (*Note Types of Files:: for more details).

`-D DOMAIN-LIST'
`--domains=DOMAIN-LIST'
     Set domains to be accepted and DNS looked-up, where DOMAIN-LIST is
     a comma-separated list.  Note that it does *not* turn on `-H'.
     This option speeds things up, even if only one host is spanned
     (*Note Domain Acceptance::).

`--exclude-domains DOMAIN-LIST'
     Exclude the domains given in a comma-separated DOMAIN-LIST from
     DNS-lookup (*Note Domain Acceptance::).

`-L'
`--relative'
     Follow relative links only.
     Useful for retrieving a specific home page without any
     distractions, not even those from the same hosts (*Note Relative
     Links::).

`--follow-ftp'
     Follow FTP links from HTML documents.  Without this option, Wget
     will ignore all the FTP links.

`-H'
`--span-hosts'
     Enable spanning across hosts when doing recursive retrieving
     (*Note All Hosts::).

`-I LIST'
`--include-directories=LIST'
     Specify a comma-separated list of directories you wish to follow
     when downloading (*Note Directory-Based Limits:: for more
     details).  Elements of LIST may contain wildcards.

`-X LIST'
`--exclude-directories=LIST'
     Specify a comma-separated list of directories you wish to exclude
     from download (*Note Directory-Based Limits:: for more details).
     Elements of LIST may contain wildcards.

`-nh'
`--no-host-lookup'
     Disable the time-consuming DNS lookup of almost all hosts (*Note
     Host Checking::).

`-np'
`--no-parent'
     Do not ever ascend to the parent directory when retrieving
     recursively.  This is a useful option, since it guarantees that
     only the files *below* a certain hierarchy will be downloaded.
     *Note Directory-Based Limits:: for more details.


File: wget.info,  Node: Recursive Retrieval,  Next: Following Links,  Prev: Invoking,  Up: Top

Recursive Retrieval
*******************

   GNU Wget is capable of traversing parts of the Web (or a single HTTP
or FTP server), depth-first following links and directory structure.
This is called "recursive" retrieving, or "recursion".

   With HTTP URLs, Wget retrieves and parses the HTML from the given
URL, retrieving the files the HTML document refers to, through markup
like `href' or `src'.  If the freshly downloaded file is also of type
`text/html', it will be parsed and followed further.

   The maximum "depth" to which the retrieval may descend is specified
with the `-l' option (the default maximum depth is five layers).  *Note
Recursive Retrieval::.

   When retrieving an FTP URL recursively, Wget will retrieve all the
data from the given directory tree (including the subdirectories up to
the specified depth) on the remote server, creating its mirror image
locally.  FTP retrieval is also limited by the `depth' parameter.

   By default, Wget will create a local directory tree, corresponding to
the one found on the remote server.

   Recursive retrieving has a number of applications, the most important
of which is mirroring.  It is also useful for WWW presentations, and
any other occasions where slow network connections should be bypassed by
storing the files locally.

   You should be warned that invoking recursion may cause grave
overloading on your system, because of the fast exchange of data
through the network; all of this may hamper other users' work.  The
same stands for the foreign server you are mirroring--the more requests
it gets in a row, the greater its load.  Careless retrieving can also
fill your file system uncontrollably, which can grind the machine to a
halt.

   The load can be minimized by lowering the maximum recursion level
(`-l') and/or by lowering the number of retries (`-t').  You may also
consider using the `-w' option to slow down your requests to the remote
servers, as well as the numerous options to narrow the number of
followed links (*Note Following Links::).

   Recursive retrieval is a good thing when used properly.  Please take
all precautions not to wreak havoc through carelessness.
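   Putting that advice into one command line: a cautious recursive
retrieval might limit the depth, lower the retries, and pause between
requests.  The host name below is made up for illustration; every
option used is described in the sections above.

     wget -r -l2 -t3 -w10 http://www.example.com/docs/

   This keeps at most two levels below the starting page, gives up on a
URL after three tries, and waits ten seconds between retrievals.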

File: wget.info,  Node: Following Links,  Next: Time-Stamping,  Prev: Recursive Retrieval,  Up: Top

Following Links
***************

   When retrieving recursively, one does not wish to retrieve loads of
unnecessary data.  Most of the time the users bear in mind exactly what
they want to download, and want Wget to follow only specific links.

   For example, if you wish to download the music archive from
`fly.cc.fer.hr', you will not want to download all the home pages that
happen to be referenced by an obscure part of the archive.

   Wget possesses several mechanisms that allow you to fine-tune which
links it will follow.

* Menu:

* Relative Links::          Follow relative links only.
* Host Checking::           Follow links on the same host.
* Domain Acceptance::       Check on a list of domains.
* All Hosts::               No host restrictions.
* Types of Files::          Getting only certain files.
* Directory-Based Limits::  Getting only certain directories.
* FTP Links::               Following FTP links.


File: wget.info,  Node: Relative Links,  Next: Host Checking,  Prev: Following Links,  Up: Following Links

Relative Links
==============

   When only relative links are followed (option `-L'), recursive
retrieving will never span hosts.  No time-expensive DNS-lookups will be
performed, and the process will be very fast, with the minimum strain on
the network.  This will often suit your needs, especially when
mirroring the output of various `x2html' converters, since they
generally output relative links.


File: wget.info,  Node: Host Checking,  Next: Domain Acceptance,  Prev: Relative Links,  Up: Following Links

Host Checking
=============

   The drawback of following only relative links is that humans often
tend to mix them with absolute links to the very same host, and the
very same page.  In this mode (which is the default mode for following
links) all URLs that refer to the same host will be retrieved.

   The problem with this option is host and domain aliases.  Thus there
is no way for Wget to know that `regoc.srce.hr' and `www.srce.hr' are
the same host, or that `fly.cc.fer.hr' is the same as `fly.cc.etf.hr'.
Whenever an absolute link is encountered, the host is DNS-looked-up
with `gethostbyname' to check whether we are maybe dealing with the
same hosts.  Although the results of `gethostbyname' are cached, it is
still a great slowdown, e.g. when dealing with large indices of home
pages on different hosts (because each of the hosts must be
DNS-resolved to see whether it just *might* be an alias of the starting
host).

   To avoid the overhead you may use `-nh', which will turn off
DNS-resolving and make Wget compare hosts literally.  This will make
things run much faster, but also much less reliable (e.g. `www.srce.hr'
and `regoc.srce.hr' will be flagged as different hosts).

   Note that modern HTTP servers allow one IP address to host several
"virtual servers", each having its own directory hierarchy.  Such
"servers" are distinguished by their hostnames (all of which point to
the same IP address); for this to work, a client must send a `Host'
header, which is what Wget does.  However, in that case Wget *must not*
try to divine a host's "real" address, nor try to use the same hostname
for each access, i.e. `-nh' must be turned on.

   In other words, the `-nh' option must be used to enable retrieval
from virtual servers distinguished by their hostnames.  As the number
of such server setups grows, the behavior of `-nh' may become the
default in the future.
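   For instance, to retrieve recursively from a site served as one of
several virtual hosts on a shared IP address, host lookup should stay
off; a sketch, with an invented host name:

     wget -r -nh http://virtual.example.com/

   With `-nh' in effect, Wget compares host names literally, which is
exactly what virtual servers require.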

File: wget.info,  Node: Domain Acceptance,  Next: All Hosts,  Prev: Host Checking,  Up: Following Links

Domain Acceptance
=================

   With the `-D' option you may specify the domains that will be
followed.  Hosts whose domain is not in this list will not be
DNS-resolved.  Thus you can specify `-Dmit.edu' just to make sure that
*nothing outside of MIT gets looked up*.  This is very important and
useful.  It also means that `-D' does *not* imply `-H' (span all
hosts), which must be specified explicitly.  Feel free to use this
option since it will speed things up, with almost all the reliability
of checking for all hosts.  Thus you could invoke

     wget -r -D.hr http://fly.cc.fer.hr/

to make sure that only the hosts in the `.hr' domain get DNS-looked-up
for being equal to `fly.cc.fer.hr'.  So `fly.cc.etf.hr' will be checked
(only once!) and found equal, but `www.gnu.ai.mit.edu' will not even be
checked.

   Of course, domain acceptance can be used to limit the retrieval to
particular domains with spanning of hosts in them, but then you must
specify `-H' explicitly.  E.g.:

     wget -r -H -Dmit.edu,stanford.edu http://www.mit.edu/

will start with `http://www.mit.edu/', following links across MIT and
Stanford.

   If there are domains you want to exclude specifically, you can do it
with `--exclude-domains', which accepts the same type of arguments as
`-D', but will *exclude* all the listed domains.  For example, if you
want to download all the hosts from the `foo.edu' domain, with the
exception of `sunsite.foo.edu', you can do it like this:

     wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu \
         http://www.foo.edu/


File: wget.info,  Node: All Hosts,  Next: Types of Files,  Prev: Domain Acceptance,  Up: Following Links

All Hosts
=========

   When `-H' is specified without `-D', all hosts are freely spanned.
There are no restrictions whatsoever as to what part of the net Wget
will go to fetch documents, other than maximum retrieval depth.  If a
page references `www.yahoo.com', so be it.  Such an option is rarely
useful by itself.


File: wget.info,  Node: Types of Files,  Next: Directory-Based Limits,  Prev: All Hosts,  Up: Following Links

Types of Files
==============

   When downloading material from the web, you will often want to
restrict the retrieval to only certain file types.  For example, if you
are interested in downloading GIFs, you will not be overjoyed to get
loads of PostScript documents, and vice versa.

   Wget offers two options to deal with this problem.  Each option
description lists a short name, a long name, and the equivalent command
in `.wgetrc'.

`-A ACCLIST'
`--accept ACCLIST'
`accept = ACCLIST'
     The argument to the `--accept' option is a list of file suffixes
     or patterns that Wget will download during recursive retrieval.  A
     suffix is the ending part of a file name, and consists of "normal"
     letters, e.g. `gif' or `.jpg'.  A matching pattern contains
     shell-like wildcards, e.g. `books*' or `zelazny*196[0-9]*'.

     So, specifying `wget -A gif,jpg' will make Wget download only the
     files ending with `gif' or `jpg', i.e. GIFs and JPEGs.  On the
     other hand, `wget -A "zelazny*196[0-9]*"' will download only files
     beginning with `zelazny' and containing numbers from 1960 to 1969
     anywhere within.  Look up the manual of your shell for a
     description of how pattern matching works.

     Of course, any number of suffixes and patterns can be combined
     into a comma-separated list, and given as an argument to `-A'.
`-R REJLIST'
`--reject REJLIST'
`reject = REJLIST'
     The `--reject' option works the same way as `--accept', only its
     logic is the reverse; Wget will download all files *except* the
     ones matching the suffixes (or patterns) in the list.

     So, if you want to download a whole page except for the cumbersome
     MPEGs and .AU files, you can use `wget -R mpg,mpeg,au'.
     Analogously, to download all files except the ones beginning with
     `bjork', use `wget -R "bjork*"'.  The quotes are to prevent
     expansion by the shell.

   The `-A' and `-R' options may be combined to achieve even better
fine-tuning of which files to retrieve.  E.g. `wget -A "*zelazny*" -R
.ps' will download all the files having `zelazny' as a part of their
name, but *not* the PostScript files.

   Note that these two options do not affect the downloading of HTML
files; Wget must load all the HTMLs to know where to go at
all--recursive retrieval would make no sense otherwise.


File: wget.info,  Node: Directory-Based Limits,  Next: FTP Links,  Prev: Types of Files,  Up: Following Links

Directory-Based Limits
======================

   Regardless of other link-following facilities, it is often useful to
restrict which files to retrieve based on the directories those files
are placed in.  There can be many reasons for this--the home pages may
be organized in a reasonable directory structure; or some directories
may contain useless information, e.g. `/cgi-bin' or `/dev' directories.

   Wget offers three different options to deal with this requirement.
Each option description lists a short name, a long name, and the
equivalent command in `.wgetrc'.

`-I LIST'
`--include LIST'
`include_directories = LIST'
     The `-I' option accepts a comma-separated list of directories
     included in the retrieval.  Any other directories will simply be
     ignored.  The directories are absolute paths.

     So, if you wish to download from `http://host/people/bozo/'
     following only links to bozo's colleagues in the `/people'
     directory and the bogus scripts in `/cgi-bin', you can specify:

          wget -I /people,/cgi-bin http://host/people/bozo/

`-X LIST'
`--exclude LIST'
`exclude_directories = LIST'
     The `-X' option is exactly the reverse of `-I'--this is a list of
     directories *excluded* from the download.  E.g. if you do not want
     Wget to download things from the `/cgi-bin' directory, specify
     `-X /cgi-bin' on the command line.

     As with `-A'/`-R', these two options can be combined to get a
     better fine-tuning of downloading subdirectories.  E.g. if you
     want to load all the files from the `/pub' hierarchy except for
     `/pub/worthless', specify `-I/pub -X/pub/worthless'.

`-np'
`--no-parent'
`no_parent = on'
     The simplest, and often very useful way of limiting directories is
     disallowing retrieval of the links that refer to the hierarchy
     above the beginning directory, i.e. disallowing ascent to the
     parent directory/directories.

     The `--no-parent' option (short `-np') is useful in this case.
     Using it guarantees that you will never leave the existing
     hierarchy.  Supposing you issue Wget with:

          wget -r --no-parent http://somehost/~luzer/my-archive/

     You may rest assured that none of the references to
     `/~his-girls-homepage/' or `/~luzer/all-my-mpegs/' will be
     followed.  Only the archive you are interested in will be
     downloaded.  Essentially, `--no-parent' is similar to
     `-I/~luzer/my-archive', only it handles redirections in a more
     intelligent fashion.
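   Spelled out as a full command line, the `/pub' example above might
look like the following; the host name is invented for the
illustration:

     wget -r -I/pub -X/pub/worthless ftp://ftp.example.com/pub/

   Wget will then descend into `/pub' and its subdirectories, skipping
only the `/pub/worthless' branch.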

File: wget.info,  Node: FTP Links,  Prev: Directory-Based Limits,  Up: Following Links

Following FTP Links
===================

   The rules for FTP are somewhat specific, as it is necessary for them
to be.  FTP links in HTML documents are often included for purposes of
reference, and it is often inconvenient to download them by default.

   To have FTP links followed from HTML documents, you need to specify
the `--follow-ftp' option.  Having done that, FTP links will span hosts
regardless of the `-H' setting.  This is logical, as FTP links rarely
point to the same host where the HTTP server resides.  For similar
reasons, the `-L' option has no effect on such downloads.  On the other
hand, domain acceptance (`-D') and suffix rules (`-A' and `-R') apply
normally.

   Also note that followed links to FTP directories will not be
retrieved recursively further.


File: wget.info,  Node: Time-Stamping,  Next: Startup File,  Prev: Following Links,  Up: Top

Time-Stamping
*************

   One of the most important aspects of mirroring information from the
Internet is updating your archives.

   Downloading the whole archive again and again, just to replace a few
changed files, is expensive, both in terms of wasted bandwidth and
money, and the time to do the update.  This is why all the mirroring
tools offer the option of incremental updating.

   Such an updating mechanism means that the remote server is scanned in
search of "new" files.  Only those new files will be downloaded in the
place of the old ones.

   A file is considered new if one of these two conditions is met:

  1. A file of that name does not already exist locally.

  2. A file of that name does exist, but the remote file was modified
     more recently than the local file.

   To implement this, the program needs to be aware of the time of last
modification of both remote and local files.  Such information is
called the "time-stamp".

   The time-stamping in GNU Wget is turned on using the `--timestamping'
(`-N') option, or through the `timestamping = on' directive in
`.wgetrc'.  With this option, for each file it intends to download,
Wget will check whether a local file of the same name exists.  If it
does, and the remote file is older, Wget will not download it.

   If the local file does not exist, or the sizes of the files do not
match, Wget will download the remote file no matter what the
time-stamps say.

* Menu:

* Time-Stamping Usage::
* HTTP Time-Stamping Internals::
* FTP Time-Stamping Internals::


File: wget.info,  Node: Time-Stamping Usage,  Next: HTTP Time-Stamping Internals,  Prev: Time-Stamping,  Up: Time-Stamping

Time-Stamping Usage
===================

   The usage of time-stamping is simple.  Say you would like to
download a file so that it keeps its date of modification.

     wget -S http://www.gnu.ai.mit.edu/

   A simple `ls -l' shows that the time stamp on the local file equals
the state of the `Last-Modified' header, as returned by the server.  As
you can see, the time-stamping info is preserved locally, even without
`-N'.

   Several days later, you would like Wget to check if the remote file
has changed, and download it if it has.

     wget -N http://www.gnu.ai.mit.edu/

   Wget will ask the server for the last-modified date.  If the local
file is newer, the remote file will not be re-fetched.  However, if the
remote file is more recent, Wget will proceed fetching it normally.

   The same goes for FTP.  For example:

     wget ftp://ftp.ifi.uio.no/pub/emacs/gnus/*

   `ls' will show that the timestamps are set according to the state on
the remote server.
Reissuing the command with `-N' will make Wget re-fetch *only* the
files that have been modified.

   In both HTTP and FTP retrieval Wget will time-stamp the local file
correctly (with or without `-N') if it gets the stamps, i.e. gets the
directory listing for FTP or the `Last-Modified' header for HTTP.

   If you wished to mirror the GNU archive every week, you would use the
following command every week:

     wget --timestamping -r ftp://prep.ai.mit.edu/pub/gnu/


File: wget.info,  Node: HTTP Time-Stamping Internals,  Next: FTP Time-Stamping Internals,  Prev: Time-Stamping Usage,  Up: Time-Stamping

HTTP Time-Stamping Internals
============================

   Time-stamping in HTTP is implemented by checking the `Last-Modified'
header.  If you wish to retrieve the file `foo.html' through HTTP, Wget
will check whether `foo.html' exists locally.  If it doesn't,
`foo.html' will be retrieved unconditionally.

   If the file does exist locally, Wget will first check its local
time-stamp (similar to the way `ls -l' checks it), and then send a
`HEAD' request to the remote server, demanding the information on the
remote file.

   The `Last-Modified' header is examined to find which file was
modified more recently (which makes it "newer").  If the remote file is
newer, it will be downloaded; if it is older, Wget will give up.(1)

   Arguably, HTTP time-stamping should be implemented using the
`If-Modified-Since' request.

   ---------- Footnotes ----------

   (1)  As an additional check, Wget will look at the `Content-Length'
header, and compare the sizes; if they are not the same, the remote
file will be downloaded no matter what the time-stamp says.


File: wget.info,  Node: FTP Time-Stamping Internals,  Prev: HTTP Time-Stamping Internals,  Up: Time-Stamping

FTP Time-Stamping Internals
===========================

   In theory, FTP time-stamping works much the same as HTTP, only FTP
has no headers--time-stamps must be received from the directory
listings.

   For each directory files must be retrieved from, Wget will use the
`LIST' command to get the listing.  It will try to analyze the listing,
assuming that it is a Unix `ls -l' listing, and extract the
time-stamps.  The rest is exactly the same as for HTTP.

   The assumption that every directory listing is a Unix-style listing
may sound extremely constraining, but in practice it is not, as many
non-Unix FTP servers use the Unixoid listing format because most (all?)
of the clients understand it.  Bear in mind that RFC 959 defines no
standard way to get a file list, let alone the time-stamps.  We can
only hope that a future standard will define this.

   Another non-standard solution includes the use of the `MDTM' command
that is supported by some FTP servers (including the popular
`wu-ftpd'), which returns the exact time of the specified file.  Wget
may support this command in the future.


File: wget.info,  Node: Startup File,  Next: Examples,  Prev: Time-Stamping,  Up: Top

Startup File
************

   Once you know how to change default settings of Wget through command
line arguments, you may wish to make some of those settings permanent.
You can do that in a convenient way by creating the Wget startup
file--`.wgetrc'.

   While `.wgetrc' is the "main" initialization file, it is convenient
to have a special facility for storing passwords.  Thus Wget reads and
interprets the contents of `$HOME/.netrc', if it finds it.  You can
find the `.netrc' format in your system manuals.

   Wget reads `.wgetrc' upon startup, recognizing a limited set of
commands.

* Menu:

* Wgetrc Location::   Location of various wgetrc files.
* Wgetrc Syntax::     Syntax of wgetrc.
* Wgetrc Commands::   List of available commands.
* Sample Wgetrc::     A wgetrc example.


File: wget.info,  Node: Wgetrc Location,  Next: Wgetrc Syntax,  Prev: Startup File,  Up: Startup File

Wgetrc Location
===============

   When initializing, Wget will look for a "global" startup file,
`/usr/local/etc/wgetrc' by default (or some prefix other than
`/usr/local', if Wget was not installed there) and read commands from
there, if it exists.

   Then it will look for the user's file.  If the environmental variable
`WGETRC' is set, Wget will try to load that file.  Failing that, no
further attempts will be made.  If `WGETRC' is not set, Wget will try
to load `$HOME/.wgetrc'.

   The fact that user's settings are loaded after the system-wide ones
means that in case of collision the user's wgetrc *overrides* the
system-wide wgetrc (in `/usr/local/etc/wgetrc' by default).  Fascist
admins, away!


File: wget.info,  Node: Wgetrc Syntax,  Next: Wgetrc Commands,  Prev: Wgetrc Location,  Up: Startup File

Wgetrc Syntax
=============

   The syntax of a wgetrc command is simple:

     variable = value

   The "variable" will also be called "command".  Valid "values" are
different for different commands.

   The commands are case-insensitive and underscore-insensitive.  Thus
`DIr__PrefiX' is the same as `dirprefix'.  Empty lines, lines beginning
with `#' and lines containing white-space only are discarded.

   Commands that expect a comma-separated list will clear the list on an
empty command.  So, if you wish to reset the rejection list specified in
the global `wgetrc', you can do it with:

     reject =


File: wget.info,  Node: Wgetrc Commands,  Next: Sample Wgetrc,  Prev: Wgetrc Syntax,  Up: Startup File

Wgetrc Commands
===============

   The complete set of commands is listed below, the letter after `='
denoting the value the command takes.  It is `on/off' for `on' or `off'
(which can also be `1' or `0'), STRING for any non-empty string or N
for a positive integer.  For example, you may specify `use_proxy = off'
to disable use of proxy servers by default.  You may use `inf' for
infinite values, where appropriate.

   Most of the commands have their equivalent command-line option
(*Note Invoking::), except some more obscure or rarely used ones.

accept/reject = STRING
     Same as `-A'/`-R' (*Note Types of Files::).

add_hostdir = on/off
     Enable/disable host-prefixed file names.  `-nH' disables it.

continue = on/off
     Enable/disable continuation of the retrieval, the same as `-c'
     (which enables it).
background = on/off
     Enable/disable going to background, the same as `-b' (which
     enables it).

base = STRING
     Set base for relative URLs, the same as `-B'.

cache = on/off
     When set to off, disallow server-caching.  See the `-C' option.

convert_links = on/off
     Convert non-relative links locally.  The same as `-k'.

cut_dirs = N
     Ignore N remote directory components.

debug = on/off
     Debug mode, same as `-d'.

delete_after = on/off
     Delete after download, the same as `--delete-after'.

dir_prefix = STRING
     Top of directory tree, the same as `-P'.

dirstruct = on/off
     Turning dirstruct on or off, the same as `-x' or `-nd',
     respectively.

domains = STRING
     Same as `-D' (*Note Domain Acceptance::).

dot_bytes = N
     Specify the number of bytes "contained" in a dot, as seen
     throughout the retrieval (1024 by default).  You can postfix the
     value with `k' or `m', representing kilobytes and megabytes,
     respectively.  With dot settings you can tailor the dot retrieval
     to suit your needs, or you can use the predefined "styles" (*Note
     Download Options::).

dots_in_line = N
     Specify the number of dots that will be printed in each line
     throughout the retrieval (50 by default).

dot_spacing = N
     Specify the number of dots in a single cluster (10 by default).

dot_style = STRING
     Specify the dot retrieval "style", as with `--dot-style'.

exclude_directories = STRING
     Specify a comma-separated list of directories you wish to exclude
     from download, the same as `-X' (*Note Directory-Based Limits::).

exclude_domains = STRING
     Same as `--exclude-domains' (*Note Domain Acceptance::).

follow_ftp = on/off
     Follow FTP links from HTML documents, the same as `-f'.

force_html = on/off
     If set to on, force the input filename to be regarded as an HTML
     document, the same as `-F'.

ftp_proxy = STRING
     Use STRING as FTP proxy, instead of the one specified in the
     environment.

glob = on/off
     Turn globbing on/off, the same as `-g'.

header = STRING
     Define an additional header, like `--header'.

http_passwd = STRING
     Set HTTP password.

http_proxy = STRING
     Use STRING as HTTP proxy, instead of the one specified in the
     environment.

http_user = STRING
     Set HTTP user to STRING.

ignore_length = on/off
     When set to on, ignore the `Content-Length' header; the same as
     `--ignore-length'.

include_directories = STRING
     Specify a comma-separated list of directories you wish to follow
     when downloading, the same as `-I'.

input = STRING
     Read the URLs from STRING, like `-i'.

kill_longer = on/off
     Consider data longer than specified in the `Content-Length' header
     as invalid (and retry getting it).  The default behaviour is to
     save as much data as there is, provided there is more than or
     equal to the value in `Content-Length'.

logfile = STRING
     Set logfile, the same as `-o'.

login = STRING
     Your user name on the remote machine, for FTP.  Defaults to
     `anonymous'.

mirror = on/off
     Turn mirroring on/off.  The same as `-m'.

netrc = on/off
     Turn reading netrc on or off.

noclobber = on/off
     Same as `-nc'.

no_parent = on/off
     Disallow retrieving outside the directory hierarchy, like
     `--no-parent' (*Note Directory-Based Limits::).

no_proxy = STRING
     Use STRING as the comma-separated list of domains to avoid in
     proxy loading, instead of the one specified in the environment.

output_document = STRING
     Set the output filename, the same as `-O'.

passive_ftp = on/off
     Set passive FTP, the same as `--passive-ftp'.

passwd = STRING
     Set your FTP password to STRING.  Without this setting, the
     password defaults to `username@hostname.domainname'.

proxy_user = STRING
     Set proxy authentication user name to STRING, like `--proxy-user'.
proxy_passwd = STRING
     Set proxy authentication password to STRING, like
     `--proxy-passwd'.

quiet = on/off
     Quiet mode, the same as `-q'.

quota = QUOTA
     Specify the download quota, which is useful to put in the global
     wgetrc.  When the download quota is specified, Wget will stop
     retrieving after the download sum has become greater than the
     quota.  The quota can be specified in bytes (default), kbytes
     (`k' appended) or mbytes (`m' appended).  Thus `quota = 5m' will
     set the quota to 5 mbytes.  Note that the user's startup file
     overrides system settings.

reclevel = N
     Recursion level, the same as `-l'.

recursive = on/off
     Recursive on/off, the same as `-r'.

relative_only = on/off
     Follow only relative links, the same as `-L' (*Note Relative
     Links::).

remove_listing = on/off
     If set to on, remove FTP listings downloaded by Wget.  Setting it
     to off is the same as `-nr'.

retr_symlinks = on/off
     When set to on, retrieve symbolic links as if they were plain
     files; the same as `--retr-symlinks'.

robots = on/off
     Use (or not) the `/robots.txt' file (*Note Robots::).  Be sure to
     know what you are doing before changing the default (which is
     `on').

server_response = on/off
     Choose whether or not to print the HTTP and FTP server responses,
     the same as `-S'.

simple_host_check = on/off
     Same as `-nh' (*Note Host Checking::).

span_hosts = on/off
     Same as `-H'.

timeout = N
     Set timeout value, the same as `-T'.

timestamping = on/off
     Turn timestamping on/off.  The same as `-N' (*Note
     Time-Stamping::).

tries = N
     Set number of retries per URL, the same as `-t'.

use_proxy = on/off
     Turn proxy support on/off.  The same as `-Y'.

verbose = on/off
     Turn verbose on/off, the same as `-v'/`-nv'.

wait = N
     Wait N seconds between retrievals, the same as `-w'.


File: wget.info,  Node: Sample Wgetrc,  Prev: Wgetrc Commands,  Up: Startup File

Sample Wgetrc
=============

   This is the sample initialization file, as given in the distribution.
It is divided in two sections--one for global usage (suitable for the
global startup file), and one for local usage (suitable for
`$HOME/.wgetrc').  Be careful about the things you change.

   Note that all the lines are commented out.  For any line to have
effect, you must remove the `#' prefix at the beginning of the line.

     ###
     ### Sample Wget initialization file .wgetrc
     ###

     ## You can use this file to change the default behaviour of wget or to
     ## avoid having to type many many command-line options.  This file does
     ## not contain a comprehensive list of commands -- look at the manual
     ## to find out what you can put into this file.
     ##
     ## Wget initialization file can reside in /usr/local/etc/wgetrc
     ## (global, for all users) or $HOME/.wgetrc (for a single user).
     ##
     ## To use any of the settings in this file, you will have to uncomment
     ## them (and probably change them).

     ##
     ## Global settings (useful for setting up in /usr/local/etc/wgetrc).
     ## Think well before you change them, since they may reduce wget's
     ## functionality, and make it behave contrary to the documentation:
     ##

     # You can set retrieve quota for beginners by specifying a value
     # optionally followed by 'K' (kilobytes) or 'M' (megabytes).  The
     # default quota is unlimited.
     #quota = inf

     # You can lower (or raise) the default number of retries when
     # downloading a file (default is 20).
     #tries = 20

     # Lowering the maximum depth of the recursive retrieval is handy to
     # prevent newbies from going too "deep" when they unwittingly start
     # the recursive retrieval.  The default is 5.
     #reclevel = 5

     # Many sites are behind firewalls that do not allow initiation of
     # connections from the outside.
     # On these sites you have to use the `passive' feature of FTP.  If
     # you are behind such a firewall, you can turn this on to make
     # Wget use passive FTP by default.
     #passive_ftp = off

     ##
     ## Local settings (for a user to set in his $HOME/.wgetrc).  It is
     ## *highly* undesirable to put these settings in the global file, since
     ## they are potentially dangerous to "normal" users.
     ##
     ## Even when setting up your own ~/.wgetrc, you should know what you
     ## are doing before doing so.
     ##

     # Set this to on to use timestamping by default:
     #timestamping = off

     # It is a good idea to make Wget send your email address in a `From:'
     # header with your request (so that server administrators can contact
     # you in case of errors).  Wget does *not* send `From:' by default.
     #header = From: Your Name

     # You can set up other headers, like Accept-Language.  Accept-Language
     # is *not* sent by default.
     #header = Accept-Language: en

     # You can set the default proxy for Wget to use.  It will override the
     # value in the environment.
     #http_proxy = http://proxy.yoyodyne.com:18023/

     # If you do not want to use proxy at all, set this to off.
     #use_proxy = on

     # You can customize the retrieval outlook.  Valid options are default,
     # binary, mega and micro.
     #dot_style = default

     # Setting this to off makes Wget not download /robots.txt.  Be sure to
     # know *exactly* what /robots.txt is and how it is used before changing
     # the default!
     #robots = on

     # It can be useful to make Wget wait between connections.  Set this to
     # the number of seconds you want Wget to wait.
     #wait = 0

     # You can force creating directory structure, even if a single file is
     # being retrieved, by setting this to on.
     #dirstruct = off

     # You can turn on recursive retrieving by default (don't do this if
     # you are not sure you know what it means) by setting this to on.
     #recursive = off

     # To have Wget follow FTP links from HTML files by default, set this
     # to on:
     #follow_ftp = off


File: wget.info,  Node: Examples,  Next: Various,  Prev: Startup File,  Up: Top

Examples
********

   The examples are classified into three sections for the sake of
clarity.  The first section is a tutorial for beginners.  The second
section explains some of the more complex program features.  The third
section contains advice for mirror administrators, as well as even more
complex features (that some would call perverted).

* Menu:

* Simple Usage::     Simple, basic usage of the program.
* Advanced Usage::   Advanced techniques of usage.
* Guru Usage::       Mirroring and the hairy stuff.


File: wget.info,  Node: Simple Usage,  Next: Advanced Usage,  Prev: Examples,  Up: Examples

Simple Usage
============

   * Say you want to download a URL.  Just type:

          wget http://fly.cc.fer.hr/

     The response will be something like:

          --13:30:45--  http://fly.cc.fer.hr:80/en/
                     => `index.html'
          Connecting to fly.cc.fer.hr:80... connected!
          HTTP request sent, awaiting response... 200 OK
          Length: 4,694 [text/html]

             0K -> ....                                 [100%]

          13:30:46 (23.75 KB/s) - `index.html' saved [4694/4694]

   * But what will happen if the connection is slow, and the file is
     lengthy?  The connection will probably fail before the whole file
     is retrieved, more than once.  In this case, Wget will try getting
     the file until it either gets the whole of it, or exceeds the
     default number of retries (this being 20).  It is easy to change
     the number of tries to 45, to ensure that the whole file will
     arrive safely:

          wget --tries=45 http://fly.cc.fer.hr/jpg/flyweb.jpg

   * Now let's leave Wget to work in the background, and write its
     progress to log file `log'.
     It is tiring to type `--tries', so we shall use `-t'.

          wget -t 45 -o log http://fly.cc.fer.hr/jpg/flyweb.jpg &

     The ampersand at the end of the line makes sure that Wget works in
     the background.  To unlimit the number of retries, use `-t inf'.

   * The usage of FTP is just as simple.  Wget will take care of the
     login and password.

          $ wget ftp://gnjilux.cc.fer.hr/welcome.msg
          --10:08:47--  ftp://gnjilux.cc.fer.hr:21/welcome.msg
                     => `welcome.msg'
          Connecting to gnjilux.cc.fer.hr:21... connected!
          Logging in as anonymous ... Logged in!
          ==> TYPE I ... done.  ==> CWD not needed.
          ==> PORT ... done.    ==> RETR welcome.msg ... done.
          Length: 1,340 (unauthoritative)

             0K -> .                                     [100%]

          10:08:48 (1.28 MB/s) - `welcome.msg' saved [1340]

   * If you specify a directory, Wget will retrieve the directory
     listing, parse it and convert it to HTML.  Try:

          wget ftp://prep.ai.mit.edu/pub/gnu/
          lynx index.html


File: wget.info,  Node: Advanced Usage,  Next: Guru Usage,  Prev: Simple Usage,  Up: Examples

Advanced Usage
==============

   * You would like to read the list of URLs from a file?  Not a
     problem:

          wget -i file

     If you specify `-' as the file name, the URLs will be read from
     standard input.

   * Create a mirror image of the GNU WWW site (with the same directory
     structure the original has) with only one try per document, saving
     the log of the activities to `gnulog':

          wget -r -t1 http://www.gnu.ai.mit.edu/ -o gnulog

   * Retrieve the first layer of Yahoo links:

          wget -r -l1 http://www.yahoo.com/

   * Retrieve the `index.html' of `www.lycos.com', showing the original
     server headers:

          wget -S http://www.lycos.com/

   * Save the server headers with the file:

          wget -s http://www.lycos.com/
          more index.html

   * Retrieve the first two levels of `wuarchive.wustl.edu', saving
     them to `/tmp':

          wget -P/tmp -l2 ftp://wuarchive.wustl.edu/

   * You want to download all the GIFs from an HTTP directory.  `wget
     http://host/dir/*.gif' doesn't work, since HTTP retrieval does not
     support globbing.  In that case, use:

          wget -r -l1 --no-parent -A.gif http://host/dir/

     It is a bit of a kludge, but it works.  `-r -l1' means to retrieve
     recursively (*Note Recursive Retrieval::), with maximum depth of
     1.  `--no-parent' means that references to the parent directory
     are ignored (*Note Directory-Based Limits::), and `-A.gif' means
     to download only the GIF files.  `-A "*.gif"' would have worked
     too.

   * Suppose you were in the middle of downloading, when Wget was
     interrupted.  Now you do not want to clobber the files already
     present.  It would be:

          wget -nc -r http://www.gnu.ai.mit.edu/

   * If you want to encode your own username and password for HTTP or
     FTP, use the appropriate URL syntax (*Note URL Format::).

          wget ftp://hniksic:mypassword@jagor.srce.hr/.emacs

   * If you do not like the default retrieval visualization (1K dots
     with 10 dots per cluster and 50 dots per line), you can customize
     it through dot settings (*Note Wgetrc Commands::).  For example,
     many people like the "binary" style of retrieval, with 8K dots and
     512K lines:

          wget --dot-style=binary ftp://prep.ai.mit.edu/pub/gnu/README

     You can experiment with other styles, like:

          wget --dot-style=mega ftp://ftp.xemacs.org/pub/xemacs/xemacs-20.4/xemacs-20.4.tar.gz
          wget --dot-style=micro http://fly.cc.fer.hr/

     To make these settings permanent, put them in your `.wgetrc', as
     described before (*Note Sample Wgetrc::).


File: wget.info,  Node: Guru Usage,  Prev: Advanced Usage,  Up: Examples

Guru Usage
==========

   * If you wish Wget to keep a mirror of a page (or FTP
     subdirectories), use `--mirror' (`-m'), which is the shorthand for
     `-r -N -l inf -nr'.
     You can put Wget in the crontab file asking it to recheck a site
     each Sunday:

          crontab
          0 0 * * 0 wget --mirror ftp://ftp.xemacs.org/pub/xemacs/ -o /home/me/weeklog

   * You may wish to do the same with someone's home page.  But you do
     not want to download all those images--you're only interested in
     HTML.

          wget --mirror -A.html http://www.w3.org/

   * But what about mirroring the hosts networkologically close to you?
     It seems so awfully slow because of all that DNS resolving.  Just
     use `-D' (*Note Domain Acceptance::).

          wget -rN -Dsrce.hr http://www.srce.hr/

     Now Wget will correctly find out that `regoc.srce.hr' is the same
     as `www.srce.hr', but will not even take into consideration the
     link to `www.mit.edu'.

   * You have a presentation and would like the dumb absolute links to
     be converted to relative?  Use `-k':

          wget -k -r URL

   * You would like the output documents to go to standard output
     instead of to files?  OK, but Wget will automatically shut up
     (turn on `--quiet') to prevent mixing of Wget output and the
     retrieved documents.

          wget -O - http://jagor.srce.hr/ http://www.srce.hr/

     You can also combine the two options and make weird pipelines to
     retrieve the documents from remote hotlists:

          wget -O - http://cool.list.com/ | wget --force-html -i -


File: wget.info,  Node: Various,  Next: Appendices,  Prev: Examples,  Up: Top

Various
*******

   This chapter contains all the stuff that could not fit anywhere else.

* Menu:

* Proxies::          Support for proxy servers.
* Distribution::     Getting the latest version.
* Mailing List::     Wget mailing list for announcements and discussion.
* Reporting Bugs::   How and where to report bugs.
* Portability::      The systems Wget works on.
* Signals::          Signal-handling performed by Wget.


File: wget.info,  Node: Proxies,  Next: Distribution,  Prev: Various,  Up: Various

Proxies
=======

   "Proxies" are special-purpose HTTP servers designed to transfer data
from remote servers to local clients.  One typical use of proxies is
lightening network load for users behind a slow connection.  This is
achieved by channeling all HTTP and FTP requests through the proxy,
which caches the transferred data.  When a cached resource is requested
again, the proxy will return the data from its cache.  Another use for
proxies is for companies that separate (for security reasons) their
internal networks from the rest of the Internet.  In order to obtain
information from the Web, their users connect and retrieve remote data
using an authorized proxy.

   Wget supports proxies for both HTTP and FTP retrievals.  The standard
way to specify proxy location, which Wget recognizes, is using the
following environment variables:

`http_proxy'
     This variable should contain the URL of the proxy for HTTP
     connections.

`ftp_proxy'
     This variable should contain the URL of the proxy for FTP
     connections.  It is quite common that `http_proxy' and `ftp_proxy'
     are set to the same URL.

`no_proxy'
     This variable should contain a comma-separated list of domain
     extensions for which the proxy should *not* be used.  For
     instance, if the value of `no_proxy' is `.mit.edu', the proxy will
     not be used to retrieve documents from MIT.

   In addition to the environment variables, proxy location and settings
may be specified from within Wget itself.

`-Y on/off'
`--proxy=on/off'
`proxy = on/off'
     This option may be used to turn the proxy support on or off.  Proxy
     support is on by default, provided that the appropriate environment
     variables are set.

`http_proxy = URL'
`ftp_proxy = URL'
`no_proxy = STRING'
     These startup file variables allow you to override the proxy
     settings specified by the environment.
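   For example, a `.wgetrc' fragment that routes HTTP and FTP through a
local proxy while exempting an internal domain could read as follows;
the proxy host, port and domain are made up for the illustration:

     http_proxy = http://proxy.example.com:8080/
     ftp_proxy  = http://proxy.example.com:8080/
     no_proxy   = .internal.example.com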
   Some proxy servers require authorization to enable you to use them.
The authorization consists of "username" and "password", which must be
sent by Wget.  As with HTTP authorization, several authentication
schemes exist.  For proxy authorization only the `Basic' authentication
scheme is currently implemented.

   You may specify your username and password either through the proxy
URL or through the command-line options.  Assuming that the company's
proxy is located at `proxy.company.com' at port 8001, a proxy URL
location containing authorization data might look like this:

     http://hniksic:mypassword@proxy.company.com:8001/

   Alternatively, you may use the `--proxy-user' and `--proxy-passwd'
options, and the equivalent `.wgetrc' settings `proxy_user' and
`proxy_passwd' to set the proxy username and password.


File: wget.info,  Node: Distribution,  Next: Mailing List,  Prev: Proxies,  Up: Various

Distribution
============

   Like all GNU utilities, the latest version of Wget can be found at
the master GNU archive site prep.ai.mit.edu, and its mirrors.  For
example, Wget 1.5.3 can be found at
`ftp://prep.ai.mit.edu/pub/gnu/wget-1.5.3.tar.gz'.


File: wget.info,  Node: Mailing List,  Next: Reporting Bugs,  Prev: Distribution,  Up: Various

Mailing List
============

   Wget has its own mailing list, thanks to Karsten Thygesen.  The
mailing list is for discussion of Wget features and the web, reporting
Wget bugs (those that you think may be of interest to the public) and
mailing announcements.  You are welcome to subscribe.  The more people
on the list, the better!

   To subscribe, send mail to the list's subscription address with the
magic word `subscribe' in the subject line; unsubscribe by mailing its
unsubscribe counterpart.

   The mailing list is archived at `http://fly.cc.fer.hr/archive/wget'.


File: wget.info,  Node: Reporting Bugs,  Next: Portability,  Prev: Mailing List,  Up: Various

Reporting Bugs
==============

   You are welcome to send bug reports about GNU Wget to the author.
The bugs that you think are of interest to the public (i.e. more people
should be informed about them) can be Cc-ed to the mailing list.

   Before actually submitting a bug report, please try to follow a few
simple guidelines.

  1. Please try to ascertain that the behaviour you see really is a
     bug.  If Wget crashes, it's a bug.  If Wget does not behave as
     documented, it's a bug.  If things work strange, but you are not
     sure about the way they are supposed to work, it might well be a
     bug.

  2. Try to repeat the bug in as simple circumstances as possible.
     E.g. if Wget crashes on `wget -rLl0 -t5 -Y0 http://yoyodyne.com -o
     /tmp/log', you should try to see if it will crash with a simpler
     set of options.

     Also, while I will probably be interested to know the contents of
     your `.wgetrc' file, just dumping it into the debug message is
     probably a bad idea.  Instead, you should first try to see if the
     bug repeats with `.wgetrc' moved out of the way.  Only if it turns
     out that `.wgetrc' settings affect the bug, should you mail me the
     relevant parts of the file.

  3. Please start Wget with the `-d' option and send the log (or the
     relevant parts of it).  If Wget was compiled without debug
     support, recompile it.  It is *much* easier to trace bugs with
     debug support on.

  4. If Wget has crashed, try to run it in a debugger, e.g. `gdb `which
     wget` core' and type `where' to get the backtrace.

  5. Find where the bug is, fix it and send me the patches.
     :-)


File: wget.info,  Node: Portability,  Next: Signals,  Prev: Reporting Bugs,  Up: Various

Portability
===========

   Since Wget uses GNU Autoconf for building and configuring, and avoids
using "special" ultra-mega-cool features of any particular Unix, it
should compile (and work) on all common Unix flavors.

   Various Wget versions have been compiled and tested under many kinds
of Unix systems, including Solaris, Linux, SunOS, OSF (aka Digital
Unix), Ultrix, *BSD, IRIX, and others; refer to the file `MACHINES' in
the distribution directory for a comprehensive list.  If you compile it
on an architecture not listed there, please let me know so I can update
it.

   Wget should also compile on the other Unix systems, not listed in
`MACHINES'.  If it doesn't, please let me know.

   Thanks to kind contributors, this version of Wget compiles and works
on Microsoft Windows 95 and Windows NT platforms.  It has been compiled
successfully using MS Visual C++ 4.0, Watcom, and Borland C compilers,
with Winsock as networking software.  Naturally, it lacks some of the
features available on Unix, but it should work as a substitute for
people stuck with Windows.  Note that the Windows port is *neither
tested nor maintained* by me--all questions and problems should be
reported to the Wget mailing list, where the maintainers will look at
them.


File: wget.info,  Node: Signals,  Prev: Portability,  Up: Various

Signals
=======

   Since the purpose of Wget is background work, it catches the hangup
signal (`SIGHUP') and ignores it.  If the output was on standard
output, it will be redirected to a file named `wget-log'.  Otherwise,
`SIGHUP' is ignored.  This is convenient when you wish to redirect the
output of Wget after having started it.

     $ wget http://www.ifi.uio.no/~larsi/gnus.tar.gz &
     $ kill -HUP %%     # Redirect the output to wget-log

   Other than that, Wget will not try to interfere with signals in any
way.  `C-c', `kill -TERM' and `kill -KILL' should kill it alike.


File: wget.info,  Node: Appendices,  Next: Copying,  Prev: Various,  Up: Top

Appendices
**********

   This chapter contains some references I consider useful, like the
Robots Exclusion Standard specification, as well as a list of
contributors to GNU Wget.

* Menu:

* Robots::                    Wget as a WWW robot.
* Security Considerations::   Security with Wget.
* Contributors::              People who helped.


File: wget.info,  Node: Robots,  Next: Security Considerations,  Prev: Appendices,  Up: Appendices

Robots
======

   Since Wget is able to traverse the web, it counts as one of the Web
"robots".  Thus Wget understands the "Robots Exclusion Standard"
(RES)--contents of `/robots.txt', used by server administrators to
shield parts of their systems from wanderings of Wget.

   Norobots support is turned on only when retrieving recursively, and
*never* for the first page.  Thus, you may issue:

     wget -r http://fly.cc.fer.hr/

   First the index of fly.cc.fer.hr will be downloaded.  If Wget finds
anything worth downloading on the same host, only *then* will it load
the robots, and decide whether or not to load the links after all.
`/robots.txt' is loaded only once per host.  Wget does not support the
robots `META' tag.

   The description of the norobots standard was written and is
maintained by Martijn Koster.  With his permission, I contribute a
(slightly modified) texified version of the RES.
* Menu:

* Introduction to RES::
* RES Format::
* User-Agent Field::
* Disallow Field::
* Norobots Examples::


File: wget.info,  Node: Introduction to RES,  Next: RES Format,  Prev: Robots,  Up: Robots

Introduction to RES
-------------------

   "WWW Robots" (also called "wanderers" or "spiders") are programs that
traverse many pages in the World Wide Web by recursively retrieving
linked pages.  For more information see the robots page.

   In 1993 and 1994 there were occasions where robots visited WWW
servers where they weren't welcome for various reasons.  Sometimes these
reasons were robot specific, e.g. certain robots swamped servers with
rapid-fire requests, or retrieved the same files repeatedly.  In other
situations robots traversed parts of WWW servers that weren't suitable,
e.g. very deep virtual trees, duplicated information, temporary
information, or cgi-scripts with side-effects (such as voting).

   These incidents indicated the need for established mechanisms for WWW
servers to indicate to robots which parts of their server should not be
accessed.  This standard addresses this need with an operational
solution.

   This document represents a consensus on 30 June 1994 on the robots
mailing list (`robots@webcrawler.com'), between the majority of robot
authors and other people with an interest in robots.  It has also been
open for discussion on the Technical World Wide Web mailing list
(`www-talk@info.cern.ch').  This document is based on a previous
working draft under the same title.

   It is not an official standard backed by a standards body, or owned
by any commercial organization.  It is not enforced by anybody, and
there is no guarantee that all current and future robots will use it.
Consider it a common facility the majority of robot authors offer the
WWW community to protect WWW servers against unwanted accesses by their
robots.

   The latest version of this document can be found at
`http://info.webcrawler.com/mak/projects/robots/norobots.html'.


File: wget.info,  Node: RES Format,  Next: User-Agent Field,  Prev: Introduction to RES,  Up: Robots

RES Format
----------

   The format and semantics of the `/robots.txt' file are as follows:

   The file consists of one or more records separated by one or more
blank lines (terminated by `CR', `CR/NL', or `NL').  Each record
contains lines of the form:

     <field>:<optionalspace><value><optionalspace>

   The field name is case insensitive.

   Comments can be included in the file using Unix Bourne shell
conventions: the `#' character is used to indicate that preceding space
(if any) and the remainder of the line up to the line termination is
discarded.  Lines containing only a comment are discarded completely,
and therefore do not indicate a record boundary.

   The record starts with one or more User-agent lines, followed by one
or more Disallow lines, as detailed below.  Unrecognized headers are
ignored.

   The presence of an empty `/robots.txt' file has no explicit
associated semantics; it will be treated as if it was not present,
i.e. all robots will consider themselves welcome.


File: wget.info,  Node: User-Agent Field,  Next: Disallow Field,  Prev: RES Format,  Up: Robots

User-Agent Field
----------------

   The value of this field is the name of the robot the record is
describing access policy for.

   If more than one User-agent field is present the record describes an
identical access policy for more than one robot.  At least one field
needs to be present per record.

   The robot should be liberal in interpreting this field.  A case
insensitive substring match of the name without version information is
recommended.
   If the value is `*', the record describes the default access policy
for any robot that has not matched any of the other records.  It is not
allowed to have multiple such records in the `/robots.txt' file.


File: wget.info,  Node: Disallow Field,  Next: Norobots Examples,  Prev: User-Agent Field,  Up: Robots

Disallow Field
--------------

   The value of this field specifies a partial URL that is not to be
visited.  This can be a full path, or a partial path; any URL that
starts with this value will not be retrieved.  For example, `Disallow:
/help' disallows both `/help.html' and `/help/index.html', whereas
`Disallow: /help/' would disallow `/help/index.html' but allow
`/help.html'.

   An empty value indicates that all URLs can be retrieved.  At least
one Disallow field needs to be present in a record.


File: wget.info,  Node: Norobots Examples,  Prev: Disallow Field,  Up: Robots

Norobots Examples
-----------------

   The following example `/robots.txt' file specifies that no robots
should visit any URL starting with `/cyberworld/map/' or `/tmp/':

     # robots.txt for http://www.site.com/

     User-agent: *
     Disallow: /cyberworld/map/ # This is an infinite virtual URL space
     Disallow: /tmp/ # these will soon disappear

   This example `/robots.txt' file specifies that no robots should
visit any URL starting with `/cyberworld/map/', except the robot called
`cybermapper':

     # robots.txt for http://www.site.com/

     User-agent: *
     Disallow: /cyberworld/map/ # This is an infinite virtual URL space

     # Cybermapper knows where to go.
     User-agent: cybermapper
     Disallow:

   This example indicates that no robots should visit this site further:

     # go away
     User-agent: *
     Disallow: /


File: wget.info,  Node: Security Considerations,  Next: Contributors,  Prev: Robots,  Up: Appendices

Security Considerations
=======================

   When using Wget, you must be aware that it sends unencrypted
passwords through the network, which may present a security problem.
Here are the main issues, and some solutions.

  1. The passwords on the command line are visible using `ps'.  If this
     is a problem, avoid putting passwords on the command line--e.g.
     you can use `.netrc' for this.

  2. Using the insecure "basic" authentication scheme, unencrypted
     passwords are transmitted through the network routers and
     gateways.

  3. The FTP passwords are also in no way encrypted.  There is no good
     solution for this at the moment.

  4. Although the "normal" output of Wget tries to hide the passwords,
     debugging logs show them, in all forms.  This problem is avoided by
     being careful when you send debug logs (yes, even when you send
     them to me).


File: wget.info,  Node: Contributors,  Prev: Security Considerations,  Up: Appendices

Contributors
============

   GNU Wget was written by Hrvoje Niksic.  However, its development
could never have gone as far as it has, were it not for the help of
many people, either with bug reports, feature proposals, patches, or
letters saying "Thanks!".

   Special thanks goes to the following people (no particular order):

   * Karsten Thygesen--donated the mailing list and the initial FTP
     space.

   * Shawn McHorse--bug reports and patches.

   * Kaveh R. Ghazi--on-the-fly `ansi2knr'-ization.

   * Gordon Matzigkeit--`.netrc' support.

   * Zlatko Calusic, Tomislav Vujec and Drazen Kacar--feature
     suggestions and "philosophical" discussions.

   * Darko Budor--initial port to Windows.

   * Antonio Rosella--help and suggestions, plus the Italian
     translation.

   * Tomislav Petrovic, Mario Mikocevic--many bug reports and
     suggestions.

   * Francois Pinard--many thorough bug reports and discussions.
   * Karl Eichwalder--lots of help with internationalization and other things.

   * Junio Hamano--donated support for Opie and HTTP `Digest' authentication.

   * Brian Gough--a generous donation.

The following people have provided patches, bug/build reports, useful suggestions, beta testing services, fan mail and all the other things that make maintenance so much fun: Tim Adam, Martin Baehr, Dieter Baron, Roger Beeman and the Gurus at Cisco, Mark Boyns, John Burden, Wanderlei Cavassin, Gilles Cedoc, Tim Charron, Noel Cragg, Kristijan Conkas, Damir Dzeko, Andrew Davison, Ulrich Drepper, Marc Duponcheel, Aleksandar Erkalovic, Andy Eskilsson, Masashi Fujita, Howard Gayle, Marcel Gerrits, Hans Grobler, Mathieu Guillaume, Karl Heuer, Gregor Hoffleit, Erik Magnus Hulthen, Richard Huveneers, Simon Josefsson, Mario Juric, Goran Kezunovic, Robert Kleine, Fila Kolodny, Alexander Kourakos, Martin Kraemer, Simos KSenitellis, Tage Stabell-Kulo, Hrvoje Lacko, Dave Love, Jordan Mendelson, Lin Zhe Min, Charlie Negyesi, Andrew Pollock, Steve Pothier, Marin Purgar, Jan Prikryl, Keith Refson, Tobias Ringstrom, Juan Jose Rodrigues, Heinz Salzmann, Robert Schmidt, Toomas Soome, Sven Sternberger, Markus Strasser, Szakacsits Szabolcs, Mike Thomas, Russell Vincent, Douglas E. Wegscheid, Jasmin Zainul, Bojan Zdrnja, Kristijan Zimmer.

Apologies to all whom I accidentally left out, and many thanks to all the subscribers of the Wget mailing list.

This is Info file wget.info, produced by Makeinfo version 1.67 from the input file ./wget.texi.

INFO-DIR-SECTION Net Utilities
INFO-DIR-SECTION World Wide Web
START-INFO-DIR-ENTRY
* Wget: (wget).         The non-interactive network downloader.
END-INFO-DIR-ENTRY

This file documents the GNU Wget utility for downloading network data.

Copyright (C) 1996, 1997, 1998 Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided also that the sections entitled "Copying" and "GNU General Public License" are included exactly as in the original, and provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.

File: wget.info,  Node: Copying,  Next: Concept Index,  Prev: Appendices,  Up: Top

GNU GENERAL PUBLIC LICENSE
**************************

                       Version 2, June 1991

     Copyright (C) 1989, 1991 Free Software Foundation, Inc.
     675 Mass Ave, Cambridge, MA 02139, USA

     Everyone is permitted to copy and distribute verbatim copies
     of this license document, but changing it is not allowed.

Preamble
========

The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price.
Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 1. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 2. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 3. 
You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a. You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b. You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c. If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 4. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a. Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b. Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c. Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. 
For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 5. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 6. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 7. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 8. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. 
Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 9. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 10. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 11. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 12. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 13. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 
END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs
=============================================

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

     ONE LINE TO GIVE THE PROGRAM'S NAME AND AN IDEA OF WHAT IT DOES.
     Copyright (C) 19YY NAME OF AUTHOR

     This program is free software; you can redistribute it and/or
     modify it under the terms of the GNU General Public License as
     published by the Free Software Foundation; either version 2 of
     the License, or (at your option) any later version.

     This program is distributed in the hope that it will be useful,
     but WITHOUT ANY WARRANTY; without even the implied warranty of
     MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
     GNU General Public License for more details.

     You should have received a copy of the GNU General Public License
     along with this program; if not, write to the Free Software
     Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this when it starts in an interactive mode:

     Gnomovision version 69, Copyright (C) 19YY NAME OF AUTHOR
     Gnomovision comes with ABSOLUTELY NO WARRANTY; for details
     type `show w'.  This is free software, and you are welcome
     to redistribute it under certain conditions; type `show c'
     for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:

     Yoyodyne, Inc., hereby disclaims all copyright interest in the
     program `Gnomovision' (which makes passes at compilers) written
     by James Hacker.

     SIGNATURE OF TY COON, 1 April 1989
     Ty Coon, President of Vice

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License.

File: wget.info,  Node: Concept Index,  Prev: Copying,  Up: Top

Concept Index
*************

* Menu:

* .netrc: Startup File.
* .wgetrc: Startup File.
* accept directories: Directory-Based Limits.
* accept suffixes: Types of Files.
* accept wildcards: Types of Files.
* all hosts: All Hosts.
* append to log: Logging and Input File Options.
* arguments: Invoking.
* authentication: HTTP Options.
* bug reports: Reporting Bugs.
* bugs: Reporting Bugs.
* cache: HTTP Options.
* command line: Invoking.
* Content-Length, ignore: HTTP Options.
* continue retrieval: Download Options.
* contributors: Contributors.
* conversion of links: Recursive Retrieval Options.
* copying: Copying.
* cut directories: Directory Options.
* debug: Logging and Input File Options.
* delete after retrieval: Recursive Retrieval Options.
* directories: Directory-Based Limits.
* directories, exclude: Directory-Based Limits.
* directories, include: Directory-Based Limits.
* directory limits: Directory-Based Limits.
* directory prefix: Directory Options.
* DNS lookup: Host Checking.
* dot style: Download Options.
* examples: Examples.
* exclude directories: Directory-Based Limits.
* execute wgetrc command: Basic Startup Options.
* features: Overview.
* filling proxy cache: Recursive Retrieval Options.
* follow FTP links: Recursive Accept/Reject Options.
* following ftp links: FTP Links.
* following links: Following Links.
* force html: Logging and Input File Options.
* ftp time-stamping: FTP Time-Stamping Internals.
* globbing, toggle: FTP Options.
* GPL: Copying.
* hangup: Signals.
* header, add: HTTP Options.
* host checking: Host Checking.
* host lookup: Host Checking.
* http password: HTTP Options.
* http time-stamping: HTTP Time-Stamping Internals.
* http user: HTTP Options.
* ignore length: HTTP Options.
* include directories: Directory-Based Limits.
* incremental updating: Time-Stamping.
* input-file: Logging and Input File Options.
* invoking: Invoking.
* latest version: Distribution.
* links: Following Links.
* links conversion: Recursive Retrieval Options.
* list: Mailing List.
* location of wgetrc: Wgetrc Location.
* log file: Logging and Input File Options.
* mailing list: Mailing List.
* mirroring: Guru Usage.
* no parent: Directory-Based Limits.
* no warranty: Copying.
* no-clobber: Download Options.
* nohup: Invoking.
* norobots disallow: Disallow Field.
* norobots examples: Norobots Examples.
* norobots format: RES Format.
* norobots introduction: Introduction to RES.
* norobots user-agent: User-Agent Field.
* number of retries: Download Options.
* operating systems: Portability.
* option syntax: Option Syntax.
* output file: Logging and Input File Options.
* overview: Overview.
* passive ftp: FTP Options.
* pause: Download Options.
* portability: Portability.
* proxies: Proxies.
* proxy <1>: Download Options.
* proxy: HTTP Options.
* proxy authentication: HTTP Options.
* proxy filling: Recursive Retrieval Options.
* proxy password: HTTP Options.
* proxy user: HTTP Options.
* quiet: Logging and Input File Options.
* quota: Download Options.
* recursion: Recursive Retrieval.
* recursive retrieval: Recursive Retrieval.
* redirecting output: Guru Usage.
* reject directories: Directory-Based Limits.
* reject suffixes: Types of Files.
* reject wildcards: Types of Files.
* relative links: Relative Links.
* reporting bugs: Reporting Bugs.
* retries: Download Options.
* retrieval tracing style: Download Options.
* retrieve symbolic links: FTP Options.
* retrieving: Recursive Retrieval.
* robots: Robots.
* robots.txt: Robots.
* sample wgetrc: Sample Wgetrc.
* security: Security Considerations.
* server maintenance: Robots.
* server response, print: Download Options.
* server response, save: HTTP Options.
* signal handling: Signals.
* span hosts: All Hosts.
* spider: Download Options.
* startup: Startup File.
* startup file: Startup File.
* suffixes, accept: Types of Files.
* suffixes, reject: Types of Files.
* syntax of options: Option Syntax.
* syntax of wgetrc: Wgetrc Syntax.
* time-stamping: Time-Stamping.
* time-stamping usage: Time-Stamping Usage.
* timeout: Download Options.
* timestamping: Time-Stamping.
* tries: Download Options.
* types of files: Types of Files.
* updating the archives: Time-Stamping.
* URL: URL Format.
* URL syntax: URL Format.
* usage, time-stamping: Time-Stamping Usage.
* user-agent: HTTP Options.
* various: Various.
* verbose: Logging and Input File Options.
* wait: Download Options.
* Wget as spider: Download Options.
* wgetrc: Startup File.
* wgetrc commands: Wgetrc Commands.
* wgetrc location: Wgetrc Location.
* wgetrc syntax: Wgetrc Syntax.
* wildcards, accept: Types of Files.
* wildcards, reject: Types of Files.

\input texinfo   @c -*-texinfo-*-
@c %**start of header
@setfilename wget.info
@settitle GNU Wget Manual
@c Disable the monstrous rectangles beside overfull hbox-es.
@finalout
@c Use `odd' to print double-sided.
@setchapternewpage on
@c %**end of header

@iftex
@c Remove this if you don't use A4 paper.
@afourpaper
@end iftex

@set VERSION 1.5.3
@set UPDATED Sep 1998

@dircategory Net Utilities
@dircategory World Wide Web
@direntry
* Wget: (wget).         The non-interactive network downloader.
@end direntry

@ifinfo
This file documents the GNU Wget utility for downloading network data.

Copyright (C) 1996, 1997, 1998 Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.

@ignore
Permission is granted to process this file through TeX and print the results, provided the printed document carries a copying permission notice identical to this one except for the removal of this paragraph (this paragraph not being relevant to the printed manual).
@end ignore

Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided also that the sections entitled ``Copying'' and ``GNU General Public License'' are included exactly as in the original, and provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.
@end ifinfo

@titlepage
@title GNU Wget
@subtitle The noninteractive downloading utility
@subtitle Updated for Wget @value{VERSION}, @value{UPDATED}
@author by Hrvoje Nik@v{s}i@'{c}

@page
@vskip 0pt plus 1filll
Copyright @copyright{} 1996, 1997, 1998 Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided also that the sections entitled ``Copying'' and ``GNU General Public License'' are included exactly as in the original, and provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions, except that this permission notice may be stated in a translation approved by the Free Software Foundation.
@end titlepage

@ifinfo
@node Top, Overview, (dir), (dir)
@top Wget @value{VERSION}

This manual documents version @value{VERSION} of GNU Wget, the freely available utility for network download.

Copyright @copyright{} 1996, 1997, 1998 Free Software Foundation, Inc.

@menu
* Overview::            Features of Wget.
* Invoking::            Wget command-line arguments.
* Recursive Retrieval:: Description of recursive retrieval.
* Following Links::     The available methods of chasing links.
* Time-Stamping::       Mirroring according to time-stamps.
* Startup File::        Wget's initialization file.
* Examples::            Examples of usage.
* Various::             The stuff that doesn't fit anywhere else.
* Appendices::          Some useful references.
* Copying::             You may give out copies of Wget.
* Concept Index::       Topics covered by this manual.
@end menu
@end ifinfo

@node Overview, Invoking, Top, Top
@chapter Overview
@cindex overview
@cindex features

GNU Wget is a freely available network utility to retrieve files from the World Wide Web, using @sc{http} (Hyper Text Transfer Protocol) and @sc{ftp} (File Transfer Protocol), the two most widely used Internet protocols. It has many useful features to make downloading easier, some of them being:

@itemize @bullet
@item
Wget is non-interactive, meaning that it can work in the background, while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting Wget finish the work. By contrast, most Web browsers require the user's constant presence, which can be a great hindrance when transferring a lot of data.
@sp 1
@item
Wget is capable of descending recursively through the structure of @sc{html} documents and @sc{ftp} directory trees, making a local copy of the directory hierarchy similar to the one on the remote server. This feature can be used to mirror archives and home pages, or traverse the web in search of data, like a @sc{www} robot (@xref{Robots}). In that spirit, Wget understands the @code{norobots} convention.
@sp 1
@item
File name wildcard matching and recursive mirroring of directories are available when retrieving via @sc{ftp}. Wget can read the time-stamp information given by both @sc{http} and @sc{ftp} servers, and store it locally. Thus Wget can see if the remote file has changed since the last retrieval, and automatically retrieve the new version if it has. This makes Wget suitable for mirroring of @sc{ftp} sites, as well as home pages.
@sp 1
@item
Wget works exceedingly well on slow or unstable connections, retrying the document until it is fully retrieved, or until a user-specified retry count is surpassed. It will try to resume the download from the point of interruption, using @code{REST} with @sc{ftp} and @code{Range} with @sc{http} servers that support them.
@sp 1
@item
By default, Wget supports proxy servers, which can lighten the network load, speed up retrieval and provide access behind firewalls. However, if you are behind a firewall that requires a socks-style gateway, you can get the socks library and build Wget with support for socks. Wget also supports passive @sc{ftp} downloading as an option.
@sp 1
@item
Built-in features offer mechanisms to tune which links you wish to follow (@xref{Following Links}).
@sp 1
@item
The retrieval is conveniently traced by printing dots, each dot representing a fixed amount of data received (1KB by default). These representations can be customized to your preferences.
@sp 1
@item
Most of the features are fully configurable, either through command-line options, or via the initialization file @file{.wgetrc} (@xref{Startup File}). Wget allows you to define @dfn{global} startup files (@file{/usr/local/etc/wgetrc} by default) for site settings.
@sp 1
@item
Finally, GNU Wget is free software. This means that everyone may use it, redistribute it and/or modify it under the terms of the GNU General Public License, as published by the Free Software Foundation (@xref{Copying}).
@end itemize

@node Invoking, Recursive Retrieval, Overview, Top
@chapter Invoking
@cindex invoking
@cindex command line
@cindex arguments
@cindex nohup

By default, Wget is very simple to invoke. The basic syntax is:

@example
wget [@var{option}]@dots{} [@var{URL}]@dots{}
@end example

Wget will simply download all the @sc{url}s specified on the command line. @var{URL} is a @dfn{Uniform Resource Locator}, as defined below.

However, you may wish to change some of the default parameters of Wget. You can do it in two ways: permanently, by adding the appropriate command to @file{.wgetrc} (@xref{Startup File}), or by specifying it on the command line.

@menu
* URL Format::
* Option Syntax::
* Basic Startup Options::
* Logging and Input File Options::
* Download Options::
* Directory Options::
* HTTP Options::
* FTP Options::
* Recursive Retrieval Options::
* Recursive Accept/Reject Options::
@end menu

@node URL Format, Option Syntax, Invoking, Invoking
@section URL Format
@cindex URL
@cindex URL syntax

@dfn{URL} is an acronym for Uniform Resource Locator. A uniform resource locator is a compact string representation for a resource available via the Internet. Wget recognizes the @sc{url} syntax as per @sc{rfc1738}. This is the most widely used form (square brackets denote optional parts):

@example
http://host[:port]/directory/file
ftp://host[:port]/directory/file
@end example

You can also encode your username and password within a @sc{url}:

@example
ftp://user:password@@host/path
http://user:password@@host/path
@end example

Either @var{user} or @var{password}, or both, may be left out. If you leave out either the @sc{http} username or password, no authentication will be sent. If you leave out the @sc{ftp} username, @samp{anonymous} will be used. If you leave out the @sc{ftp} password, your email address will be supplied as a default password.@footnote{If you have a @file{.netrc} file in your home directory, the password will also be searched for there.}

You can encode unsafe characters in a @sc{url} as @samp{%xy}, @code{xy} being the hexadecimal representation of the character's @sc{ascii} value. Some common unsafe characters include @samp{%} (quoted as @samp{%25}), @samp{:} (quoted as @samp{%3A}), and @samp{@@} (quoted as @samp{%40}). Refer to @sc{rfc1738} for a comprehensive list of unsafe characters.

Wget also supports the @code{type} feature for @sc{ftp} @sc{url}s. By default, @sc{ftp} documents are retrieved in binary mode (type @samp{i}), which means that they are downloaded unchanged. Another useful mode is the @samp{a} (@dfn{ASCII}) mode, which converts the line delimiters between the different operating systems, and is thus useful for text files. Here is an example:

@example
ftp://host/directory/file;type=a
@end example

Two alternative variants of @sc{url} specification are also supported, because of historical (hysterical?) reasons and their widespread use. @sc{ftp}-only syntax (supported by @code{NcFTP}):

@example
host:/dir/file
@end example

@sc{http}-only syntax (introduced by @code{Netscape}):

@example
host[:port]/dir/file
@end example

These two alternative forms are deprecated, and may cease being supported in the future. If you do not understand the difference between these notations, or do not know which one to use, just use the plain ordinary format you use with your favorite browser, like @code{Lynx} or @code{Netscape}.
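To see the @samp{%xy} rule in action, here is a tiny Python sketch (an illustration added for clarity, independent of Wget's own code):

@example
from urllib.parse import quote

# Unsafe characters are encoded as %xy, xy being the hex
# representation of the character's ASCII value.
print(quote("100%", safe=""))   # prints 100%25
print(quote(":",    safe=""))   # prints %3A
print(quote(" ",    safe=""))   # prints %20
@end example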
@node Option Syntax, Basic Startup Options, URL Format, Invoking
@section Option Syntax
@cindex option syntax
@cindex syntax of options

Since Wget uses GNU getopt to process its arguments, every option has a short form and a long form. Long options are more convenient to remember, but take time to type. You may freely mix different option styles, or specify options after the command-line arguments. Thus you may write:

@example
wget -r --tries=10 http://fly.cc.fer.hr/ -o log
@end example

The space between the option accepting an argument and the argument may be omitted. Instead of @samp{-o log} you can write @samp{-olog}.

You may put several options that do not require arguments together, like:

@example
wget -drc @var{URL}
@end example

This is completely equivalent to:

@example
wget -d -r -c @var{URL}
@end example

Since the options can be specified after the arguments, you may terminate them with @samp{--}. So the following will try to download @sc{url} @samp{-x}, reporting failure to @file{log}:

@example
wget -o log -- -x
@end example

The options that accept comma-separated lists all respect the convention that specifying an empty list clears its value. This can be useful to clear the @file{.wgetrc} settings. For instance, if your @file{.wgetrc} sets @code{exclude_directories} to @file{/cgi-bin}, the following example will first reset it, and then set it to exclude @file{/~nobody} and @file{/~somebody}. You can also clear the lists in @file{.wgetrc} (@xref{Wgetrc Syntax}).

@example
wget -X '' -X /~nobody,/~somebody
@end example

@node Basic Startup Options, Logging and Input File Options, Option Syntax, Invoking
@section Basic Startup Options

@table @samp
@item -V
@itemx --version
Display the version of Wget.

@item -h
@itemx --help
Print a help message describing all of Wget's command-line options.

@item -b
@itemx --background
Go to background immediately after startup. If no output file is specified via @samp{-o}, output is redirected to @file{wget-log}.

@cindex execute wgetrc command
@item -e @var{command}
@itemx --execute @var{command}
Execute @var{command} as if it were a part of @file{.wgetrc} (@xref{Startup File}). A command thus invoked will be executed @emph{after} the commands in @file{.wgetrc}, thus taking precedence over them.
@end table

@node Logging and Input File Options, Download Options, Basic Startup Options, Invoking
@section Logging and Input File Options

@table @samp
@cindex output file
@cindex log file
@item -o @var{logfile}
@itemx --output-file=@var{logfile}
Log all messages to @var{logfile}. The messages are normally reported to standard error.

@cindex append to log
@item -a @var{logfile}
@itemx --append-output=@var{logfile}
Append to @var{logfile}. This is the same as @samp{-o}, only it appends to @var{logfile} instead of overwriting the old log file. If @var{logfile} does not exist, a new file is created.

@cindex debug
@item -d
@itemx --debug
Turn on debug output, meaning various information important to the developers of Wget if it does not work properly. Your system administrator may have chosen to compile Wget without debug support, in which case @samp{-d} will not work. Please note that compiling with debug support is always safe---Wget compiled with the debug support will @emph{not} print any debug info unless requested with @samp{-d}. @xref{Reporting Bugs} for more information on how to use @samp{-d} for sending bug reports.

@cindex quiet
@item -q
@itemx --quiet
Turn off Wget's output.
@cindex verbose
@item -v
@itemx --verbose
Turn on verbose output, with all the available data. The default output is verbose.

@item -nv
@itemx --non-verbose
Non-verbose output---turn off verbose without being completely quiet (use @samp{-q} for that), which means that error messages and basic information still get printed.

@cindex input-file
@item -i @var{file}
@itemx --input-file=@var{file}
Read @sc{url}s from @var{file}, in which case no @sc{url}s need to be on the command line. If there are @sc{url}s both on the command line and in an input file, those on the command line will be the first ones to be retrieved. The @var{file} need not be an @sc{html} document (but no harm if it is)---it is enough if the @sc{url}s are just listed sequentially.

However, if you specify @samp{--force-html}, the document will be regarded as @samp{html}. In that case you may have problems with relative links, which you can solve either by adding @code{<base href="@var{url}">} to the documents or by specifying @samp{--base=@var{url}} on the command line.

@cindex force html
@item -F
@itemx --force-html
When input is read from a file, force it to be treated as an @sc{html} file. This enables you to retrieve relative links from existing @sc{html} files on your local disk, by adding @code{<base href="@var{url}">} to the @sc{html}, or using the @samp{--base} command-line option.
@end table

@node Download Options, Directory Options, Logging and Input File Options, Invoking
@section Download Options

@table @samp
@cindex retries
@cindex tries
@cindex number of retries
@item -t @var{number}
@itemx --tries=@var{number}
Set number of retries to @var{number}. Specify 0 or @samp{inf} for infinite retrying.

@item -O @var{file}
@itemx --output-document=@var{file}
The documents will not be written to the appropriate files, but all will be concatenated together and written to @var{file}. If @var{file} already exists, it will be overwritten. If the @var{file} is @samp{-}, the documents will be written to standard output. Including this option automatically sets the number of tries to 1.

@cindex no-clobber
@item -nc
@itemx --no-clobber
Do not clobber existing files when saving to a directory hierarchy within recursive retrieval of several files. This option is @emph{extremely} useful when you wish to continue where you left off with retrieval of many files. If the files have the @samp{.html} or (yuck) @samp{.htm} suffix, they will be loaded from the local disk, and parsed as if they had been retrieved from the Web.

@cindex continue retrieval
@item -c
@itemx --continue
Continue getting an existing file. This is useful when you want to finish up a download started by another program, or a previous instance of Wget. Thus you can write:

@example
wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
@end example

If there is a file named @file{ls-lR.Z} in the current directory, Wget will assume that it is the first portion of the remote file, and will request the server to continue the retrieval from an offset equal to the length of the local file.

Note that you need not specify this option if all you want is Wget to continue retrieving where it left off when the connection is lost---Wget does this by default. You need this option only when you want to continue retrieval of a file already halfway retrieved, saved by another @sc{ftp} client, or left by Wget being killed.

Without @samp{-c}, the previous example would just begin to download the remote file to @file{ls-lR.Z.1}. The @samp{-c} option is also applicable for @sc{http} servers that support the @code{Range} header.
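For instance (an illustration of the mechanism, not a verbatim Wget trace): if the local @file{ls-lR.Z} is 12345 bytes long, the continued @sc{http} retrieval amounts to a request such as:

@example
GET /ls-lR.Z HTTP/1.0
Range: bytes=12345-
@end example

A server that honors @code{Range} replies with the bytes from offset 12345 onward, which Wget appends to the local file.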
@cindex dot style
@cindex retrieval tracing style
@item --dot-style=@var{style}
Set the retrieval style to @var{style}. Wget traces the retrieval of each document by printing dots on the screen, each dot representing a fixed amount of retrieved data. Any number of dots may be separated in a @dfn{cluster}, to make counting easier. This option allows you to choose one of the pre-defined styles, determining the number of bytes represented by a dot, the number of dots in a cluster, and the number of dots on the line.

With the @code{default} style each dot represents 1K, there are ten dots in a cluster and 50 dots in a line. The @code{binary} style has a more ``computer''-like orientation---8K dots, 16-dot clusters and 48 dots per line (which makes for 384K lines). The @code{mega} style is suitable for downloading very large files---each dot represents 64K retrieved, there are eight dots in a cluster, and 48 dots on each line (so each line contains 3M). The @code{micro} style is exactly the reverse; it is suitable for downloading small files, with 128-byte dots, 8 dots per cluster, and 48 dots (6K) per line.

@item -N
@itemx --timestamping
Turn on time-stamping. @xref{Time-Stamping} for details.

@cindex server response, print
@item -S
@itemx --server-response
Print the headers sent by @sc{http} servers and responses sent by @sc{ftp} servers.

@cindex Wget as spider
@cindex spider
@item --spider
When invoked with this option, Wget will behave as a Web @dfn{spider}, which means that it will not download the pages, just check that they are there. You can use it to check your bookmarks, e.g. with:

@example
wget --spider --force-html -i bookmarks.html
@end example

This feature needs much more work for Wget to get close to the functionality of real @sc{www} spiders.

@cindex timeout
@item -T @var{seconds}
@itemx --timeout=@var{seconds}
Set the read timeout to @var{seconds} seconds. Whenever a network read is issued, the file descriptor is checked for a timeout, which could otherwise leave a pending connection (uninterrupted read). The default timeout is 900 seconds (fifteen minutes). Setting the timeout to 0 will disable checking for timeouts. Please do not lower the default timeout value with this option unless you know what you are doing.

@cindex pause
@cindex wait
@item -w @var{seconds}
@itemx --wait=@var{seconds}
Wait the specified number of seconds between the retrievals. Use of this option is recommended, as it lightens the server load by making the requests less frequent. Instead of in seconds, the time can be specified in minutes using the @code{m} suffix, in hours using the @code{h} suffix, or in days using the @code{d} suffix.

Specifying a large value for this option is useful if the network or the destination host is down, so that Wget can wait long enough to reasonably expect the network error to be fixed before the retry.

@cindex proxy
@item -Y on/off
@itemx --proxy=on/off
Turn proxy support on or off. The proxy is on by default if the appropriate environment variable is defined.

@cindex quota
@item -Q @var{quota}
@itemx --quota=@var{quota}
Specify download quota for automatic retrievals. The value can be specified in bytes (default), kilobytes (with the @samp{k} suffix), or megabytes (with the @samp{m} suffix).

Note that the quota will never affect downloading a single file. So if you specify @samp{wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz}, all of @file{ls-lR.gz} will be downloaded. The same goes even when several @sc{url}s are specified on the command line.
However, the quota is respected when retrieving either recursively, or from an input file. Thus you may safely type @samp{wget -Q2m -i sites}---the download will be aborted when the quota is exceeded.

Setting the quota to 0 or to @samp{inf} unlimits the download quota.
@end table

@node Directory Options, HTTP Options, Download Options, Invoking
@section Directory Options

@table @samp
@item -nd
@itemx --no-directories
Do not create a hierarchy of directories when retrieving recursively. With this option turned on, all files will get saved to the current directory, without clobbering (if a name shows up more than once, the filenames will get extensions @samp{.n}).

@item -x
@itemx --force-directories
The opposite of @samp{-nd}---create a hierarchy of directories, even if one would not have been created otherwise. E.g. @samp{wget -x http://fly.cc.fer.hr/robots.txt} will save the downloaded file to @file{fly.cc.fer.hr/robots.txt}.

@item -nH
@itemx --no-host-directories
Disable generation of host-prefixed directories. By default, invoking Wget with @samp{-r http://fly.cc.fer.hr/} will create a structure of directories beginning with @file{fly.cc.fer.hr/}. This option disables such behavior.

@cindex cut directories
@item --cut-dirs=@var{number}
Ignore @var{number} directory components. This is useful for getting fine-grained control over the directory where recursive retrieval will be saved.

Take, for example, the directory at @samp{ftp://ftp.xemacs.org/pub/xemacs/}. If you retrieve it with @samp{-r}, it will be saved locally under @file{ftp.xemacs.org/pub/xemacs/}. While the @samp{-nH} option can remove the @file{ftp.xemacs.org/} part, you are still stuck with @file{pub/xemacs}. This is where @samp{--cut-dirs} comes in handy; it makes Wget not ``see'' @var{number} remote directory components. Here are several examples of how the @samp{--cut-dirs} option works.

@example
@group
No options        -> ftp.xemacs.org/pub/xemacs/
-nH               -> pub/xemacs/
-nH --cut-dirs=1  -> xemacs/
-nH --cut-dirs=2  -> .

--cut-dirs=1      -> ftp.xemacs.org/xemacs/
...
@end group
@end example

If you just want to get rid of the directory structure, this option is similar to a combination of @samp{-nd} and @samp{-P}. However, unlike @samp{-nd}, @samp{--cut-dirs} does not lose subdirectories---for instance, with @samp{-nH --cut-dirs=1}, a @file{beta/} subdirectory will be placed to @file{xemacs/beta}, as one would expect.

@cindex directory prefix
@item -P @var{prefix}
@itemx --directory-prefix=@var{prefix}
Set directory prefix to @var{prefix}. The @dfn{directory prefix} is the directory where all other files and subdirectories will be saved to, i.e. the top of the retrieval tree. The default is @samp{.} (the current directory).
@end table

@node HTTP Options, FTP Options, Directory Options, Invoking
@section HTTP Options

@table @samp
@cindex http user
@cindex http password
@cindex authentication
@item --http-user=@var{user}
@itemx --http-passwd=@var{password}
Specify the username @var{user} and password @var{password} on an @sc{http} server. According to the type of the challenge, Wget will encode them using either the @code{basic} (insecure) or the @code{digest} authentication scheme.

Another way to specify username and password is in the @sc{url} itself (@xref{URL Format}). For more information about security issues with Wget, @xref{Security Considerations}.

@cindex proxy
@cindex cache
@item -C on/off
@itemx --cache=on/off
When set to off, disable server-side cache.
In this case, Wget will send the remote server an appropriate directive (@samp{Pragma: no-cache}) to get the file from the remote service, rather than returning the cached version. This is especially useful for retrieving and flushing out-of-date documents on proxy servers.

Caching is allowed by default.

@cindex Content-Length, ignore
@cindex ignore length
@item --ignore-length
Unfortunately, some @sc{http} servers (@sc{cgi} programs, to be more precise) send out bogus @code{Content-Length} headers, which makes Wget go wild, as it thinks not all the document was retrieved. You can spot this syndrome if Wget retries getting the same document again and again, each time claiming that the (otherwise normal) connection has closed on the very same byte.

With this option, Wget will ignore the @code{Content-Length} header---as if it never existed.

@cindex header, add
@item --header=@var{additional-header}
Define an @var{additional-header} to be passed to the @sc{http} servers. Headers must contain a @samp{:} preceded by one or more non-blank characters, and must not contain newlines.

You may define more than one additional header by specifying @samp{--header} more than once.

@example
@group
wget --header='Accept-Charset: iso-8859-2' \
     --header='Accept-Language: hr'        \
       http://fly.cc.fer.hr/
@end group
@end example

Specification of an empty string as the header value will clear all previous user-defined headers.

@cindex proxy user
@cindex proxy password
@cindex proxy authentication
@item --proxy-user=@var{user}
@itemx --proxy-passwd=@var{password}
Specify the username @var{user} and password @var{password} for authentication on a proxy server. Wget will encode them using the @code{basic} authentication scheme.

@cindex server response, save
@item -s
@itemx --save-headers
Save the headers sent by the @sc{http} server to the file, preceding the actual contents, with an empty line as the separator.

@cindex user-agent
@item -U @var{agent-string}
@itemx --user-agent=@var{agent-string}
Identify as @var{agent-string} to the @sc{http} server.

The @sc{http} protocol allows the clients to identify themselves using a @code{User-Agent} header field. This enables distinguishing the @sc{www} software, usually for statistical purposes or for tracing of protocol violations. Wget normally identifies as @samp{Wget/@var{version}}, @var{version} being the current version number of Wget.

However, some sites have been known to impose the policy of tailoring the output according to the @code{User-Agent}-supplied information. While conceptually this is not such a bad idea, it has been abused by servers denying information to clients other than @code{Mozilla} or Microsoft @code{Internet Explorer}. This option allows you to change the @code{User-Agent} line issued by Wget. Use of this option is discouraged, unless you really know what you are doing.

@strong{NOTE} that Netscape Communications Corp. has claimed that false transmissions of @samp{Mozilla} as the @code{User-Agent} are a copyright infringement, which will be prosecuted. @strong{DO NOT} misrepresent Wget as Mozilla.
@end table

@node FTP Options, Recursive Retrieval Options, HTTP Options, Invoking
@section FTP Options

@table @samp
@cindex retrieve symbolic links
@item --retr-symlinks
Retrieve symbolic links on @sc{ftp} sites as if they were plain files, i.e. don't just create links locally.

@cindex globbing, toggle
@item -g on/off
@itemx --glob=on/off
Turn @sc{ftp} globbing on or off.
Globbing means you may use the shell-like special characters (@dfn{wildcards}), like @samp{*}, @samp{?}, @samp{[} and @samp{]} to retrieve more than one file from the same directory at once, like:

@example
wget ftp://gnjilux.cc.fer.hr/*.msg
@end example

By default, globbing will be turned on if the @sc{url} contains a globbing character. This option may be used to turn globbing on or off permanently. You may have to quote the @sc{url} to protect it from being expanded by your shell. Globbing makes Wget look for a directory listing, which is system-specific. This is why it currently works only with Unix @sc{ftp} servers (and the ones emulating Unix @code{ls} output).

@cindex passive ftp
@item --passive-ftp
Use the @dfn{passive} @sc{ftp} retrieval scheme, in which the client initiates the data connection. This is sometimes required for @sc{ftp} to work behind firewalls.
@end table

@node Recursive Retrieval Options, Recursive Accept/Reject Options, FTP Options, Invoking
@section Recursive Retrieval Options

@table @samp
@item -r
@itemx --recursive
Turn on recursive retrieving. @xref{Recursive Retrieval} for more details.

@item -l @var{depth}
@itemx --level=@var{depth}
Specify recursion maximum depth level @var{depth} (@xref{Recursive Retrieval}). The default maximum depth is 5.

@cindex proxy filling
@cindex delete after retrieval
@cindex filling proxy cache
@item --delete-after
This option tells Wget to delete every single file it downloads, @emph{after} having done so. It is useful for pre-fetching popular pages through a proxy, e.g.:

@example
wget -r -nd --delete-after http://whatever.com/~popular/page/
@end example

The @samp{-r} option is to retrieve recursively, and @samp{-nd} not to create directories.

@cindex conversion of links
@cindex links conversion
@item -k
@itemx --convert-links
Convert the non-relative links to relative ones locally. Only the references to the documents actually downloaded will be converted; the rest will be left unchanged.

Note that only at the end of the download can Wget know which links have been downloaded. Because of that, much of the work done by @samp{-k} will be performed at the end of the downloads.

@item -m
@itemx --mirror
Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite recursion depth and keeps @sc{ftp} directory listings. It is currently equivalent to @samp{-r -N -l inf -nr}.

@item -nr
@itemx --dont-remove-listing
Don't remove the temporary @file{.listing} files generated by @sc{ftp} retrievals. Normally, these files contain the raw directory listings received from @sc{ftp} servers. Not removing them can be useful to access the full remote file list when running a mirror, or for debugging purposes.
@end table

@node Recursive Accept/Reject Options,  , Recursive Retrieval Options, Invoking
@section Recursive Accept/Reject Options

@table @samp
@item -A @var{acclist} --accept @var{acclist}
@itemx -R @var{rejlist} --reject @var{rejlist}
Specify comma-separated lists of file name suffixes or patterns to accept or reject (@xref{Types of Files} for more details).

@item -D @var{domain-list}
@itemx --domains=@var{domain-list}
Set domains to be accepted and @sc{dns} looked-up, where @var{domain-list} is a comma-separated list. Note that it does @emph{not} turn on @samp{-H}. This option speeds things up, even if only one host is spanned (@xref{Domain Acceptance}).
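To give a rough idea of the comparison this implies (a simplified Python sketch, not Wget's actual code), accepting a host against @var{domain-list} amounts to a suffix check:

@example
def host_accepted(host, domains):
    # Accept `fly.cc.fer.hr' for lists such as `fer.hr,srce.hr'
    # or `.hr'; a plain case-insensitive suffix check.
    host = host.lower()
    return any(host.endswith(d.lower()) for d in domains)

host_accepted("fly.cc.fer.hr", ["mit.edu", ".hr"])   # True
@end example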
@item --exclude-domains @var{domain-list}
Exclude the domains given in a comma-separated @var{domain-list} from @sc{dns}-lookup (@xref{Domain Acceptance}).

@item -L
@itemx --relative
Follow relative links only. Useful for retrieving a specific home page without any distractions, not even those from the same hosts (@xref{Relative Links}).

@cindex follow FTP links
@item --follow-ftp
Follow @sc{ftp} links from @sc{html} documents. Without this option, Wget will ignore all the @sc{ftp} links.

@item -H
@itemx --span-hosts
Enable spanning across hosts when doing recursive retrieving (@xref{All Hosts}).

@item -I @var{list}
@itemx --include-directories=@var{list}
Specify a comma-separated list of directories you wish to follow when downloading (@xref{Directory-Based Limits} for more details.) Elements of @var{list} may contain wildcards.

@item -X @var{list}
@itemx --exclude-directories=@var{list}
Specify a comma-separated list of directories you wish to exclude from download (@xref{Directory-Based Limits} for more details.) Elements of @var{list} may contain wildcards.

@item -nh
@itemx --no-host-lookup
Disable the time-consuming @sc{dns} lookup of almost all hosts (@xref{Host Checking}).

@item -np
@itemx --no-parent
Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files @emph{below} a certain hierarchy will be downloaded. @xref{Directory-Based Limits} for more details.
@end table

@node Recursive Retrieval, Following Links, Invoking, Top
@chapter Recursive Retrieval
@cindex recursion
@cindex retrieving
@cindex recursive retrieval

GNU Wget is capable of traversing parts of the Web (or a single @sc{http} or @sc{ftp} server), depth-first following links and directory structure. This is called @dfn{recursive} retrieving, or @dfn{recursion}.

With @sc{http} @sc{url}s, Wget retrieves and parses the @sc{html} from the given @sc{url}, retrieving the files the @sc{html} document refers to, through markup like @code{href} or @code{src}. If the freshly downloaded file is also of type @code{text/html}, it will be parsed and followed further.

The maximum @dfn{depth} to which the retrieval may descend is specified with the @samp{-l} option (the default maximum depth is five layers). @xref{Recursive Retrieval}.

When retrieving an @sc{ftp} @sc{url} recursively, Wget will retrieve all the data from the given directory tree (including the subdirectories up to the specified depth) on the remote server, creating its mirror image locally. @sc{ftp} retrieval is also limited by the @code{depth} parameter. By default, Wget will create a local directory tree corresponding to the one found on the remote server.

Recursive retrieving has a number of applications, the most important of which is mirroring. It is also useful for @sc{www} presentations, and any other circumstances where slow network connections should be bypassed by storing the files locally.

You should be warned that invoking recursion may cause grave overloading on your system, because of the fast exchange of data through the network; all of this may hamper other users' work. The same stands for the foreign server you are mirroring---the more requests it gets in a row, the greater is its load. Careless retrieving can also fill your file system uncontrollably, which can grind the machine to a halt.

The load can be minimized by lowering the maximum recursion level (@samp{-l}) and/or by lowering the number of retries (@samp{-t}).
You may also consider using the @samp{-w} option to slow down your requests to the remote servers, as well as the numerous options to narrow the number of followed links (@pxref{Following Links}).

Recursive retrieval is a good thing when used properly.  Please take all precautions not to wreak havoc through carelessness.

@node Following Links, Time-Stamping, Recursive Retrieval, Top
@chapter Following Links
@cindex links
@cindex following links

When retrieving recursively, one does not wish to retrieve loads of unnecessary data.  Most of the time users know exactly what they want to download, and want Wget to follow only specific links.

For example, if you wish to download the music archive from @samp{fly.cc.fer.hr}, you will not want to download all the home pages that happen to be referenced by an obscure part of the archive.

Wget possesses several mechanisms that allow you to fine-tune which links it will follow.

@menu
* Relative Links::          Follow relative links only.
* Host Checking::           Follow links on the same host.
* Domain Acceptance::       Check on a list of domains.
* All Hosts::               No host restrictions.
* Types of Files::          Getting only certain files.
* Directory-Based Limits::  Getting only certain directories.
* FTP Links::               Following FTP links.
@end menu

@node Relative Links, Host Checking, Following Links, Following Links
@section Relative Links
@cindex relative links

When only relative links are followed (option @samp{-L}), recursive retrieving will never span hosts.  No time-expensive @sc{dns}-lookups will be performed, and the process will be very fast, with the minimum strain on the network.  This will often suit your needs, especially when mirroring the output of various @code{x2html} converters, since they generally output relative links.

@node Host Checking, Domain Acceptance, Relative Links, Following Links
@section Host Checking
@cindex DNS lookup
@cindex host lookup
@cindex host checking

The drawback of following only the relative links is that humans often tend to mix them with absolute links to the very same host, and the very same page.  In this mode (which is the default mode for following links) all @sc{url}s that refer to the same host will be retrieved.

The problem with this option is host and domain aliasing.  There is no way for Wget to know that @samp{regoc.srce.hr} and @samp{www.srce.hr} are the same host, or that @samp{fly.cc.fer.hr} is the same as @samp{fly.cc.etf.hr}.  Whenever an absolute link is encountered, the host is @sc{dns}-looked-up with @code{gethostbyname} to check whether we are maybe dealing with the same host.  Although the results of @code{gethostbyname} are cached, it is still a great slowdown, e.g. when dealing with large indices of home pages on different hosts (because each of the hosts must be @sc{dns}-resolved to see whether it just @emph{might} be an alias of the starting host).

To avoid the overhead you may use @samp{-nh}, which will turn off @sc{dns}-resolving and make Wget compare hosts literally.  This will make things run much faster, but also much less reliable (e.g. @samp{www.srce.hr} and @samp{regoc.srce.hr} will be flagged as different hosts).

Note that modern @sc{http} servers allow one IP address to host several @dfn{virtual servers}, each having its own directory hierarchy.  Such ``servers'' are distinguished by their hostnames (all of which point to the same IP address); for this to work, a client must send a @code{Host} header, which is what Wget does.
However, in that case Wget @emph{must not} try to divine a host's ``real'' address, nor try to use the same hostname for each access, i.e. @samp{-nh} must be turned on.

In other words, the @samp{-nh} option must be used to enable retrieval from virtual servers distinguished by their hostnames.  As the number of such server setups grows, the behavior of @samp{-nh} may become the default in the future.

@node Domain Acceptance, All Hosts, Host Checking, Following Links
@section Domain Acceptance

With the @samp{-D} option you may specify the domains that will be followed.  Hosts whose domain is not in this list will not be @sc{dns}-resolved.  Thus you can specify @samp{-Dmit.edu} just to make sure that @strong{nothing outside of @sc{mit} gets looked up}.  This is very important and useful.  It also means that @samp{-D} does @emph{not} imply @samp{-H} (span all hosts), which must be specified explicitly.  Feel free to use this option since it will speed things up, with almost all the reliability of checking for all hosts.  Thus you could invoke

@example
wget -r -D.hr http://fly.cc.fer.hr/
@end example

to make sure that only the hosts in the @samp{.hr} domain get @sc{dns}-looked-up for being equal to @samp{fly.cc.fer.hr}.  So @samp{fly.cc.etf.hr} will be checked (only once!) and found equal, but @samp{www.gnu.ai.mit.edu} will not even be checked.

Of course, domain acceptance can be used to limit the retrieval to particular domains with spanning of hosts in them, but then you must specify @samp{-H} explicitly.  E.g.:

@example
wget -r -H -Dmit.edu,stanford.edu http://www.mit.edu/
@end example

will start with @samp{http://www.mit.edu/}, following links across @sc{mit} and Stanford.

If there are domains you want to exclude specifically, you can do it with @samp{--exclude-domains}, which accepts the same type of arguments as @samp{-D}, but will @emph{exclude} all the listed domains.  For example, if you want to download all the hosts from the @samp{foo.edu} domain, with the exception of @samp{sunsite.foo.edu}, you can do it like this:

@example
wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu http://www.foo.edu/
@end example

@node All Hosts, Types of Files, Domain Acceptance, Following Links
@section All Hosts
@cindex all hosts
@cindex span hosts

When @samp{-H} is specified without @samp{-D}, all hosts are freely spanned.  There are no restrictions whatsoever as to what part of the net Wget will go to fetch documents, other than maximum retrieval depth.  If a page references @samp{www.yahoo.com}, so be it.  Such an option is rarely useful by itself.

@node Types of Files, Directory-Based Limits, All Hosts, Following Links
@section Types of Files
@cindex types of files

When downloading material from the web, you will often want to restrict the retrieval to only certain file types.  For example, if you are interested in downloading @sc{gif}s, you will not be overjoyed to get loads of PostScript documents, and vice versa.

Wget offers two options to deal with this problem.  Each option description lists a short name, a long name, and the equivalent command in @file{.wgetrc}.

@cindex accept wildcards
@cindex accept suffixes
@cindex wildcards, accept
@cindex suffixes, accept
@table @samp
@item -A @var{acclist}
@itemx --accept @var{acclist}
@itemx accept = @var{acclist}
The argument to the @samp{--accept} option is a list of file suffixes or patterns that Wget will download during recursive retrieval.  A suffix is the ending part of a file name, and consists of ``normal'' letters, e.g. @samp{gif} or @samp{.jpg}.
A matching pattern contains shell-like wildcards, e.g. @samp{books*} or @samp{zelazny*196[0-9]*}.

So, specifying @samp{wget -A gif,jpg} will make Wget download only the files ending with @samp{gif} or @samp{jpg}, i.e. @sc{gif}s and @sc{jpeg}s.  On the other hand, @samp{wget -A "zelazny*196[0-9]*"} will download only files beginning with @samp{zelazny} and containing numbers from 1960 to 1969 anywhere within.  Look up the manual of your shell for a description of how pattern matching works.

Of course, any number of suffixes and patterns can be combined into a comma-separated list, and given as an argument to @samp{-A}.

@cindex reject wildcards
@cindex reject suffixes
@cindex wildcards, reject
@cindex suffixes, reject
@item -R @var{rejlist}
@itemx --reject @var{rejlist}
@itemx reject = @var{rejlist}
The @samp{--reject} option works the same way as @samp{--accept}, only its logic is the reverse; Wget will download all files @emph{except} the ones matching the suffixes (or patterns) in the list.

So, if you want to download a whole page except for the cumbersome @sc{mpeg}s and @sc{.au} files, you can use @samp{wget -R mpg,mpeg,au}.  Analogously, to download all files except the ones beginning with @samp{bjork}, use @samp{wget -R "bjork*"}.  The quotes are to prevent expansion by the shell.
@end table

The @samp{-A} and @samp{-R} options may be combined to achieve even better fine-tuning of which files to retrieve.  E.g. @samp{wget -A "*zelazny*" -R .ps} will download all the files having @samp{zelazny} as a part of their name, but @emph{not} the PostScript files.

Note that these two options do not affect the downloading of @sc{html} files; Wget must load all the @sc{html}s to know where to go at all---recursive retrieval would make no sense otherwise.

@node Directory-Based Limits, FTP Links, Types of Files, Following Links
@section Directory-Based Limits
@cindex directories
@cindex directory limits

Regardless of other link-following facilities, it is often useful to place restrictions on which files to retrieve based on the directories those files are placed in.  There can be many reasons for this---the home pages may be organized in a reasonable directory structure; or some directories may contain useless information, e.g. the @file{/cgi-bin} or @file{/dev} directories.

Wget offers three different options to deal with this requirement.  Each option description lists a short name, a long name, and the equivalent command in @file{.wgetrc}.

@cindex directories, include
@cindex include directories
@cindex accept directories
@table @samp
@item -I @var{list}
@itemx --include @var{list}
@itemx include_directories = @var{list}
The @samp{-I} option accepts a comma-separated list of directories included in the retrieval.  Any other directories will simply be ignored.  The directories are absolute paths.

So, if you wish to download from @samp{http://host/people/bozo/} following only links to bozo's colleagues in the @file{/people} directory and the bogus scripts in @file{/cgi-bin}, you can specify:

@example
wget -I /people,/cgi-bin http://host/people/bozo/
@end example

@cindex directories, exclude
@cindex exclude directories
@cindex reject directories
@item -X @var{list}
@itemx --exclude @var{list}
@itemx exclude_directories = @var{list}
The @samp{-X} option is exactly the reverse of @samp{-I}---this is a list of directories @emph{excluded} from the download.  E.g. if you do not want Wget to download things from the @file{/cgi-bin} directory, specify @samp{-X /cgi-bin} on the command line.
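For instance, excluding several directories at once might look like this (the host name is illustrative):

@example
wget -r -X /cgi-bin,/dev http://host/
@end example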
The same as with @samp{-A}/@samp{-R}, these two options can be combined to get a better fine-tuning of downloading subdirectories.  E.g. if you want to load all the files from the @file{/pub} hierarchy except for @file{/pub/worthless}, specify @samp{-I/pub -X/pub/worthless}.

@cindex no parent
@item -np
@itemx --no-parent
@itemx no_parent = on
The simplest, and often very useful, way of limiting directories is disallowing retrieval of the links that refer to the hierarchy @dfn{above} the beginning directory, i.e. disallowing ascent to the parent directory/directories.

The @samp{--no-parent} option (short @samp{-np}) is useful in this case.  Using it guarantees that you will never leave the existing hierarchy.  Supposing you issue Wget with:

@example
wget -r --no-parent http://somehost/~luzer/my-archive/
@end example

You may rest assured that none of the references to @file{/~his-girls-homepage/} or @file{/~luzer/all-my-mpegs/} will be followed.  Only the archive you are interested in will be downloaded.  Essentially, @samp{--no-parent} is similar to @samp{-I/~luzer/my-archive}, only it handles redirections in a more intelligent fashion.
@end table

@node FTP Links,  , Directory-Based Limits, Following Links
@section Following FTP Links
@cindex following ftp links

The rules for @sc{ftp} are somewhat specific, as it is necessary for them to be.  @sc{ftp} links in @sc{html} documents are often included for purposes of reference, and it is often inconvenient to download them by default.

To have @sc{ftp} links followed from @sc{html} documents, you need to specify the @samp{--follow-ftp} option.  Having done that, @sc{ftp} links will span hosts regardless of the @samp{-H} setting.  This is logical, as @sc{ftp} links rarely point to the same host where the @sc{http} server resides.  For similar reasons, the @samp{-L} option has no effect on such downloads.  On the other hand, domain acceptance (@samp{-D}) and suffix rules (@samp{-A} and @samp{-R}) apply normally.

Also note that followed links to @sc{ftp} directories will not be retrieved recursively further.

@node Time-Stamping, Startup File, Following Links, Top
@chapter Time-Stamping
@cindex time-stamping
@cindex timestamping
@cindex updating the archives
@cindex incremental updating

One of the most important aspects of mirroring information from the Internet is updating your archives.

Downloading the whole archive again and again, just to replace a few changed files is expensive, both in terms of wasted bandwidth and money, and the time to do the update.  This is why all the mirroring tools offer the option of incremental updating.

Such an updating mechanism means that the remote server is scanned in search of @dfn{new} files.  Only those new files will be downloaded in the place of the old ones.

A file is considered new if one of these two conditions is met:

@enumerate
@item
A file of that name does not already exist locally.

@item
A file of that name does exist, but the remote file was modified more recently than the local file.
@end enumerate

To implement this, the program needs to be aware of the time of last modification of both remote and local files.  Such information is called the @dfn{time-stamp} of a file.

The time-stamping in GNU Wget is turned on using the @samp{--timestamping} (@samp{-N}) option, or through the @code{timestamping = on} directive in @file{.wgetrc}.  With this option, for each file it intends to download, Wget will check whether a local file of the same name exists.  If it does, and the remote file is older, Wget will not download it.
If the local file does not exist, or the sizes of the files do not match, Wget will download the remote file no matter what the time-stamps say.

@menu
* Time-Stamping Usage::
* HTTP Time-Stamping Internals::
* FTP Time-Stamping Internals::
@end menu

@node Time-Stamping Usage, HTTP Time-Stamping Internals, Time-Stamping, Time-Stamping
@section Time-Stamping Usage
@cindex time-stamping usage
@cindex usage, time-stamping

The usage of time-stamping is simple.  Say you would like to download a file so that it keeps its date of modification.

@example
wget -S http://www.gnu.ai.mit.edu/
@end example

A simple @code{ls -l} shows that the time stamp on the local file matches the state of the @code{Last-Modified} header, as returned by the server.  As you can see, the time-stamping info is preserved locally, even without @samp{-N}.

Several days later, you would like Wget to check if the remote file has changed, and download it if it has.

@example
wget -N http://www.gnu.ai.mit.edu/
@end example

Wget will ask the server for the last-modified date.  If the local file is newer, the remote file will not be re-fetched.  However, if the remote file is more recent, Wget will proceed to fetch it normally.

The same goes for @sc{ftp}.  For example:

@example
wget ftp://ftp.ifi.uio.no/pub/emacs/gnus/*
@end example

@code{ls} will show that the timestamps are set according to the state on the remote server.  Reissuing the command with @samp{-N} will make Wget re-fetch @emph{only} the files that have been modified.

In both @sc{http} and @sc{ftp} retrieval Wget will time-stamp the local file correctly (with or without @samp{-N}) if it gets the stamps, i.e. gets the directory listing for @sc{ftp} or the @code{Last-Modified} header for @sc{http}.

If you wished to mirror the GNU archive, you would use the following command every week:

@example
wget --timestamping -r ftp://prep.ai.mit.edu/pub/gnu/
@end example

@node HTTP Time-Stamping Internals, FTP Time-Stamping Internals, Time-Stamping Usage, Time-Stamping
@section HTTP Time-Stamping Internals
@cindex http time-stamping

Time-stamping in @sc{http} is implemented by checking the @code{Last-Modified} header.  If you wish to retrieve the file @file{foo.html} through @sc{http}, Wget will check whether @file{foo.html} exists locally.  If it doesn't, @file{foo.html} will be retrieved unconditionally.

If the file does exist locally, Wget will first check its local time-stamp (similar to the way @code{ls -l} checks it), and then send a @code{HEAD} request to the remote server, requesting the information on the remote file.

The @code{Last-Modified} header is examined to find which file was modified more recently (which makes it ``newer'').  If the remote file is newer, it will be downloaded; if it is older, Wget will give up.@footnote{As an additional check, Wget will look at the @code{Content-Length} header, and compare the sizes; if they are not the same, the remote file will be downloaded no matter what the time-stamp says.}

Arguably, @sc{http} time-stamping should be implemented using the @code{If-Modified-Since} request.

@node FTP Time-Stamping Internals,  , HTTP Time-Stamping Internals, Time-Stamping
@section FTP Time-Stamping Internals
@cindex ftp time-stamping

In theory, @sc{ftp} time-stamping works much the same as @sc{http}, only @sc{ftp} has no headers---time-stamps must be received from the directory listings.

For each directory that files must be retrieved from, Wget will use the @code{LIST} command to get the listing.
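A line of such a listing typically looks something like this (an illustrative sample, not actual server output):

@example
-rw-r--r--   1 ftp      ftp          1340 May  5  1998 welcome.msg
@end example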
It will try to analyze the listing, assuming that it is a Unix @code{ls -l} listing, and extract the time-stamps.  The rest is exactly the same as for @sc{http}.

The assumption that every directory listing is a Unix-style listing may sound extremely constraining, but in practice it is not, as many non-Unix @sc{ftp} servers use the Unixoid listing format because most (all?) of the clients understand it.  Bear in mind that @sc{rfc959} defines no standard way to get a file list, let alone the time-stamps.  We can only hope that a future standard will define this.

Another non-standard solution includes the use of the @code{MDTM} command that is supported by some @sc{ftp} servers (including the popular @code{wu-ftpd}), which returns the exact time of the specified file.  Wget may support this command in the future.

@node Startup File, Examples, Time-Stamping, Top
@chapter Startup File
@cindex startup file
@cindex wgetrc
@cindex .wgetrc
@cindex startup
@cindex .netrc

Once you know how to change default settings of Wget through command line arguments, you may wish to make some of those settings permanent.  You can do that in a convenient way by creating the Wget startup file---@file{.wgetrc}.

Besides @file{.wgetrc} being the ``main'' initialization file, it is convenient to have a special facility for storing passwords.  Thus Wget reads and interprets the contents of @file{$HOME/.netrc}, if it finds it.  You can find the @file{.netrc} format in your system manuals.

Wget reads @file{.wgetrc} upon startup, recognizing a limited set of commands.

@menu
* Wgetrc Location::   Location of various wgetrc files.
* Wgetrc Syntax::     Syntax of wgetrc.
* Wgetrc Commands::   List of available commands.
* Sample Wgetrc::     A wgetrc example.
@end menu

@node Wgetrc Location, Wgetrc Syntax, Startup File, Startup File
@section Wgetrc Location
@cindex wgetrc location
@cindex location of wgetrc

When initializing, Wget will look for a @dfn{global} startup file, @file{/usr/local/etc/wgetrc} by default (or some prefix other than @file{/usr/local}, if Wget was not installed there) and read commands from there, if it exists.

Then it will look for the user's file.  If the environment variable @code{WGETRC} is set, Wget will try to load that file.  Failing that, no further attempts will be made.

If @code{WGETRC} is not set, Wget will try to load @file{$HOME/.wgetrc}.

The fact that the user's settings are loaded after the system-wide ones means that in case of collision the user's wgetrc @emph{overrides} the system-wide wgetrc (in @file{/usr/local/etc/wgetrc} by default).  Fascist admins, away!

@node Wgetrc Syntax, Wgetrc Commands, Wgetrc Location, Startup File
@section Wgetrc Syntax
@cindex wgetrc syntax
@cindex syntax of wgetrc

The syntax of a wgetrc command is simple:

@example
variable = value
@end example

The @dfn{variable} will also be called @dfn{command}.  Valid @dfn{values} are different for different commands.

The commands are case-insensitive and underscore-insensitive.  Thus @samp{DIr__PrefiX} is the same as @samp{dirprefix}.  Empty lines, lines beginning with @samp{#} and lines containing white-space only are discarded.

Commands that expect a comma-separated list will clear the list on an empty command.  So, if you wish to reset the rejection list specified in the global @file{wgetrc}, you can do it with:

@example
reject =
@end example

@node Wgetrc Commands, Sample Wgetrc, Wgetrc Syntax, Startup File
@section Wgetrc Commands
@cindex wgetrc commands

The complete set of commands is listed below, the letter after @samp{=} denoting the value the command takes.
It is @samp{on/off} for @samp{on} or @samp{off} (which can also be @samp{1} or @samp{0}), @var{string} for any non-empty string or @var{n} for a positive integer.  For example, you may specify @samp{use_proxy = off} to disable use of proxy servers by default.  You may use @samp{inf} for infinite values, where appropriate.

Most of the commands have their equivalent command-line option (@pxref{Invoking}), except some more obscure or rarely used ones.

@table @asis
@item accept/reject = @var{string}
Same as @samp{-A}/@samp{-R} (@pxref{Types of Files}).

@item add_hostdir = on/off
Enable/disable host-prefixed file names.  @samp{-nH} disables it.

@item continue = on/off
Enable/disable continuation of the retrieval, the same as @samp{-c} (which enables it).

@item background = on/off
Enable/disable going to background, the same as @samp{-b} (which enables it).

@c @item backups = @var{number}
@c #### Document me!
@item base = @var{string}
Set base for relative @sc{url}s, the same as @samp{-B}.

@item cache = on/off
When set to off, disallow server-caching.  See the @samp{-C} option.

@item convert_links = on/off
Convert non-relative links locally.  The same as @samp{-k}.

@item cut_dirs = @var{n}
Ignore @var{n} remote directory components.

@item debug = on/off
Debug mode, same as @samp{-d}.

@item delete_after = on/off
Delete after download, the same as @samp{--delete-after}.

@item dir_prefix = @var{string}
Top of directory tree, the same as @samp{-P}.

@item dirstruct = on/off
Turning dirstruct on or off, the same as @samp{-x} or @samp{-nd}, respectively.

@item domains = @var{string}
Same as @samp{-D} (@pxref{Domain Acceptance}).

@item dot_bytes = @var{n}
Specify the number of bytes ``contained'' in a dot, as seen throughout the retrieval (1024 by default).  You can postfix the value with @samp{k} or @samp{m}, representing kilobytes and megabytes, respectively.  With dot settings you can tailor the dot retrieval to suit your needs, or you can use the predefined @dfn{styles} (@pxref{Download Options}).

@item dots_in_line = @var{n}
Specify the number of dots that will be printed in each line throughout the retrieval (50 by default).

@item dot_spacing = @var{n}
Specify the number of dots in a single cluster (10 by default).

@item dot_style = @var{string}
Specify the dot retrieval @dfn{style}, as with @samp{--dot-style}.

@item exclude_directories = @var{string}
Specify a comma-separated list of directories you wish to exclude from download, the same as @samp{-X} (@pxref{Directory-Based Limits}).

@item exclude_domains = @var{string}
Same as @samp{--exclude-domains} (@pxref{Domain Acceptance}).

@item follow_ftp = on/off
Follow @sc{ftp} links from @sc{html} documents, the same as @samp{--follow-ftp}.

@item force_html = on/off
If set to on, force the input filename to be regarded as an @sc{html} document, the same as @samp{-F}.

@item ftp_proxy = @var{string}
Use @var{string} as @sc{ftp} proxy, instead of the one specified in environment.

@item glob = on/off
Turn globbing on/off, the same as @samp{-g}.

@item header = @var{string}
Define an additional header, like @samp{--header}.

@item http_passwd = @var{string}
Set @sc{http} password.

@item http_proxy = @var{string}
Use @var{string} as @sc{http} proxy, instead of the one specified in environment.

@item http_user = @var{string}
Set @sc{http} user to @var{string}.

@item ignore_length = on/off
When set to on, ignore the @code{Content-Length} header; the same as @samp{--ignore-length}.
@item include_directories = @var{string}
Specify a comma-separated list of directories you wish to follow when downloading, the same as @samp{-I}.

@item input = @var{string}
Read the @sc{url}s from @var{string}, like @samp{-i}.

@item kill_longer = on/off
Consider data longer than specified in the @code{Content-Length} header as invalid (and retry getting it).  The default behaviour is to save as much data as there is, provided it is not less than the value in @code{Content-Length}.

@item logfile = @var{string}
Set logfile, the same as @samp{-o}.

@item login = @var{string}
Your user name on the remote machine, for @sc{ftp}.  Defaults to @samp{anonymous}.

@item mirror = on/off
Turn mirroring on/off.  The same as @samp{-m}.

@item netrc = on/off
Turn reading netrc on or off.

@item noclobber = on/off
Same as @samp{-nc}.

@item no_parent = on/off
Disallow retrieving outside the directory hierarchy, like @samp{--no-parent} (@pxref{Directory-Based Limits}).

@item no_proxy = @var{string}
Use @var{string} as the comma-separated list of domains to avoid in proxy loading, instead of the one specified in environment.

@item output_document = @var{string}
Set the output filename, the same as @samp{-O}.

@item passive_ftp = on/off
Set passive @sc{ftp}, the same as @samp{--passive-ftp}.

@item passwd = @var{string}
Set your @sc{ftp} password to @var{string}.  Without this setting, the password defaults to @samp{username@@hostname.domainname}.

@item proxy_user = @var{string}
Set proxy authentication user name to @var{string}, like @samp{--proxy-user}.

@item proxy_passwd = @var{string}
Set proxy authentication password to @var{string}, like @samp{--proxy-passwd}.

@item quiet = on/off
Quiet mode, the same as @samp{-q}.

@item quota = @var{quota}
Specify the download quota, which is useful to put in the global wgetrc.  When download quota is specified, Wget will stop retrieving after the download sum has become greater than quota.  The quota can be specified in bytes (default), kbytes (@samp{k} appended) or mbytes (@samp{m} appended).  Thus @samp{quota = 5m} will set the quota to 5 mbytes.  Note that the user's startup file overrides system settings.

@item reclevel = @var{n}
Recursion level, the same as @samp{-l}.

@item recursive = on/off
Recursive on/off, the same as @samp{-r}.

@item relative_only = on/off
Follow only relative links, the same as @samp{-L} (@pxref{Relative Links}).

@item remove_listing = on/off
If set to on, remove @sc{ftp} listings downloaded by Wget.  Setting it to off is the same as @samp{-nr}.

@item retr_symlinks = on/off
When set to on, retrieve symbolic links as if they were plain files; the same as @samp{--retr-symlinks}.

@item robots = on/off
Use (or not) the @file{/robots.txt} file (@pxref{Robots}).  Be sure to know what you are doing before changing the default (which is @samp{on}).

@item server_response = on/off
Choose whether or not to print the @sc{http} and @sc{ftp} server responses, the same as @samp{-S}.

@item simple_host_check = on/off
Same as @samp{-nh} (@pxref{Host Checking}).

@item span_hosts = on/off
Same as @samp{-H}.

@item timeout = @var{n}
Set timeout value, the same as @samp{-T}.

@item timestamping = on/off
Turn timestamping on/off.  The same as @samp{-N} (@pxref{Time-Stamping}).

@item tries = @var{n}
Set number of retries per @sc{url}, the same as @samp{-t}.

@item use_proxy = on/off
Turn proxy support on/off.  The same as @samp{-Y}.

@item verbose = on/off
Turn verbose on/off, the same as @samp{-v}/@samp{-nv}.

@item wait = @var{n}
Wait @var{n} seconds between retrievals, the same as @samp{-w}.
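For example, a line like the following in @file{.wgetrc} would make Wget pause ten seconds between retrievals (an illustrative setting):

@example
wait = 10
@end example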
@end table

@node Sample Wgetrc,  , Wgetrc Commands, Startup File
@section Sample Wgetrc
@cindex sample wgetrc

This is the sample initialization file, as given in the distribution.  It is divided in two sections---one for global usage (suitable for the global startup file), and one for local usage (suitable for @file{$HOME/.wgetrc}).  Be careful about the things you change.

Note that all the lines are commented out.  For any line to have effect, you must remove the @samp{#} prefix at the beginning of the line.

@example
###
### Sample Wget initialization file .wgetrc
###

## You can use this file to change the default behaviour of wget or to
## avoid having to type many many command-line options. This file does
## not contain a comprehensive list of commands -- look at the manual
## to find out what you can put into this file.
##
## Wget initialization file can reside in /usr/local/etc/wgetrc
## (global, for all users) or $HOME/.wgetrc (for a single user).
##
## To use any of the settings in this file, you will have to uncomment
## them (and probably change them).

##
## Global settings (useful for setting up in /usr/local/etc/wgetrc).
## Think well before you change them, since they may reduce wget's
## functionality, and make it behave contrary to the documentation:
##

# You can set retrieve quota for beginners by specifying a value
# optionally followed by 'K' (kilobytes) or 'M' (megabytes).  The
# default quota is unlimited.
#quota = inf

# You can lower (or raise) the default number of retries when
# downloading a file (default is 20).
#tries = 20

# Lowering the maximum depth of the recursive retrieval is handy to
# prevent newbies from going too "deep" when they unwittingly start
# the recursive retrieval.  The default is 5.
#reclevel = 5

# Many sites are behind firewalls that do not allow initiation of
# connections from the outside.  On these sites you have to use the
# `passive' feature of FTP.  If you are behind such a firewall, you
# can turn this on to make Wget use passive FTP by default.
#passive_ftp = off

##
## Local settings (for a user to set in his $HOME/.wgetrc).  It is
## *highly* undesirable to put these settings in the global file, since
## they are potentially dangerous to "normal" users.
##
## Even when setting up your own ~/.wgetrc, you should know what you
## are doing before doing so.
##

# Set this to on to use timestamping by default:
#timestamping = off

# It is a good idea to make Wget send your email address in a `From:'
# header with your request (so that server administrators can contact
# you in case of errors).  Wget does *not* send `From:' by default.
#header = From: Your Name <username@@site.domain>

# You can set up other headers, like Accept-Language.  Accept-Language
# is *not* sent by default.
#header = Accept-Language: en

# You can set the default proxy for Wget to use.  It will override the
# value in the environment.
#http_proxy = http://proxy.yoyodyne.com:18023/

# If you do not want to use proxy at all, set this to off.
#use_proxy = on

# You can customize the retrieval outlook.  Valid options are default,
# binary, mega and micro.
#dot_style = default

# Setting this to off makes Wget not download /robots.txt.  Be sure to
# know *exactly* what /robots.txt is and how it is used before changing
# the default!
#robots = on

# It can be useful to make Wget wait between connections.  Set this to
# the number of seconds you want Wget to wait.
#wait = 0

# You can force creating directory structure, even if a single file is
# being retrieved, by setting this to on.
#dirstruct = off

# You can turn on recursive retrieving by default (don't do this if
# you are not sure you know what it means) by setting this to on.
#recursive = off

# To have Wget follow FTP links from HTML files by default, set this
# to on:
#follow_ftp = off
@end example

@node Examples, Various, Startup File, Top
@chapter Examples
@cindex examples

The examples are divided into three sections, for the sake of clarity.  The first section is a tutorial for beginners.  The second section explains some of the more complex program features.  The third section contains advice for mirror administrators, as well as even more complex features (that some would call perverted).

@menu
* Simple Usage::     Simple, basic usage of the program.
* Advanced Usage::   Advanced techniques of usage.
* Guru Usage::       Mirroring and the hairy stuff.
@end menu

@node Simple Usage, Advanced Usage, Examples, Examples
@section Simple Usage

@itemize @bullet
@item
Say you want to download a @sc{url}.  Just type:

@example
wget http://fly.cc.fer.hr/
@end example

The response will be something like:

@example
@group
--13:30:45--  http://fly.cc.fer.hr:80/en/
           => `index.html'
Connecting to fly.cc.fer.hr:80... connected!
HTTP request sent, awaiting response... 200 OK
Length: 4,694 [text/html]

    0K -> ....                                                   [100%]

13:30:46 (23.75 KB/s) - `index.html' saved [4694/4694]
@end group
@end example

@item
But what will happen if the connection is slow, and the file is lengthy?  The connection will probably fail before the whole file is retrieved, more than once.  In this case, Wget will try getting the file until it either gets the whole of it, or exceeds the default number of retries (this being 20).  It is easy to change the number of tries to 45, to ensure that the whole file will arrive safely:

@example
wget --tries=45 http://fly.cc.fer.hr/jpg/flyweb.jpg
@end example

@item
Now let's leave Wget to work in the background, and write its progress to the log file @file{log}.  It is tiring to type @samp{--tries}, so we shall use @samp{-t}.

@example
wget -t 45 -o log http://fly.cc.fer.hr/jpg/flyweb.jpg &
@end example

The ampersand at the end of the line makes sure that Wget works in the background.  To remove the limit on the number of retries, use @samp{-t inf}.

@item
The usage of @sc{ftp} is just as simple.  Wget will take care of the login and password.

@example
@group
$ wget ftp://gnjilux.cc.fer.hr/welcome.msg
--10:08:47--  ftp://gnjilux.cc.fer.hr:21/welcome.msg
           => `welcome.msg'
Connecting to gnjilux.cc.fer.hr:21... connected!
Logging in as anonymous ... Logged in!
==> TYPE I ... done.  ==> CWD not needed.
==> PORT ... done.    ==> RETR welcome.msg ... done.
Length: 1,340 (unauthoritative)

    0K -> .                                                      [100%]

10:08:48 (1.28 MB/s) - `welcome.msg' saved [1340]
@end group
@end example

@item
If you specify a directory, Wget will retrieve the directory listing, parse it and convert it to @sc{html}.  Try:

@example
wget ftp://prep.ai.mit.edu/pub/gnu/
lynx index.html
@end example
@end itemize

@node Advanced Usage, Guru Usage, Simple Usage, Examples
@section Advanced Usage

@itemize @bullet
@item
You would like to read the list of @sc{url}s from a file?  Not a problem:

@example
wget -i file
@end example

If you specify @samp{-} as file name, the @sc{url}s will be read from standard input.
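For instance, a @sc{url} produced by another command can be piped straight in (an illustrative pipeline):

@example
echo 'http://fly.cc.fer.hr/' | wget -i -
@end example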
@item
Create a mirror image of the GNU @sc{www} site (with the same directory structure the original has) with only one try per document, saving the log of the activities to @file{gnulog}:

@example
wget -r -t1 http://www.gnu.ai.mit.edu/ -o gnulog
@end example

@item
Retrieve the first layer of yahoo links:

@example
wget -r -l1 http://www.yahoo.com/
@end example

@item
Retrieve the @file{index.html} of @samp{www.lycos.com}, showing the original server headers:

@example
wget -S http://www.lycos.com/
@end example

@item
Save the server headers with the file:

@example
wget -s http://www.lycos.com/
more index.html
@end example

@item
Retrieve the first two levels of @samp{wuarchive.wustl.edu}, saving them to @file{/tmp}.

@example
wget -P/tmp -l2 ftp://wuarchive.wustl.edu/
@end example

@item
You want to download all the @sc{gif}s from an @sc{http} directory.  @samp{wget http://host/dir/*.gif} doesn't work, since @sc{http} retrieval does not support globbing.  In that case, use:

@example
wget -r -l1 --no-parent -A.gif http://host/dir/
@end example

It is a bit of a kludge, but it works.  @samp{-r -l1} means to retrieve recursively (@pxref{Recursive Retrieval}), with a maximum depth of 1.  @samp{--no-parent} means that references to the parent directory are ignored (@pxref{Directory-Based Limits}), and @samp{-A.gif} means to download only the @sc{gif} files.  @samp{-A "*.gif"} would have worked too.

@item
Suppose you were in the middle of downloading, when Wget was interrupted.  Now you do not want to clobber the files already present.  It would be:

@example
wget -nc -r http://www.gnu.ai.mit.edu/
@end example

@item
If you want to encode your own username and password to @sc{http} or @sc{ftp}, use the appropriate @sc{url} syntax (@pxref{URL Format}).

@example
wget ftp://hniksic:mypassword@@jagor.srce.hr/.emacs
@end example

@item
If you do not like the default retrieval visualization (1K dots with 10 dots per cluster and 50 dots per line), you can customize it through dot settings (@pxref{Wgetrc Commands}).  For example, many people like the ``binary'' style of retrieval, with 8K dots and 512K lines:

@example
wget --dot-style=binary ftp://prep.ai.mit.edu/pub/gnu/README
@end example

You can experiment with other styles, like:

@example
wget --dot-style=mega ftp://ftp.xemacs.org/pub/xemacs/xemacs-20.4/xemacs-20.4.tar.gz
wget --dot-style=micro http://fly.cc.fer.hr/
@end example

To make these settings permanent, put them in your @file{.wgetrc}, as described before (@pxref{Sample Wgetrc}).
@end itemize

@node Guru Usage,  , Advanced Usage, Examples
@section Guru Usage
@cindex mirroring

@itemize @bullet
@item
If you wish Wget to keep a mirror of a page (or @sc{ftp} subdirectories), use @samp{--mirror} (@samp{-m}), which is the shorthand for @samp{-r -N -l inf -nr}.  You can put Wget in the crontab file asking it to recheck a site each Sunday:

@example
crontab
0 0 * * 0 wget --mirror ftp://ftp.xemacs.org/pub/xemacs/ -o /home/me/weeklog
@end example

@item
You may wish to do the same with someone's home page.  But you do not want to download all those images---you're only interested in @sc{html}.

@example
wget --mirror -A.html http://www.w3.org/
@end example

@item
But what about mirroring the hosts networkologically close to you?  It seems so awfully slow because of all that @sc{dns} resolving.  Just use @samp{-D} (@pxref{Domain Acceptance}).
@example
wget -rN -Dsrce.hr http://www.srce.hr/
@end example

Now Wget will correctly find out that @samp{regoc.srce.hr} is the same as @samp{www.srce.hr}, but will not even take into consideration the link to @samp{www.mit.edu}.

@item
You have a presentation and would like the dumb absolute links to be converted to relative?  Use @samp{-k}:

@example
wget -k -r @var{URL}
@end example

@cindex redirecting output
@item
You would like the output documents to go to standard output instead of to files?  OK, but Wget will automatically shut up (turn on @samp{--quiet}) to prevent mixing of Wget output and the retrieved documents.

@example
wget -O - http://jagor.srce.hr/ http://www.srce.hr/
@end example

You can also combine the two options and make weird pipelines to retrieve the documents from remote hotlists:

@example
wget -O - http://cool.list.com/ | wget --force-html -i -
@end example
@end itemize

@node Various, Appendices, Examples, Top
@chapter Various
@cindex various

This chapter contains all the stuff that could not fit anywhere else.

@menu
* Proxies::         Support for proxy servers
* Distribution::    Getting the latest version.
* Mailing List::    Wget mailing list for announcements and discussion.
* Reporting Bugs::  How and where to report bugs.
* Portability::     The systems Wget works on.
* Signals::         Signal-handling performed by Wget.
@end menu

@node Proxies, Distribution, Various, Various
@section Proxies
@cindex proxies

@dfn{Proxies} are special-purpose @sc{http} servers designed to transfer data from remote servers to local clients.  One typical use of proxies is lightening network load for users behind a slow connection.  This is achieved by channeling all @sc{http} and @sc{ftp} requests through the proxy, which caches the transferred data.  When a cached resource is requested again, the proxy will return the data from its cache.  Another use for proxies is for companies that separate (for security reasons) their internal networks from the rest of the Internet.  In order to obtain information from the Web, their users connect and retrieve remote data using an authorized proxy.

Wget supports proxies for both @sc{http} and @sc{ftp} retrievals.  The standard way to specify proxy location, which Wget recognizes, is using the following environment variables:

@table @code
@item http_proxy
This variable should contain the @sc{url} of the proxy for @sc{http} connections.

@item ftp_proxy
This variable should contain the @sc{url} of the proxy for @sc{ftp} connections.  It is quite common that @code{http_proxy} and @code{ftp_proxy} are set to the same @sc{url}.

@item no_proxy
This variable should contain a comma-separated list of domain extensions the proxy should @emph{not} be used for.  For instance, if the value of @code{no_proxy} is @samp{.mit.edu}, the proxy will not be used to retrieve documents from MIT.
@end table

In addition to the environment variables, proxy location and settings may be specified from within Wget itself.

@table @samp
@item -Y on/off
@itemx --proxy=on/off
@itemx proxy = on/off
This option may be used to turn the proxy support on or off.  Proxy support is on by default, provided that the appropriate environment variables are set.

@item http_proxy = @var{URL}
@itemx ftp_proxy = @var{URL}
@itemx no_proxy = @var{string}
These startup file variables allow you to override the proxy settings specified by the environment.
@end table

Some proxy servers require authorization to enable you to use them.  The authorization consists of a @dfn{username} and a @dfn{password}, which must be sent by Wget.
As with @sc{http} authorization, several authentication schemes exist.  For proxy authorization only the @code{Basic} authentication scheme is currently implemented.

You may specify your username and password either through the proxy @sc{url} or through the command-line options.  Assuming that the company's proxy is located at @samp{proxy.company.com} at port 8001, a proxy @sc{url} location containing authorization data might look like this:

@example
http://hniksic:mypassword@@proxy.company.com:8001/
@end example

Alternatively, you may use the @samp{--proxy-user} and @samp{--proxy-passwd} options, and the equivalent @file{.wgetrc} settings @code{proxy_user} and @code{proxy_passwd}, to set the proxy username and password.

@node Distribution, Mailing List, Proxies, Various
@section Distribution
@cindex latest version

Like all GNU utilities, the latest version of Wget can be found at the master GNU archive site prep.ai.mit.edu, and its mirrors.  For example, Wget @value{VERSION} can be found at @url{ftp://prep.ai.mit.edu/pub/gnu/wget-@value{VERSION}.tar.gz}

@node Mailing List, Reporting Bugs, Distribution, Various
@section Mailing List
@cindex mailing list
@cindex list

Wget has its own mailing list at @email{wget@@sunsite.auc.dk}, thanks to Karsten Thygesen.  The mailing list is for discussion of Wget features and the Web, for reporting Wget bugs (those that you think may be of interest to the public), and for mailing announcements.  You are welcome to subscribe.  The more people on the list, the better!

To subscribe, send mail to @email{wget-subscribe@@sunsite.auc.dk}, with the magic word @samp{subscribe} in the subject line.  Unsubscribe by mailing to @email{wget-unsubscribe@@sunsite.auc.dk}.

The mailing list is archived at @url{http://fly.cc.fer.hr/archive/wget}.

@node Reporting Bugs, Portability, Mailing List, Various
@section Reporting Bugs
@cindex bugs
@cindex reporting bugs
@cindex bug reports

You are welcome to send bug reports about GNU Wget to @email{bug-wget@@gnu.org}.  The bugs that you think are of interest to the public (i.e. more people should be informed about them) can be Cc-ed to the mailing list at @email{wget@@sunsite.auc.dk}.

Before actually submitting a bug report, please try to follow a few simple guidelines.

@enumerate
@item
Please try to ascertain that the behaviour you see really is a bug.  If Wget crashes, it's a bug.  If Wget does not behave as documented, it's a bug.  If things work strangely, but you are not sure about the way they are supposed to work, it might well be a bug.

@item
Try to repeat the bug in as simple circumstances as possible.  E.g. if Wget crashes on @samp{wget -rLl0 -t5 -Y0 http://yoyodyne.com -o /tmp/log}, you should try to see if it will crash with a simpler set of options.

Also, while I will probably be interested to know the contents of your @file{.wgetrc} file, just dumping it into the debug message is probably a bad idea.  Instead, you should first try to see if the bug repeats with @file{.wgetrc} moved out of the way.  Only if it turns out that @file{.wgetrc} settings affect the bug should you mail me the relevant parts of the file.

@item
Please start Wget with the @samp{-d} option and send the log (or the relevant parts of it).  If Wget was compiled without debug support, recompile it.  It is @emph{much} easier to trace bugs with debug support on.

@item
If Wget has crashed, try to run it in a debugger, e.g. @code{gdb `which wget` core} and type @code{where} to get the backtrace.

@item
Find where the bug is, fix it and send me the patches.
:-)
@end enumerate

@node Portability, Signals, Reporting Bugs, Various
@section Portability
@cindex portability
@cindex operating systems

Since Wget uses GNU Autoconf for building and configuring, and avoids using ``special'' ultra--mega--cool features of any particular Unix, it should compile (and work) on all common Unix flavors.

Various Wget versions have been compiled and tested under many kinds of Unix systems, including Solaris, Linux, SunOS, OSF (aka Digital Unix), Ultrix, *BSD, IRIX, and others; refer to the file @file{MACHINES} in the distribution directory for a comprehensive list.  If you compile it on an architecture not listed there, please let me know so I can update it.

Wget should also compile on other Unix systems not listed in @file{MACHINES}.  If it doesn't, please let me know.

Thanks to kind contributors, this version of Wget compiles and works on Microsoft Windows 95 and Windows NT platforms.  It has been compiled successfully using MS Visual C++ 4.0, Watcom, and Borland C compilers, with Winsock as networking software.  Naturally, it lacks some of the features available on Unix, but it should work as a substitute for people stuck with Windows.  Note that the Windows port is @strong{neither tested nor maintained} by me---all questions and problems should be reported to the Wget mailing list at @email{wget@@sunsite.auc.dk} where the maintainers will look at them.

@node Signals,  , Portability, Various
@section Signals
@cindex signal handling
@cindex hangup

Since the purpose of Wget is background work, it catches the hangup signal (@code{SIGHUP}) and ignores it.  If the output was on standard output, it will be redirected to a file named @file{wget-log}.  Otherwise, @code{SIGHUP} is ignored.  This is convenient when you wish to redirect the output of Wget after having started it.

@example
$ wget http://www.ifi.uio.no/~larsi/gnus.tar.gz &
$ kill -HUP %%     # Redirect the output to wget-log
@end example

Other than that, Wget will not try to interfere with signals in any way.  @kbd{C-c}, @code{kill -TERM} and @code{kill -KILL} should kill it alike.

@node Appendices, Copying, Various, Top
@chapter Appendices

This chapter contains some references I consider useful, like the Robots Exclusion Standard specification, as well as a list of contributors to GNU Wget.

@menu
* Robots::                   Wget as a WWW robot.
* Security Considerations::  Security with Wget.
* Contributors::             People who helped.
@end menu

@node Robots, Security Considerations, Appendices, Appendices
@section Robots
@cindex robots
@cindex robots.txt
@cindex server maintenance

Since Wget is able to traverse the web, it counts as one of the Web @dfn{robots}.  Thus Wget understands the @dfn{Robots Exclusion Standard} (@sc{res})---contents of @file{/robots.txt}, used by server administrators to shield parts of their systems from the wanderings of Wget.

Norobots support is turned on only when retrieving recursively, and @emph{never} for the first page.  Thus, you may issue:

@example
wget -r http://fly.cc.fer.hr/
@end example

First the index of @samp{fly.cc.fer.hr} will be downloaded.  If Wget finds anything worth downloading on the same host, only @emph{then} will it load the robots, and decide whether or not to load the links after all.  @file{/robots.txt} is loaded only once per host.  Wget does not support the robots @code{META} tag.

The description of the norobots standard was written and is maintained by Martijn Koster @email{m.koster@@webcrawler.com}.  With his permission, I contribute a (slightly modified) texified version of the @sc{res}.
@menu
* Introduction to RES::
* RES Format::
* User-Agent Field::
* Disallow Field::
* Norobots Examples::
@end menu

@node Introduction to RES, RES Format, Robots, Robots
@subsection Introduction to RES
@cindex norobots introduction

@dfn{WWW Robots} (also called @dfn{wanderers} or @dfn{spiders}) are programs that traverse many pages in the World Wide Web by recursively retrieving linked pages.  For more information see the robots page.

In 1993 and 1994 there were occasions where robots visited @sc{www} servers where they weren't welcome for various reasons.  Sometimes these reasons were robot specific, e.g. certain robots swamped servers with rapid-fire requests, or retrieved the same files repeatedly.  In other situations robots traversed parts of @sc{www} servers that weren't suitable, e.g. very deep virtual trees, duplicated information, temporary information, or cgi-scripts with side-effects (such as voting).

These incidents indicated the need for established mechanisms for @sc{www} servers to indicate to robots which parts of their server should not be accessed.  This standard addresses this need with an operational solution.

This document represents a consensus on 30 June 1994 on the robots mailing list (@code{robots@@webcrawler.com}), between the majority of robot authors and other people with an interest in robots.  It has also been open for discussion on the Technical World Wide Web mailing list (@code{www-talk@@info.cern.ch}).  This document is based on a previous working draft under the same title.

It is not an official standard backed by a standards body, or owned by any commercial organization.  It is not enforced by anybody, and there is no guarantee that all current and future robots will use it.  Consider it a common facility the majority of robot authors offer the @sc{www} community to protect @sc{www} servers against unwanted accesses by their robots.

The latest version of this document can be found at @url{http://info.webcrawler.com/mak/projects/robots/norobots.html}.

@node RES Format, User-Agent Field, Introduction to RES, Robots
@subsection RES Format
@cindex norobots format

The format and semantics of the @file{/robots.txt} file are as follows:

The file consists of one or more records separated by one or more blank lines (terminated by @code{CR}, @code{CR/NL}, or @code{NL}).  Each record contains lines of the form:

@example
<field>:<optionalspace><value><optionalspace>
@end example

The field name is case insensitive.

Comments can be included in the file using UNIX Bourne shell conventions: the @samp{#} character is used to indicate that preceding space (if any) and the remainder of the line up to the line termination is discarded.  Lines containing only a comment are discarded completely, and therefore do not indicate a record boundary.

The record starts with one or more User-agent lines, followed by one or more Disallow lines, as detailed below.  Unrecognized headers are ignored.

The presence of an empty @file{/robots.txt} file has no explicit associated semantics; it will be treated as if it was not present, i.e. all robots will consider themselves welcome.

@node User-Agent Field, Disallow Field, RES Format, Robots
@subsection User-Agent Field
@cindex norobots user-agent

The value of this field is the name of the robot the record is describing access policy for.

If more than one User-agent field is present the record describes an identical access policy for more than one robot.  At least one field needs to be present per record.

The robot should be liberal in interpreting this field.
A case insensitive substring match of the name without version information is recommended.

If the value is @samp{*}, the record describes the default access policy for any robot that has not matched any of the other records.  It is not allowed to have multiple such records in the @file{/robots.txt} file.

@node Disallow Field, Norobots Examples, User-Agent Field, Robots
@subsection Disallow Field
@cindex norobots disallow

The value of this field specifies a partial @sc{url} that is not to be visited.  This can be a full path, or a partial path; any @sc{url} that starts with this value will not be retrieved.  For example, @w{@samp{Disallow: /help}} disallows both @samp{/help.html} and @samp{/help/index.html}, whereas @w{@samp{Disallow: /help/}} would disallow @samp{/help/index.html} but allow @samp{/help.html}.

An empty value indicates that all @sc{url}s can be retrieved.  At least one Disallow field needs to be present in a record.

@node Norobots Examples,  , Disallow Field, Robots
@subsection Norobots Examples
@cindex norobots examples

The following example @samp{/robots.txt} file specifies that no robots should visit any @sc{url} starting with @samp{/cyberworld/map/} or @samp{/tmp/}:

@example
# robots.txt for http://www.site.com/

User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
Disallow: /tmp/ # these will soon disappear
@end example

This example @samp{/robots.txt} file specifies that no robots should visit any @sc{url} starting with @samp{/cyberworld/map/}, except the robot called @samp{cybermapper}:

@example
# robots.txt for http://www.site.com/

User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space

# Cybermapper knows where to go.
User-agent: cybermapper
Disallow:
@end example

This example indicates that no robots should visit this site further:

@example
# go away
User-agent: *
Disallow: /
@end example

@node Security Considerations, Contributors, Robots, Appendices
@section Security Considerations
@cindex security

When using Wget, you must be aware that it sends unencrypted passwords through the network, which may present a security problem.  Here are the main issues, and some solutions.

@enumerate
@item
The passwords on the command line are visible using @code{ps}.  If this is a problem, avoid passing passwords on the command line---e.g. you can use @file{.netrc} for this.

@item
Using the insecure @dfn{basic} authentication scheme, unencrypted passwords are transmitted through the network routers and gateways.

@item
The @sc{ftp} passwords are also in no way encrypted.  There is no good solution for this at the moment.

@item
Although the ``normal'' output of Wget tries to hide the passwords, debugging logs show them, in all forms.  This problem is avoided by being careful when you send debug logs (yes, even when you send them to me).
@end enumerate

@node Contributors,  , Security Considerations, Appendices
@section Contributors
@cindex contributors

@iftex
GNU Wget was written by Hrvoje Nik@v{s}i@'{c} @email{hniksic@@srce.hr}.
@end iftex
@ifinfo
GNU Wget was written by Hrvoje Niksic @email{hniksic@@srce.hr}.
@end ifinfo

However, its development could never have gone as far as it has, were it not for the help of many people, either with bug reports, feature proposals, patches, or letters saying ``Thanks!''.

Special thanks go to the following people (in no particular order):

@itemize @bullet
@item
Karsten Thygesen---donated the mailing list and the initial @sc{ftp} space.

@item
Shawn McHorse---bug reports and patches.

@item
Kaveh R.
Ghazi---on-the-fly @code{ansi2knr}-ization.

@item
Gordon Matzigkeit---@file{.netrc} support.

@item
@iftex
Zlatko @v{C}alu@v{s}i@'{c}, Tomislav Vujec and Dra@v{z}en Ka@v{c}ar---feature suggestions and ``philosophical'' discussions.
@end iftex
@ifinfo
Zlatko Calusic, Tomislav Vujec and Drazen Kacar---feature suggestions and ``philosophical'' discussions.
@end ifinfo

@item
Darko Budor---initial port to Windows.

@item
Antonio Rosella---help and suggestions, plus the Italian translation.

@item
@iftex
Tomislav Petrovi@'{c}, Mario Miko@v{c}evi@'{c}---many bug reports and suggestions.
@end iftex
@ifinfo
Tomislav Petrovic, Mario Mikocevic---many bug reports and suggestions.
@end ifinfo

@item
@iftex
Fran@,{c}ois Pinard---many thorough bug reports and discussions.
@end iftex
@ifinfo
Francois Pinard---many thorough bug reports and discussions.
@end ifinfo

@item
Karl Eichwalder---lots of help with internationalization and other things.

@item
Junio Hamano---donated support for Opie and @sc{http} @code{Digest} authentication.

@item
Brian Gough---a generous donation.
@end itemize

The following people have provided patches, bug/build reports, useful suggestions, beta testing services, fan mail and all the other things that make maintenance so much fun:

Tim Adam, Martin Baehr, Dieter Baron, Roger Beeman and the Gurus at Cisco, Mark Boyns, John Burden, Wanderlei Cavassin, Gilles Cedoc, Tim Charron, Noel Cragg,
@iftex
Kristijan @v{C}onka@v{s},
@end iftex
@ifinfo
Kristijan Conkas,
@end ifinfo
@iftex
Damir D@v{z}eko,
@end iftex
@ifinfo
Damir Dzeko,
@end ifinfo
Andrew Davison, Ulrich Drepper, Marc Duponcheel,
@iftex
Aleksandar Erkalovi@'{c},
@end iftex
@ifinfo
Aleksandar Erkalovic,
@end ifinfo
Andy Eskilsson, Masashi Fujita, Howard Gayle, Marcel Gerrits, Hans Grobler, Mathieu Guillaume, Karl Heuer, Gregor Hoffleit, Erik Magnus Hulthen, Richard Huveneers, Simon Josefsson,
@iftex
Mario Juri@'{c},
@end iftex
@ifinfo
Mario Juric,
@end ifinfo
@iftex
Goran Kezunovi@'{c},
@end iftex
@ifinfo
Goran Kezunovic,
@end ifinfo
Robert Kleine, Fila Kolodny, Alexander Kourakos, Martin Kraemer,
@tex
$\Sigma\acute{\iota}\mu o\varsigma\;
\Xi\varepsilon\nu\iota\tau\acute{\epsilon}\lambda\lambda\eta\varsigma$
(Simos KSenitellis),
@end tex
@ifinfo
Simos KSenitellis,
@end ifinfo
Tage Stabell-Kulo, Hrvoje Lacko, Dave Love, Jordan Mendelson, Lin Zhe Min, Charlie Negyesi, Andrew Pollock, Steve Pothier, Marin Purgar, Jan Prikryl, Keith Refson, Tobias Ringstrom,
@c Texinfo doesn't grok @'{@i}, so we have to use TeX itself.
@tex
Juan Jos\'{e} Rodr\'{\i}gues,
@end tex
@ifinfo
Juan Jose Rodrigues,
@end ifinfo
Heinz Salzmann, Robert Schmidt, Toomas Soome, Sven Sternberger, Markus Strasser, Szakacsits Szabolcs, Mike Thomas, Russell Vincent, Douglas E. Wegscheid, Jasmin Zainul,
@iftex
Bojan @v{Z}drnja,
@end iftex
@ifinfo
Bojan Zdrnja,
@end ifinfo
Kristijan Zimmer.

Apologies to all whom I accidentally left out, and many thanks to all the subscribers of the Wget mailing list.

@node Copying, Concept Index, Appendices, Top
@unnumbered GNU GENERAL PUBLIC LICENSE
@cindex copying
@cindex GPL

@center Version 2, June 1991

@display
Copyright @copyright{} 1989, 1991 Free Software Foundation, Inc.
675 Mass Ave, Cambridge, MA 02139, USA

Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
@end display

@unnumberedsec Preamble

The licenses for most software are designed to take away your freedom to share and change it.
By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software---to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. @iftex @unnumberedsec TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION @end iftex @ifinfo @center TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION @end ifinfo @enumerate @item This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The ``Program'', below, refers to any such program or work, and a ``work based on the Program'' means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term ``modification''.) Each licensee is addressed as ``you''. Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). 
Whether that is true depends on what the Program does. @item You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. @item You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: @enumerate a @item You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. @item You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. @item If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) @end enumerate These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 
@item You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: @enumerate a @item Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, @item Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, @item Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) @end enumerate The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. @item You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. @item You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. @item Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 
@item If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. @item If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. @item The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and ``any later version'', you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. @item If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. 
@iftex @heading NO WARRANTY @end iftex @ifinfo @center NO WARRANTY @end ifinfo @cindex no warranty @item BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM ``AS IS'' WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. @item IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. @end enumerate @iftex @heading END OF TERMS AND CONDITIONS @end iftex @ifinfo @center END OF TERMS AND CONDITIONS @end ifinfo @page @unnumberedsec How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the ``copyright'' line and a pointer to where the full notice is found. @smallexample @var{one line to give the program's name and an idea of what it does.} Copyright (C) 19@var{yy} @var{name of author} This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. @end smallexample Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: @smallexample Gnomovision version 69, Copyright (C) 19@var{yy} @var{name of author} Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. @end smallexample The hypothetical commands @samp{show w} and @samp{show c} should show the appropriate parts of the General Public License. 
Of course, the commands you use may be called something other than @samp{show w} and @samp{show c}; they could even be mouse-clicks or menu items---whatever suits your program.

You should also get your employer (if you work as a programmer) or your school, if any, to sign a ``copyright disclaimer'' for the program, if necessary. Here is a sample; alter the names:

@smallexample
@group
Yoyodyne, Inc., hereby disclaims all copyright
interest in the program `Gnomovision'
(which makes passes at compilers) written
by James Hacker.

@var{signature of Ty Coon}, 1 April 1989
Ty Coon, President of Vice
@end group
@end smallexample

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License.

@node Concept Index, , Copying, Top
@unnumbered Concept Index
@printindex cp

@contents
@bye

[file: reloc/etc/wgetrc]

###
### Sample Wget initialization file .wgetrc
###

## You can use this file to change the default behaviour of wget or to
## avoid having to type many many command-line options. This file does
## not contain a comprehensive list of commands -- look at the manual
## to find out what you can put into this file.
##
## Wget initialization file can reside in /usr/local/etc/wgetrc
## (global, for all users) or $HOME/.wgetrc (for a single user).
##
## To use any of the settings in this file, you will have to uncomment
## them (and probably change them).

##
## Global settings (useful for setting up in /usr/local/etc/wgetrc).
## Think well before you change them, since they may reduce wget's
## functionality, and make it behave contrary to the documentation:
##

# You can set retrieve quota for beginners by specifying a value
# optionally followed by 'K' (kilobytes) or 'M' (megabytes). The
# default quota is unlimited.
#quota = inf

# You can lower (or raise) the default number of retries when
# downloading a file (default is 20).
#tries = 20

# Lowering the maximum depth of the recursive retrieval is handy to
# prevent newbies from going too "deep" when they unwittingly start
# the recursive retrieval. The default is 5.
#reclevel = 5

# Many sites are behind firewalls that do not allow initiation of
# connections from the outside. On these sites you have to use the
# `passive' feature of FTP. If you are behind such a firewall, you
# can turn this on to make Wget use passive FTP by default.
#passive_ftp = off

##
## Local settings (for a user to set in his $HOME/.wgetrc). It is
## *highly* undesirable to put these settings in the global file, since
## they are potentially dangerous to "normal" users.
##
## Even when setting up your own ~/.wgetrc, you should know what you
## are doing before doing so.
##

# Set this to on to use timestamping by default:
#timestamping = off

# It is a good idea to make Wget send your email address in a `From:'
# header with your request (so that server administrators can contact
# you in case of errors). Wget does *not* send `From:' by default.
#header = From: Your Name <username@site.domain>

# You can set up other headers, like Accept-Language. Accept-Language
# is *not* sent by default.
#header = Accept-Language: en

# You can set the default proxy for Wget to use. It will override the
# value in the environment.
#http_proxy = http://proxy.yoyodyne.com:18023/

# If you do not want to use proxy at all, set this to off.
#use_proxy = on

# You can customize the retrieval outlook. Valid options are default,
# binary, mega and micro.
#dot_style = default

# Setting this to off makes Wget not download /robots.txt. Be sure to
# know *exactly* what /robots.txt is and how it is used before changing
# the default!
#robots = on

# It can be useful to make Wget wait between connections. Set this to
# the number of seconds you want Wget to wait.
#wait = 0

# You can force creating directory structure, even if a single file is
# being retrieved, by setting this to on.
#dirstruct = off

# You can turn on recursive retrieving by default (don't do this if
# you are not sure you know what it means) by setting this to on.
#recursive = off

# To have Wget follow FTP links from HTML files by default, set this
# to on:
#follow_ftp = off

[file: reloc/info/wget.info]

This is Info file wget.info, produced by Makeinfo version 1.67 from the input file ./wget.texi.

INFO-DIR-SECTION Net Utilities
INFO-DIR-SECTION World Wide Web
START-INFO-DIR-ENTRY
* Wget: (wget).         The non-interactive network downloader.
END-INFO-DIR-ENTRY

This file documents the GNU Wget utility for downloading network data.

Copyright (C) 1996, 1997, 1998 Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided also that the sections entitled "Copying" and "GNU General Public License" are included exactly as in the original, and provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.
[file: reloc/info/wget.info-1]

File: wget.info,  Node: Top,  Next: Overview,  Prev: (dir),  Up: (dir)

Wget 1.5.3
**********

This manual documents version 1.5.3 of GNU Wget, the freely available utility for network download.

Copyright (C) 1996, 1997, 1998 Free Software Foundation, Inc.

* Menu:

* Overview::            Features of Wget.
* Invoking::            Wget command-line arguments.
* Recursive Retrieval:: Description of recursive retrieval.
* Following Links::     The available methods of chasing links.
* Time-Stamping::       Mirroring according to time-stamps.
* Startup File::        Wget's initialization file.
* Examples::            Examples of usage.
* Various::             The stuff that doesn't fit anywhere else.
* Appendices::          Some useful references.
* Copying::             You may give out copies of Wget.
* Concept Index::       Topics covered by this manual.
File: wget.info,  Node: Overview,  Next: Invoking,  Prev: Top,  Up: Top

Overview
********

GNU Wget is a freely available network utility to retrieve files from the World Wide Web, using HTTP (Hyper Text Transfer Protocol) and FTP (File Transfer Protocol), the two most widely used Internet protocols. It has many useful features to make downloading easier, some of them being:

   * Wget is non-interactive, meaning that it can work in the background, while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting Wget finish the work. By contrast, most Web browsers require the user's constant presence, which can be a great hindrance when transferring a lot of data.

   * Wget is capable of descending recursively through the structure of HTML documents and FTP directory trees, making a local copy of the directory hierarchy similar to the one on the remote server. This feature can be used to mirror archives and home pages, or traverse the web in search of data, like a WWW robot (*Note Robots::). In that spirit, Wget understands the `norobots' convention.

   * File name wildcard matching and recursive mirroring of directories are available when retrieving via FTP.

   * Wget can read the time-stamp information given by both HTTP and FTP servers, and store it locally. Thus Wget can see if the remote file has changed since the last retrieval, and automatically retrieve the new version if it has. This makes Wget suitable for mirroring of FTP sites, as well as home pages.

   * Wget works exceedingly well on slow or unstable connections, retrying the document until it is fully retrieved, or until a user-specified retry count is surpassed. It will try to resume the download from the point of interruption, using `REST' with FTP and `Range' with HTTP servers that support them.

   * By default, Wget supports proxy servers, which can lighten the network load, speed up retrieval and provide access behind firewalls. However, if you are behind a firewall that requires a socks style gateway, you can get the socks library and build Wget with support for socks. Wget also supports passive FTP downloading as an option.

   * Built-in features offer mechanisms to tune which links you wish to follow (*Note Following Links::).

   * The retrieval is conveniently traced by printing dots, each dot representing a fixed amount of data received (1KB by default). These representations can be customized to your preferences.

   * Most of the features are fully configurable, either through command-line options, or via the initialization file `.wgetrc' (*Note Startup File::). Wget allows you to define "global" startup files (`/usr/local/etc/wgetrc' by default) for site settings.

   * Finally, GNU Wget is free software. This means that everyone may use it, redistribute it and/or modify it under the terms of the GNU General Public License, as published by the Free Software Foundation (*Note Copying::).


File: wget.info,  Node: Invoking,  Next: Recursive Retrieval,  Prev: Overview,  Up: Top

Invoking
********

By default, Wget is very simple to invoke. The basic syntax is:

     wget [OPTION]... [URL]...

Wget will simply download all the URLs specified on the command line. URL is a "Uniform Resource Locator", as defined below.

However, you may wish to change some of the default parameters of Wget. You can do it two ways: permanently, adding the appropriate command to `.wgetrc' (*Note Startup File::), or specifying it on the command line.
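As a quick illustration, the two commands below should be equivalent ways of raising the retry count--the first uses a command-line option, the second executes the corresponding `.wgetrc' command via `-e' (the URL is only an illustration; both options are described in the nodes below):

     wget --tries=10 http://fly.cc.fer.hr/
     wget -e 'tries = 10' http://fly.cc.fer.hr/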
* Menu:

* URL Format::
* Option Syntax::
* Basic Startup Options::
* Logging and Input File Options::
* Download Options::
* Directory Options::
* HTTP Options::
* FTP Options::
* Recursive Retrieval Options::
* Recursive Accept/Reject Options::


File: wget.info,  Node: URL Format,  Next: Option Syntax,  Prev: Invoking,  Up: Invoking

URL Format
==========

"URL" is an acronym for Uniform Resource Locator. A uniform resource locator is a compact string representation for a resource available via the Internet. Wget recognizes the URL syntax as per RFC1738. This is the most widely used form (square brackets denote optional parts):

     http://host[:port]/directory/file
     ftp://host[:port]/directory/file

You can also encode your username and password within a URL:

     ftp://user:password@host/path
     http://user:password@host/path

Either USER or PASSWORD, or both, may be left out. If you leave out either the HTTP username or password, no authentication will be sent. If you leave out the FTP username, `anonymous' will be used. If you leave out the FTP password, your email address will be supplied as a default password.(1)

You can encode unsafe characters in a URL as `%xy', `xy' being the hexadecimal representation of the character's ASCII value. Some common unsafe characters include `%' (quoted as `%25'), `:' (quoted as `%3A'), and `@' (quoted as `%40'). Refer to RFC1738 for a comprehensive list of unsafe characters.

Wget also supports the `type' feature for FTP URLs. By default, FTP documents are retrieved in binary mode (type `i'), which means that they are downloaded unchanged. Another useful mode is the `a' ("ASCII") mode, which converts the line delimiters between the different operating systems, and is thus useful for text files. Here is an example:

     ftp://host/directory/file;type=a

Two alternative variants of URL specification are also supported, because of historical (hysterical?) reasons and their wide-spreadedness.

FTP-only syntax (supported by `NcFTP'):

     host:/dir/file

HTTP-only syntax (introduced by `Netscape'):

     host[:port]/dir/file

These two alternative forms are deprecated, and may cease being supported in the future. If you do not understand the difference between these notations, or do not know which one to use, just use the plain ordinary format you use with your favorite browser, like `Lynx' or `Netscape'.

---------- Footnotes ----------

(1) If you have a `.netrc' file in your home directory, the password will also be searched for there.


File: wget.info,  Node: Option Syntax,  Next: Basic Startup Options,  Prev: URL Format,  Up: Invoking

Option Syntax
=============

Since Wget uses GNU getopt to process its arguments, every option has a short form and a long form. Long options are more convenient to remember, but take time to type. You may freely mix different option styles, or specify options after the command-line arguments. Thus you may write:

     wget -r --tries=10 http://fly.cc.fer.hr/ -o log

The space between the option accepting an argument and the argument may be omitted. Instead of `-o log' you can write `-olog'.

You may put several options that do not require arguments together, like:

     wget -drc URL

This is completely equivalent to:

     wget -d -r -c URL

Since the options can be specified after the arguments, you may terminate them with `--'. So the following will try to download URL `-x', reporting failure to `log':

     wget -o log -- -x

The options that accept comma-separated lists all respect the convention that specifying an empty list clears its value.
This can be useful to clear the `.wgetrc' settings. For instance, if your `.wgetrc' sets `exclude_directories' to `/cgi-bin', the following example will first reset it, and then set it to exclude `/~nobody' and `/~somebody'. You can also clear the lists in `.wgetrc' (*Note Wgetrc Syntax::).

     wget -X '' -X /~nobody,/~somebody


File: wget.info,  Node: Basic Startup Options,  Next: Logging and Input File Options,  Prev: Option Syntax,  Up: Invoking

Basic Startup Options
=====================

`-V'
`--version'
     Display the version of Wget.

`-h'
`--help'
     Print a help message describing all of Wget's command-line options.

`-b'
`--background'
     Go to background immediately after startup. If no output file is specified via `-o', output is redirected to `wget-log'.

`-e COMMAND'
`--execute COMMAND'
     Execute COMMAND as if it were a part of `.wgetrc' (*Note Startup File::). A command thus invoked will be executed *after* the commands in `.wgetrc', thus taking precedence over them.


File: wget.info,  Node: Logging and Input File Options,  Next: Download Options,  Prev: Basic Startup Options,  Up: Invoking

Logging and Input File Options
==============================

`-o LOGFILE'
`--output-file=LOGFILE'
     Log all messages to LOGFILE. The messages are normally reported to standard error.

`-a LOGFILE'
`--append-output=LOGFILE'
     Append to LOGFILE. This is the same as `-o', only it appends to LOGFILE instead of overwriting the old log file. If LOGFILE does not exist, a new file is created.

`-d'
`--debug'
     Turn on debug output, meaning various information important to the developers of Wget if it does not work properly. Your system administrator may have chosen to compile Wget without debug support, in which case `-d' will not work. Please note that compiling with debug support is always safe--Wget compiled with the debug support will *not* print any debug info unless requested with `-d'. *Note Reporting Bugs:: for more information on how to use `-d' for sending bug reports.

`-q'
`--quiet'
     Turn off Wget's output.

`-v'
`--verbose'
     Turn on verbose output, with all the available data. The default output is verbose.

`-nv'
`--non-verbose'
     Non-verbose output--turn off verbose without being completely quiet (use `-q' for that), which means that error messages and basic information still get printed.

`-i FILE'
`--input-file=FILE'
     Read URLs from FILE, in which case no URLs need to be on the command line. If there are URLs both on the command line and in an input file, those on the command lines will be the first ones to be retrieved. The FILE need not be an HTML document (but no harm if it is)--it is enough if the URLs are just listed sequentially. However, if you specify `--force-html', the document will be regarded as `html'. In that case you may have problems with relative links, which you can solve either by adding `<base href="URL">' to the documents or by specifying `--base=URL' on the command line.

`-F'
`--force-html'
     When input is read from a file, force it to be treated as an HTML file. This enables you to retrieve relative links from existing HTML files on your local disk, by adding `<base href="URL">' to HTML, or using the `--base' command-line option.


File: wget.info,  Node: Download Options,  Next: Directory Options,  Prev: Logging and Input File Options,  Up: Invoking

Download Options
================

`-t NUMBER'
`--tries=NUMBER'
     Set number of retries to NUMBER. Specify 0 or `inf' for infinite retrying.
`-O FILE'
`--output-document=FILE'
     The documents will not be written to the appropriate files, but all will be concatenated together and written to FILE. If FILE already exists, it will be overwritten. If the FILE is `-', the documents will be written to standard output. Including this option automatically sets the number of tries to 1.

`-nc'
`--no-clobber'
     Do not clobber existing files when saving to a directory hierarchy within recursive retrieval of several files. This option is *extremely* useful when you wish to continue where you left off with retrieval of many files. If the files have the `.html' or (yuck) `.htm' suffix, they will be loaded from the local disk, and parsed as if they had been retrieved from the Web.

`-c'
`--continue'
     Continue getting an existing file. This is useful when you want to finish up the download started by another program, or a previous instance of Wget. Thus you can write:

          wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z

     If there is a file named `ls-lR.Z' in the current directory, Wget will assume that it is the first portion of the remote file, and will require the server to continue the retrieval from an offset equal to the length of the local file.

     Note that you need not specify this option if all you want is Wget to continue retrieving where it left off when the connection is lost--Wget does this by default. You need this option only when you want to continue retrieval of a file already halfway retrieved, saved by another FTP client, or left by Wget being killed.

     Without `-c', the previous example would just begin to download the remote file to `ls-lR.Z.1'. The `-c' option is also applicable for HTTP servers that support the `Range' header.

`--dot-style=STYLE'
     Set the retrieval style to STYLE. Wget traces the retrieval of each document by printing dots on the screen, each dot representing a fixed amount of retrieved data. Any number of dots may be separated in a "cluster", to make counting easier. This option allows you to choose one of the pre-defined styles, determining the number of bytes represented by a dot, the number of dots in a cluster, and the number of dots on the line.

     With the `default' style each dot represents 1K, there are ten dots in a cluster and 50 dots in a line. The `binary' style has a more "computer"-like orientation--8K dots, 16-dot clusters and 48 dots per line (which makes for 384K per line). The `mega' style is suitable for downloading very large files--each dot represents 64K retrieved, there are eight dots in a cluster, and 48 dots on each line (so each line contains 3M). The `micro' style is exactly the reverse; it is suitable for downloading small files, with 128-byte dots, 8 dots per cluster, and 48 dots (6K) per line.

`-N'
`--timestamping'
     Turn on time-stamping. *Note Time-Stamping:: for details.

`-S'
`--server-response'
     Print the headers sent by HTTP servers and responses sent by FTP servers.

`--spider'
     When invoked with this option, Wget will behave as a Web "spider", which means that it will not download the pages, just check that they are there. You can use it to check your bookmarks, e.g. with:

          wget --spider --force-html -i bookmarks.html

     This feature needs much more work for Wget to get close to the functionality of real WWW spiders.

`-T SECONDS'
`--timeout=SECONDS'
     Set the read timeout to SECONDS seconds. Whenever a network read is issued, the file descriptor is checked for a timeout, which could otherwise leave a pending connection (uninterrupted read). The default timeout is 900 seconds (fifteen minutes).
     Setting timeout to 0 will disable checking for timeouts. Please do not lower the default timeout value with this option unless you know what you are doing.

`-w SECONDS'
`--wait=SECONDS'
     Wait the specified number of seconds between the retrievals. Use of this option is recommended, as it lightens the server load by making the requests less frequent. Instead of in seconds, the time can be specified in minutes using the `m' suffix, in hours using the `h' suffix, or in days using the `d' suffix.

     Specifying a large value for this option is useful if the network or the destination host is down, so that Wget can wait long enough to reasonably expect the network error to be fixed before the retry.

`-Y on/off'
`--proxy=on/off'
     Turn proxy support on or off. The proxy is on by default if the appropriate environment variable is defined.

`-Q QUOTA'
`--quota=QUOTA'
     Specify download quota for automatic retrievals. The value can be specified in bytes (default), kilobytes (with `k' suffix), or megabytes (with `m' suffix).

     Note that quota will never affect downloading a single file. So if you specify `wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz', all of the `ls-lR.gz' will be downloaded. The same goes even when several URLs are specified on the command line. However, quota is respected when retrieving either recursively, or from an input file. Thus you may safely type `wget -Q2m -i sites'--download will be aborted when the quota is exceeded.

     Setting quota to 0 or to `inf' unlimits the download quota.


File: wget.info,  Node: Directory Options,  Next: HTTP Options,  Prev: Download Options,  Up: Invoking

Directory Options
=================

`-nd'
`--no-directories'
     Do not create a hierarchy of directories when retrieving recursively. With this option turned on, all files will get saved to the current directory, without clobbering (if a name shows up more than once, the filenames will get extensions `.n').

`-x'
`--force-directories'
     The opposite of `-nd'--create a hierarchy of directories, even if one would not have been created otherwise. E.g. `wget -x http://fly.cc.fer.hr/robots.txt' will save the downloaded file to `fly.cc.fer.hr/robots.txt'.

`-nH'
`--no-host-directories'
     Disable generation of host-prefixed directories. By default, invoking Wget with `-r http://fly.cc.fer.hr/' will create a structure of directories beginning with `fly.cc.fer.hr/'. This option disables such behavior.

`--cut-dirs=NUMBER'
     Ignore NUMBER directory components. This is useful for getting fine-grained control over the directory where recursive retrieval will be saved.

     Take, for example, the directory at `ftp://ftp.xemacs.org/pub/xemacs/'. If you retrieve it with `-r', it will be saved locally under `ftp.xemacs.org/pub/xemacs/'. While the `-nH' option can remove the `ftp.xemacs.org/' part, you are still stuck with `pub/xemacs'. This is where `--cut-dirs' comes in handy; it makes Wget not "see" NUMBER remote directory components. Here are several examples of how the `--cut-dirs' option works.

          No options        -> ftp.xemacs.org/pub/xemacs/
          -nH               -> pub/xemacs/
          -nH --cut-dirs=1  -> xemacs/
          -nH --cut-dirs=2  -> .

          --cut-dirs=1      -> ftp.xemacs.org/xemacs/
          ...

     If you just want to get rid of the directory structure, this option is similar to a combination of `-nd' and `-P'. However, unlike `-nd', `--cut-dirs' does not lose subdirectories--for instance, with `-nH --cut-dirs=1', a `beta/' subdirectory will be placed in `xemacs/beta', as one would expect.

`-P PREFIX'
`--directory-prefix=PREFIX'
     Set directory prefix to PREFIX.
The "directory prefix" is the directory where all other files and subdirectories will be saved to, i.e. the top of the retrieval tree. The default is `.' (the current directory).  File: wget.info, Node: HTTP Options, Next: FTP Options, Prev: Directory Options, Up: Invoking HTTP Options ============ `--http-user=USER' `--http-passwd=PASSWORD' Specify the username USER and password PASSWORD on an HTTP server. According to the type of the challenge, Wget will encode them using either the `basic' (insecure) or the `digest' authentication scheme. Another way to specify username and password is in the URL itself (*Note URL Format::). For more information about security issues with Wget, *Note Security Considerations::. `-C on/off' `--cache=on/off' When set to off, disable server-side cache. In this case, Wget will send the remote server an appropriate directive (`Pragma: no-cache') to get the file from the remote service, rather than returning the cached version. This is especially useful for retrieving and flushing out-of-date documents on proxy servers. Caching is allowed by default. `--ignore-length' Unfortunately, some HTTP servers (CGI programs, to be more precise) send out bogus `Content-Length' headers, which makes Wget go wild, as it thinks not all the document was retrieved. You can spot this syndrome if Wget retries getting the same document again and again, each time claiming that the (otherwise normal) connection has closed on the very same byte. With this option, Wget will ignore the `Content-Length' header--as if it never existed. `--header=ADDITIONAL-HEADER' Define an ADDITIONAL-HEADER to be passed to the HTTP servers. Headers must contain a `:' preceded by one or more non-blank characters, and must not contain newlines. You may define more than one additional header by specifying `--header' more than once. wget --header='Accept-Charset: iso-8859-2' \ --header='Accept-Language: hr' \ http://fly.cc.fer.hr/ Specification of an empty string as the header value will clear all previous user-defined headers. `--proxy-user=USER' `--proxy-passwd=PASSWORD' Specify the username USER and password PASSWORD for authentication on a proxy server. Wget will encode them using the `basic' authentication scheme. `-s' `--save-headers' Save the headers sent by the HTTP server to the file, preceding the actual contents, with an empty line as the separator. `-U AGENT-STRING' `--user-agent=AGENT-STRING' Identify as AGENT-STRING to the HTTP server. The HTTP protocol allows the clients to identify themselves using a `User-Agent' header field. This enables distinguishing the WWW software, usually for statistical purposes or for tracing of protocol violations. Wget normally identifies as `Wget/VERSION', VERSION being the current version number of Wget. However, some sites have been known to impose the policy of tailoring the output according to the `User-Agent'-supplied information. While conceptually this is not such a bad idea, it has been abused by servers denying information to clients other than `Mozilla' or Microsoft `Internet Explorer'. This option allows you to change the `User-Agent' line issued by Wget. Use of this option is discouraged, unless you really know what you are doing. *NOTE* that Netscape Communications Corp. has claimed that false transmissions of `Mozilla' as the `User-Agent' are a copyright infringement, which will be prosecuted. *DO NOT* misrepresent Wget as Mozilla.  
File: wget.info,  Node: FTP Options,  Next: Recursive Retrieval Options,  Prev: HTTP Options,  Up: Invoking

FTP Options
===========

`--retr-symlinks'
     Retrieve symbolic links on FTP sites as if they were plain files, i.e. don't just create links locally.

`-g on/off'
`--glob=on/off'
     Turn FTP globbing on or off. Globbing means you may use the shell-like special characters ("wildcards"), like `*', `?', `[' and `]' to retrieve more than one file from the same directory at once, like:

          wget ftp://gnjilux.cc.fer.hr/*.msg

     By default, globbing will be turned on if the URL contains a globbing character. This option may be used to turn globbing on or off permanently. You may have to quote the URL to protect it from being expanded by your shell.

     Globbing makes Wget look for a directory listing, which is system-specific. This is why it currently works only with Unix FTP servers (and the ones emulating Unix `ls' output).

`--passive-ftp'
     Use the "passive" FTP retrieval scheme, in which the client initiates the data connection. This is sometimes required for FTP to work behind firewalls.


File: wget.info,  Node: Recursive Retrieval Options,  Next: Recursive Accept/Reject Options,  Prev: FTP Options,  Up: Invoking

Recursive Retrieval Options
===========================

`-r'
`--recursive'
     Turn on recursive retrieving. *Note Recursive Retrieval:: for more details.

`-l DEPTH'
`--level=DEPTH'
     Specify recursion maximum depth level DEPTH (*Note Recursive Retrieval::). The default maximum depth is 5.

`--delete-after'
     This option tells Wget to delete every single file it downloads, *after* having done so. It is useful for pre-fetching popular pages through a proxy, e.g.:

          wget -r -nd --delete-after http://whatever.com/~popular/page/

     The `-r' option is to retrieve recursively, and `-nd' not to create directories.

`-k'
`--convert-links'
     Convert the non-relative links to relative ones locally. Only the references to the documents actually downloaded will be converted; the rest will be left unchanged.

     Note that only at the end of the download can Wget know which links have been downloaded. Because of that, much of the work done by `-k' will be performed at the end of the downloads.

`-m'
`--mirror'
     Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite recursion depth and keeps FTP directory listings. It is currently equivalent to `-r -N -l inf -nr'.

`-nr'
`--dont-remove-listing'
     Don't remove the temporary `.listing' files generated by FTP retrievals. Normally, these files contain the raw directory listings received from FTP servers. Not removing them can be useful to access the full remote file list when running a mirror, or for debugging purposes.


File: wget.info,  Node: Recursive Accept/Reject Options,  Prev: Recursive Retrieval Options,  Up: Invoking

Recursive Accept/Reject Options
===============================

`-A ACCLIST --accept ACCLIST'
`-R REJLIST --reject REJLIST'
     Specify comma-separated lists of file name suffixes or patterns to accept or reject (*Note Types of Files:: for more details).

`-D DOMAIN-LIST'
`--domains=DOMAIN-LIST'
     Set domains to be accepted and DNS looked-up, where DOMAIN-LIST is a comma-separated list. Note that it does *not* turn on `-H'. This option speeds things up, even if only one host is spanned (*Note Domain Acceptance::).

`--exclude-domains DOMAIN-LIST'
     Exclude the domains given in a comma-separated DOMAIN-LIST from DNS-lookup (*Note Domain Acceptance::).

`-L'
`--relative'
     Follow relative links only.
     Useful for retrieving a specific home page without any distractions, not even those from the same hosts (*Note Relative Links::).

`--follow-ftp'
     Follow FTP links from HTML documents. Without this option, Wget will ignore all the FTP links.

`-H'
`--span-hosts'
     Enable spanning across hosts when doing recursive retrieving (*Note All Hosts::).

`-I LIST'
`--include-directories=LIST'
     Specify a comma-separated list of directories you wish to follow when downloading (*Note Directory-Based Limits:: for more details.) Elements of LIST may contain wildcards.

`-X LIST'
`--exclude-directories=LIST'
     Specify a comma-separated list of directories you wish to exclude from download (*Note Directory-Based Limits:: for more details.) Elements of LIST may contain wildcards.

`-nh'
`--no-host-lookup'
     Disable the time-consuming DNS lookup of almost all hosts (*Note Host Checking::).

`-np'
`--no-parent'
     Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files *below* a certain hierarchy will be downloaded. *Note Directory-Based Limits:: for more details.


File: wget.info,  Node: Recursive Retrieval,  Next: Following Links,  Prev: Invoking,  Up: Top

Recursive Retrieval
*******************

GNU Wget is capable of traversing parts of the Web (or a single HTTP or FTP server), depth-first following links and directory structure. This is called "recursive" retrieving, or "recursion".

With HTTP URLs, Wget retrieves and parses the HTML document at the given URL, retrieving the files that document refers to, through markup like `href' or `src'. If the freshly downloaded file is also of type `text/html', it will be parsed and followed further.

The maximum "depth" to which the retrieval may descend is specified with the `-l' option (the default maximum depth is five layers). *Note Recursive Retrieval::.

When retrieving an FTP URL recursively, Wget will retrieve all the data from the given directory tree (including the subdirectories up to the specified depth) on the remote server, creating its mirror image locally. FTP retrieval is also limited by the `depth' parameter. By default, Wget will create a local directory tree, corresponding to the one found on the remote server.

Recursive retrieving has a number of applications, the most important of which is mirroring. It is also useful for WWW presentations, and any other occasions where slow network connections should be bypassed by storing the files locally.

You should be warned that invoking recursion may cause grave overloading on your system, because of the fast exchange of data through the network; all of this may hamper other users' work. The same stands for the foreign server you are mirroring--the more requests it gets in a row, the greater its load. Careless retrieving can also fill your file system uncontrollably, which can grind the machine to a halt.

The load can be minimized by lowering the maximum recursion level (`-l') and/or by lowering the number of retries (`-t'). You may also consider using the `-w' option to slow down your requests to the remote servers, as well as the numerous options to narrow the number of followed links (*Note Following Links::). See the sketch below for a combination along these lines.

Recursive retrieval is a good thing when used properly. Please take all precautions not to wreak havoc through carelessness.
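For instance, a deliberately conservative recursive retrieval might limit the depth, wait between requests, and cap the retries (the URL is illustrative only):

     wget -r -l 3 -w 2 -t 5 http://fly.cc.fer.hr/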
File: wget.info,  Node: Following Links,  Next: Time-Stamping,  Prev: Recursive Retrieval,  Up: Top

Following Links
***************

When retrieving recursively, one does not wish to retrieve loads of unnecessary data. Most of the time the users bear in mind exactly what they want to download, and want Wget to follow only specific links.

For example, if you wish to download the music archive from `fly.cc.fer.hr', you will not want to download all the home pages that happen to be referenced by an obscure part of the archive.

Wget possesses several mechanisms that allow you to fine-tune which links it will follow.

* Menu:

* Relative Links::         Follow relative links only.
* Host Checking::          Follow links on the same host.
* Domain Acceptance::      Check on a list of domains.
* All Hosts::              No host restrictions.
* Types of Files::         Getting only certain files.
* Directory-Based Limits:: Getting only certain directories.
* FTP Links::              Following FTP links.


File: wget.info,  Node: Relative Links,  Next: Host Checking,  Prev: Following Links,  Up: Following Links

Relative Links
==============

When only relative links are followed (option `-L'), recursive retrieving will never span hosts. No time-expensive DNS lookups will be performed, and the process will be very fast, with the minimum strain on the network. This will suit your needs often, especially when mirroring the output of various `x2html' converters, since they generally output relative links.


File: wget.info,  Node: Host Checking,  Next: Domain Acceptance,  Prev: Relative Links,  Up: Following Links

Host Checking
=============

The drawback of following the relative links solely is that humans often tend to mix them with absolute links to the very same host, and the very same page. In this mode (which is the default mode for following links) all URLs that refer to the same host will be retrieved.

The problem with this option is the aliases of the hosts and domains. Thus there is no way for Wget to know that `regoc.srce.hr' and `www.srce.hr' are the same host, or that `fly.cc.fer.hr' is the same as `fly.cc.etf.hr'. Whenever an absolute link is encountered, the host is DNS-looked-up with `gethostbyname' to check whether we are maybe dealing with the same hosts. Although the results of `gethostbyname' are cached, it is still a great slowdown, e.g. when dealing with large indices of home pages on different hosts (because each of the hosts must be DNS-resolved to see whether it just *might* be an alias of the starting host).

To avoid the overhead you may use `-nh', which will turn off DNS resolving and make Wget compare hosts literally. This will make things run much faster, but also much less reliable (e.g. `www.srce.hr' and `regoc.srce.hr' will be flagged as different hosts).

Note that modern HTTP servers allow one IP address to host several "virtual servers", each having its own directory hierarchy. Such "servers" are distinguished by their hostnames (all of which point to the same IP address); for this to work, a client must send a `Host' header, which is what Wget does. However, in that case Wget *must not* try to divine a host's "real" address, nor try to use the same hostname for each access, i.e. `-nh' must be turned on. In other words, the `-nh' option must be used to enable retrieval from virtual servers distinguished by their hostnames. As the number of such server setups grows, the behavior of `-nh' may become the default in the future.
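As a sketch, retrieving a site served from a name-based virtual server might look like the following (the host name is made up; `-nh' keeps Wget from trying to resolve it to a `real' address):

     wget -r -nh http://www.virtual.example/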
File: wget.info, Node: Domain Acceptance, Next: All Hosts, Prev: Host Checking, Up: Following Links

Domain Acceptance
=================

With the `-D' option you may specify the domains that will be followed.  Hosts whose domain is not in this list will not be DNS-resolved.  Thus you can specify `-Dmit.edu' just to make sure that *nothing outside of MIT gets looked up*.  This is very important and useful.  It also means that `-D' does *not* imply `-H' (span all hosts), which must be specified explicitly.  Feel free to use this option, since it will speed things up, with almost all the reliability of checking for all hosts.  Thus you could invoke

     wget -r -D.hr http://fly.cc.fer.hr/

to make sure that only the hosts in the `.hr' domain get DNS-looked-up for being equal to `fly.cc.fer.hr'.  So `fly.cc.etf.hr' will be checked (only once!) and found equal, but `www.gnu.ai.mit.edu' will not even be checked.

Of course, domain acceptance can be used to limit the retrieval to particular domains with spanning of hosts in them, but then you must specify `-H' explicitly.  E.g.:

     wget -r -H -Dmit.edu,stanford.edu http://www.mit.edu/

will start with `http://www.mit.edu/', following links across MIT and Stanford.

If there are domains you want to exclude specifically, you can do it with `--exclude-domains', which accepts the same type of arguments as `-D', but will *exclude* all the listed domains.  For example, if you want to download all the hosts from the `foo.edu' domain, with the exception of `sunsite.foo.edu', you can do it like this:

     wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu http://www.foo.edu/

File: wget.info, Node: All Hosts, Next: Types of Files, Prev: Domain Acceptance, Up: Following Links

All Hosts
=========

When `-H' is specified without `-D', all hosts are freely spanned.  There are no restrictions whatsoever as to what part of the net Wget will go to fetch documents, other than the maximum retrieval depth.  If a page references `www.yahoo.com', so be it.  Such an option is rarely useful by itself.

File: wget.info, Node: Types of Files, Next: Directory-Based Limits, Prev: All Hosts, Up: Following Links

Types of Files
==============

When downloading material from the web, you will often want to restrict the retrieval to only certain file types.  For example, if you are interested in downloading GIFs, you will not be overjoyed to get loads of PostScript documents, and vice versa.

Wget offers two options to deal with this problem.  Each option description lists a short name, a long name, and the equivalent command in `.wgetrc'.

`-A ACCLIST'
`--accept ACCLIST'
`accept = ACCLIST'
     The argument to the `--accept' option is a list of file suffixes or patterns that Wget will download during recursive retrieval.  A suffix is the ending part of a file name, and consists of "normal" letters, e.g. `gif' or `.jpg'.  A matching pattern contains shell-like wildcards, e.g. `books*' or `zelazny*196[0-9]*'.

     So, specifying `wget -A gif,jpg' will make Wget download only the files ending with `gif' or `jpg', i.e. GIFs and JPEGs.  On the other hand, `wget -A "zelazny*196[0-9]*"' will download only files beginning with `zelazny' and containing numbers from 1960 to 1969 anywhere within.  Look up the manual of your shell for a description of how pattern matching works.

     Of course, any number of suffixes and patterns can be combined into a comma-separated list, and given as an argument to `-A'.
`-R REJLIST'
`--reject REJLIST'
`reject = REJLIST'
     The `--reject' option works the same way as `--accept', only its logic is the reverse; Wget will download all files *except* the ones matching the suffixes (or patterns) in the list.

     So, if you want to download a whole page except for the cumbersome MPEGs and .AU files, you can use `wget -R mpg,mpeg,au'.  Analogously, to download all files except the ones beginning with `bjork', use `wget -R "bjork*"'.  The quotes are to prevent expansion by the shell.

The `-A' and `-R' options may be combined to achieve even better fine-tuning of which files to retrieve.  E.g. `wget -A "*zelazny*" -R .ps' will download all the files having `zelazny' as a part of their name, but *not* the PostScript files.

Note that these two options do not affect the downloading of HTML files; Wget must load all the HTMLs to know where to go at all--recursive retrieval would make no sense otherwise.

File: wget.info, Node: Directory-Based Limits, Next: FTP Links, Prev: Types of Files, Up: Following Links

Directory-Based Limits
======================

Regardless of other link-following facilities, it is often useful to restrict the files to be retrieved based on the directories those files are placed in.  There can be many reasons for this--the home pages may be organized in a reasonable directory structure, or some directories may contain useless information, e.g. the `/cgi-bin' or `/dev' directories.

Wget offers three different options to deal with this requirement.  Each option description lists a short name, a long name, and the equivalent command in `.wgetrc'.  An example combining two of them follows the list.

`-I LIST'
`--include LIST'
`include_directories = LIST'
     The `-I' option accepts a comma-separated list of directories included in the retrieval.  Any other directories will simply be ignored.  The directories are absolute paths.

     So, if you wish to download from `http://host/people/bozo/' following only links to bozo's colleagues in the `/people' directory and the bogus scripts in `/cgi-bin', you can specify:

          wget -I /people,/cgi-bin http://host/people/bozo/

`-X LIST'
`--exclude LIST'
`exclude_directories = LIST'
     The `-X' option is exactly the reverse of `-I'--this is a list of directories *excluded* from the download.  E.g. if you do not want Wget to download things from the `/cgi-bin' directory, specify `-X /cgi-bin' on the command line.

     As with `-A'/`-R', these two options can be combined for finer tuning of which subdirectories are downloaded.  E.g. if you want to load all the files from the `/pub' hierarchy except for `/pub/worthless', specify `-I/pub -X/pub/worthless'.

`-np'
`--no-parent'
`no_parent = on'
     The simplest, and often very useful, way of limiting directories is disallowing retrieval of links that refer to the hierarchy *above* the beginning directory, i.e. disallowing ascent to the parent directory/directories.

     The `--no-parent' option (short `-np') is useful in this case.  Using it guarantees that you will never leave the existing hierarchy.  Supposing you issue Wget with:

          wget -r --no-parent http://somehost/~luzer/my-archive/

     You may rest assured that none of the references to `/~his-girls-homepage/' or `/~luzer/all-my-mpegs/' will be followed.  Only the archive you are interested in will be downloaded.  Essentially, `--no-parent' is similar to `-I/~luzer/my-archive', only it handles redirections in a more intelligent fashion.
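As a further sketch (host and directory names hypothetical), the include and exclude lists can of course appear on one command line:

     wget -r -I/pub -X/pub/worthless ftp://ftp.example.com/

This retrieves the `/pub' hierarchy recursively while skipping everything under `/pub/worthless', exactly as described above.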
File: wget.info, Node: FTP Links, Prev: Directory-Based Limits, Up: Following Links

Following FTP Links
===================

The rules for FTP are somewhat specific, as it is necessary for them to be.  FTP links in HTML documents are often included for purposes of reference, and it is often inconvenient to download them by default.

To have FTP links followed from HTML documents, you need to specify the `--follow-ftp' option.  Having done that, FTP links will span hosts regardless of the `-H' setting.  This is logical, as FTP links rarely point to the same host where the HTTP server resides.  For similar reasons, the `-L' option has no effect on such downloads.  On the other hand, domain acceptance (`-D') and suffix rules (`-A' and `-R') apply normally.

Also note that followed links to FTP directories will not be retrieved recursively further.

File: wget.info, Node: Time-Stamping, Next: Startup File, Prev: Following Links, Up: Top

Time-Stamping
*************

One of the most important aspects of mirroring information from the Internet is updating your archives.  Downloading the whole archive again and again, just to replace a few changed files, is expensive in terms of wasted bandwidth, money, and the time needed for the update.  This is why all the mirroring tools offer the option of incremental updating.

Such an updating mechanism means that the remote server is scanned in search of "new" files.  Only those new files will be downloaded in place of the old ones.  A file is considered new if one of these two conditions is met:

  1. A file of that name does not already exist locally.

  2. A file of that name does exist, but the remote file was modified more recently than the local file.

To implement this, the program needs to be aware of the time of last modification of both remote and local files.  Such information is known as the "time-stamps".

Time-stamping in GNU Wget is turned on with the `--timestamping' (`-N') option, or through the `timestamping = on' directive in `.wgetrc'.  With this option, for each file it intends to download, Wget will check whether a local file of the same name exists.  If it does, and the remote file is older, Wget will not download it.  If the local file does not exist, or the sizes of the files do not match, Wget will download the remote file no matter what the time-stamps say.

* Menu:

* Time-Stamping Usage::
* HTTP Time-Stamping Internals::
* FTP Time-Stamping Internals::

File: wget.info, Node: Time-Stamping Usage, Next: HTTP Time-Stamping Internals, Prev: Time-Stamping, Up: Time-Stamping

Time-Stamping Usage
===================

The usage of time-stamping is simple.  Say you would like to download a file so that it keeps its date of modification.

     wget -S http://www.gnu.ai.mit.edu/

A simple `ls -l' shows that the time stamp on the local file matches the state of the `Last-Modified' header, as returned by the server.  As you can see, the time-stamping info is preserved locally, even without `-N'.

Several days later, you would like Wget to check if the remote file has changed, and download it if it has.

     wget -N http://www.gnu.ai.mit.edu/

Wget will ask the server for the last-modified date.  If the local file is newer, the remote file will not be re-fetched.  However, if the remote file is more recent, Wget will proceed to fetch it normally.

The same goes for FTP.  For example:

     wget ftp://ftp.ifi.uio.no/pub/emacs/gnus/*

`ls' will show that the timestamps are set according to the state on the remote server.
Reissuing the command with `-N' will make Wget re-fetch *only* the files that have been modified.

In both HTTP and FTP retrieval Wget will time-stamp the local file correctly (with or without `-N') if it gets the stamps, i.e. gets the directory listing for FTP or the `Last-Modified' header for HTTP.

If you wished to mirror the GNU archive every week, you would use the following command every week:

     wget --timestamping -r ftp://prep.ai.mit.edu/pub/gnu/

File: wget.info, Node: HTTP Time-Stamping Internals, Next: FTP Time-Stamping Internals, Prev: Time-Stamping Usage, Up: Time-Stamping

HTTP Time-Stamping Internals
============================

Time-stamping in HTTP is implemented by checking the `Last-Modified' header.  If you wish to retrieve the file `foo.html' through HTTP, Wget will check whether `foo.html' exists locally.  If it doesn't, `foo.html' will be retrieved unconditionally.

If the file does exist locally, Wget will first check its local time-stamp (similar to the way `ls -l' checks it), and then send a `HEAD' request to the remote server, requesting the information on the remote file.  The `Last-Modified' header is examined to find which file was modified more recently (which makes it "newer").  If the remote file is newer, it will be downloaded; if it is older, Wget will give up.(1)

Arguably, HTTP time-stamping should be implemented using the `If-Modified-Since' request.

---------- Footnotes ----------

(1) As an additional check, Wget will look at the `Content-Length' header, and compare the sizes; if they are not the same, the remote file will be downloaded no matter what the time-stamp says.

File: wget.info, Node: FTP Time-Stamping Internals, Prev: HTTP Time-Stamping Internals, Up: Time-Stamping

FTP Time-Stamping Internals
===========================

In theory, FTP time-stamping works much the same as HTTP, only FTP has no headers--time-stamps must be obtained from the directory listings.

For each directory from which files must be retrieved, Wget will use the `LIST' command to get the listing.  It will try to analyze the listing, assuming that it is a Unix `ls -l' listing, and extract the time-stamps.  The rest is exactly the same as for HTTP.

The assumption that every directory listing is a Unix-style listing may sound extremely constraining, but in practice it is not, as many non-Unix FTP servers use the Unixoid listing format because most (all?) of the clients understand it.  Bear in mind that RFC 959 defines no standard way to get a file list, let alone the time-stamps.  We can only hope that a future standard will define this.

Another non-standard solution is the `MDTM' command, supported by some FTP servers (including the popular `wu-ftpd'), which returns the exact time of the specified file.  Wget may support this command in the future.

File: wget.info, Node: Startup File, Next: Examples, Prev: Time-Stamping, Up: Top

Startup File
************

Once you know how to change the default settings of Wget through command-line arguments, you may wish to make some of those settings permanent.  You can do that in a convenient way by creating the Wget startup file--`.wgetrc'.

While `.wgetrc' is the "main" initialization file, it is convenient to have a special facility for storing passwords.  Thus Wget reads and interprets the contents of `$HOME/.netrc', if it finds it.  You can find the `.netrc' format described in your system manuals.

Wget reads `.wgetrc' upon startup, recognizing a limited set of commands.

* Menu:

* Wgetrc Location::   Location of various wgetrc files.
* Wgetrc Syntax::     Syntax of wgetrc.
* Wgetrc Commands::   List of available commands.
* Sample Wgetrc::     A wgetrc example.

File: wget.info, Node: Wgetrc Location, Next: Wgetrc Syntax, Prev: Startup File, Up: Startup File

Wgetrc Location
===============

When initializing, Wget will look for a "global" startup file, `/usr/local/etc/wgetrc' by default (or some prefix other than `/usr/local', if Wget was not installed there), and read commands from there, if it exists.

Then it will look for the user's file.  If the environment variable `WGETRC' is set, Wget will try to load that file.  Failing that, no further attempts will be made.  If `WGETRC' is not set, Wget will try to load `$HOME/.wgetrc'.

The fact that the user's settings are loaded after the system-wide ones means that in case of collision the user's wgetrc *overrides* the system-wide wgetrc (in `/usr/local/etc/wgetrc' by default).  Fascist admins, away!

File: wget.info, Node: Wgetrc Syntax, Next: Wgetrc Commands, Prev: Wgetrc Location, Up: Startup File

Wgetrc Syntax
=============

The syntax of a wgetrc command is simple:

     variable = value

The "variable" will also be called "command".  Valid "values" are different for different commands.

The commands are case-insensitive and underscore-insensitive.  Thus `DIr__PrefiX' is the same as `dirprefix'.  Empty lines, lines beginning with `#' and lines containing white-space only are discarded.

Commands that expect a comma-separated list will clear the list on an empty command.  So, if you wish to reset the rejection list specified in the global `wgetrc', you can do it with:

     reject =

File: wget.info, Node: Wgetrc Commands, Next: Sample Wgetrc, Prev: Wgetrc Syntax, Up: Startup File

Wgetrc Commands
===============

The complete set of commands is listed below, the letter after `=' denoting the value the command takes.  It is `on/off' for `on' or `off' (which can also be `1' or `0'), STRING for any non-empty string or N for a positive integer.  For example, you may specify `use_proxy = off' to disable use of proxy servers by default.  You may use `inf' for infinite values, where appropriate.

Most of the commands have their equivalent command-line option (*Note Invoking::), except some more obscure or rarely used ones.

accept/reject = STRING
     Same as `-A'/`-R' (*Note Types of Files::).

add_hostdir = on/off
     Enable/disable host-prefixed file names.  `-nH' disables it.

continue = on/off
     Enable/disable continuation of the retrieval, the same as `-c' (which enables it).
background = on/off
     Enable/disable going to background, the same as `-b' (which enables it).

base = STRING
     Set base for relative URLs, the same as `-B'.

cache = on/off
     When set to off, disallow server-caching.  See the `-C' option.

convert_links = on/off
     Convert non-relative links locally.  The same as `-k'.

cut_dirs = N
     Ignore N remote directory components.

debug = on/off
     Debug mode, same as `-d'.

delete_after = on/off
     Delete after download, the same as `--delete-after'.

dir_prefix = STRING
     Top of directory tree, the same as `-P'.

dirstruct = on/off
     Turning dirstruct on or off, the same as `-x' or `-nd', respectively.

domains = STRING
     Same as `-D' (*Note Domain Acceptance::).

dot_bytes = N
     Specify the number of bytes "contained" in a dot, as seen throughout the retrieval (1024 by default).  You can postfix the value with `k' or `m', representing kilobytes and megabytes, respectively.  With dot settings you can tailor the dot retrieval to suit your needs, or you can use the predefined "styles" (*Note Download Options::).

dots_in_line = N
     Specify the number of dots that will be printed in each line throughout the retrieval (50 by default).

dot_spacing = N
     Specify the number of dots in a single cluster (10 by default).

dot_style = STRING
     Specify the dot retrieval "style", as with `--dot-style'.

exclude_directories = STRING
     Specify a comma-separated list of directories you wish to exclude from download, the same as `-X' (*Note Directory-Based Limits::).

exclude_domains = STRING
     Same as `--exclude-domains' (*Note Domain Acceptance::).

follow_ftp = on/off
     Follow FTP links from HTML documents, the same as `-f'.

force_html = on/off
     If set to on, force the input filename to be regarded as an HTML document, the same as `-F'.

ftp_proxy = STRING
     Use STRING as FTP proxy, instead of the one specified in the environment.

glob = on/off
     Turn globbing on/off, the same as `-g'.

header = STRING
     Define an additional header, like `--header'.

http_passwd = STRING
     Set HTTP password.

http_proxy = STRING
     Use STRING as HTTP proxy, instead of the one specified in the environment.

http_user = STRING
     Set HTTP user to STRING.

ignore_length = on/off
     When set to on, ignore the `Content-Length' header; the same as `--ignore-length'.

include_directories = STRING
     Specify a comma-separated list of directories you wish to follow when downloading, the same as `-I'.

input = STRING
     Read the URLs from STRING, like `-i'.

kill_longer = on/off
     Consider data longer than specified in the `Content-Length' header as invalid (and retry getting it).  The default behaviour is to save as much data as there is, provided there is more than or equal to the value in `Content-Length'.

logfile = STRING
     Set logfile, the same as `-o'.

login = STRING
     Your user name on the remote machine, for FTP.  Defaults to `anonymous'.

mirror = on/off
     Turn mirroring on/off.  The same as `-m'.

netrc = on/off
     Turn reading netrc on or off.

noclobber = on/off
     Same as `-nc'.

no_parent = on/off
     Disallow retrieving outside the directory hierarchy, like `--no-parent' (*Note Directory-Based Limits::).

no_proxy = STRING
     Use STRING as the comma-separated list of domains to avoid in proxy loading, instead of the one specified in the environment.

output_document = STRING
     Set the output filename, the same as `-O'.

passive_ftp = on/off
     Set passive FTP, the same as `--passive-ftp'.

passwd = STRING
     Set your FTP password to STRING.  Without this setting, the password defaults to `username@hostname.domainname'.

proxy_user = STRING
     Set proxy authentication user name to STRING, like `--proxy-user'.
proxy_passwd = STRING
     Set proxy authentication password to STRING, like `--proxy-passwd'.

quiet = on/off
     Quiet mode, the same as `-q'.

quota = QUOTA
     Specify the download quota, which is useful to put in the global wgetrc.  When the download quota is specified, Wget will stop retrieving after the download sum has become greater than the quota.  The quota can be specified in bytes (default), kbytes (`k' appended) or mbytes (`m' appended).  Thus `quota = 5m' will set the quota to 5 mbytes.  Note that the user's startup file overrides system settings.

reclevel = N
     Recursion level, the same as `-l'.

recursive = on/off
     Recursive on/off, the same as `-r'.

relative_only = on/off
     Follow only relative links, the same as `-L' (*Note Relative Links::).

remove_listing = on/off
     If set to on, remove FTP listings downloaded by Wget.  Setting it to off is the same as `-nr'.

retr_symlinks = on/off
     When set to on, retrieve symbolic links as if they were plain files; the same as `--retr-symlinks'.

robots = on/off
     Use (or not) the `/robots.txt' file (*Note Robots::).  Be sure to know what you are doing before changing the default (which is `on').

server_response = on/off
     Choose whether or not to print the HTTP and FTP server responses, the same as `-S'.

simple_host_check = on/off
     Same as `-nh' (*Note Host Checking::).

span_hosts = on/off
     Same as `-H'.

timeout = N
     Set timeout value, the same as `-T'.

timestamping = on/off
     Turn timestamping on/off.  The same as `-N' (*Note Time-Stamping::).

tries = N
     Set number of retries per URL, the same as `-t'.

use_proxy = on/off
     Turn proxy support on/off.  The same as `-Y'.

verbose = on/off
     Turn verbose on/off, the same as `-v'/`-nv'.

wait = N
     Wait N seconds between retrievals, the same as `-w'.

File: wget.info, Node: Sample Wgetrc, Prev: Wgetrc Commands, Up: Startup File

Sample Wgetrc
=============

This is the sample initialization file, as given in the distribution.  It is divided into two sections--one for global usage (suitable for the global startup file), and one for local usage (suitable for `$HOME/.wgetrc').  Be careful about the things you change.

Note that all the lines are commented out.  For any line to have effect, you must remove the `#' prefix at the beginning of the line.

     ###
     ### Sample Wget initialization file .wgetrc
     ###

     ## You can use this file to change the default behaviour of wget or to
     ## avoid having to type many many command-line options.  This file does
     ## not contain a comprehensive list of commands -- look at the manual
     ## to find out what you can put into this file.
     ##
     ## Wget initialization file can reside in /usr/local/etc/wgetrc
     ## (global, for all users) or $HOME/.wgetrc (for a single user).
     ##
     ## To use any of the settings in this file, you will have to uncomment
     ## them (and probably change them).

     ##
     ## Global settings (useful for setting up in /usr/local/etc/wgetrc).
     ## Think well before you change them, since they may reduce wget's
     ## functionality, and make it behave contrary to the documentation:
     ##

     # You can set retrieve quota for beginners by specifying a value
     # optionally followed by 'K' (kilobytes) or 'M' (megabytes).  The
     # default quota is unlimited.
     #quota = inf

     # You can lower (or raise) the default number of retries when
     # downloading a file (default is 20).
     #tries = 20

     # Lowering the maximum depth of the recursive retrieval is handy to
     # prevent newbies from going too "deep" when they unwittingly start
     # the recursive retrieval.  The default is 5.
     #reclevel = 5

     # Many sites are behind firewalls that do not allow initiation of
     # connections from the outside.
     # On these sites you have to use the `passive' feature of FTP.  If you
     # are behind such a firewall, you can turn this on to make Wget use
     # passive FTP by default.
     #passive_ftp = off

     ##
     ## Local settings (for a user to set in his $HOME/.wgetrc).  It is
     ## *highly* undesirable to put these settings in the global file, since
     ## they are potentially dangerous to "normal" users.
     ##
     ## Even when setting up your own ~/.wgetrc, you should know what you
     ## are doing before doing so.
     ##

     # Set this to on to use timestamping by default:
     #timestamping = off

     # It is a good idea to make Wget send your email address in a `From:'
     # header with your request (so that server administrators can contact
     # you in case of errors).  Wget does *not* send `From:' by default.
     #header = From: Your Name

     # You can set up other headers, like Accept-Language.  Accept-Language
     # is *not* sent by default.
     #header = Accept-Language: en

     # You can set the default proxy for Wget to use.  It will override the
     # value in the environment.
     #http_proxy = http://proxy.yoyodyne.com:18023/

     # If you do not want to use proxy at all, set this to off.
     #use_proxy = on

     # You can customize the retrieval outlook.  Valid options are default,
     # binary, mega and micro.
     #dot_style = default

     # Setting this to off makes Wget not download /robots.txt.  Be sure to
     # know *exactly* what /robots.txt is and how it is used before changing
     # the default!
     #robots = on

     # It can be useful to make Wget wait between connections.  Set this to
     # the number of seconds you want Wget to wait.
     #wait = 0

     # You can force creating directory structure, even if a single file is
     # being retrieved, by setting this to on.
     #dirstruct = off

     # You can turn on recursive retrieving by default (don't do this if
     # you are not sure you know what it means) by setting this to on.
     #recursive = off

     # To have Wget follow FTP links from HTML files by default, set this
     # to on:
     #follow_ftp = off

File: wget.info, Node: Examples, Next: Various, Prev: Startup File, Up: Top

Examples
********

The examples are classified into three sections for the sake of clarity.  The first section is a tutorial for beginners.  The second section explains some of the more complex program features.  The third section contains advice for mirror administrators, as well as even more complex features (that some would call perverted).

* Menu:

* Simple Usage::     Simple, basic usage of the program.
* Advanced Usage::   Advanced techniques of usage.
* Guru Usage::       Mirroring and the hairy stuff.

File: wget.info, Node: Simple Usage, Next: Advanced Usage, Prev: Examples, Up: Examples

Simple Usage
============

   * Say you want to download a URL.  Just type:

          wget http://fly.cc.fer.hr/

     The response will be something like:

          --13:30:45--  http://fly.cc.fer.hr:80/en/
                     => `index.html'
          Connecting to fly.cc.fer.hr:80... connected!
          HTTP request sent, awaiting response... 200 OK
          Length: 4,694 [text/html]

              0K -> ....                                                   [100%]

          13:30:46 (23.75 KB/s) - `index.html' saved [4694/4694]

   * But what will happen if the connection is slow, and the file is lengthy?  The connection will probably fail before the whole file is retrieved, more than once.  In this case, Wget will try getting the file until it either gets the whole of it, or exceeds the default number of retries (this being 20).  It is easy to change the number of tries to 45, to ensure that the whole file will arrive safely:

          wget --tries=45 http://fly.cc.fer.hr/jpg/flyweb.jpg

   * Now let's leave Wget to work in the background, and write its progress to the log file `log'.
     It is tiring to type `--tries', so we shall use `-t'.

          wget -t 45 -o log http://fly.cc.fer.hr/jpg/flyweb.jpg &

     The ampersand at the end of the line makes sure that Wget works in the background.  To unlimit the number of retries, use `-t inf'.

   * The usage of FTP is just as simple.  Wget will take care of the login and password.

          $ wget ftp://gnjilux.cc.fer.hr/welcome.msg
          --10:08:47--  ftp://gnjilux.cc.fer.hr:21/welcome.msg
                     => `welcome.msg'
          Connecting to gnjilux.cc.fer.hr:21... connected!
          Logging in as anonymous ... Logged in!
          ==> TYPE I ... done.  ==> CWD not needed.
          ==> PORT ... done.    ==> RETR welcome.msg ... done.
          Length: 1,340 (unauthoritative)

              0K -> .                                                      [100%]

          10:08:48 (1.28 MB/s) - `welcome.msg' saved [1340]

   * If you specify a directory, Wget will retrieve the directory listing, parse it and convert it to HTML.  Try:

          wget ftp://prep.ai.mit.edu/pub/gnu/
          lynx index.html

File: wget.info, Node: Advanced Usage, Next: Guru Usage, Prev: Simple Usage, Up: Examples

Advanced Usage
==============

   * You would like to read the list of URLs from a file?  Not a problem with that:

          wget -i file

     If you specify `-' as the file name, the URLs will be read from standard input.

   * Create a mirror image of the GNU WWW site (with the same directory structure the original has) with only one try per document, saving the log of the activities to `gnulog':

          wget -r -t1 http://www.gnu.ai.mit.edu/ -o gnulog

   * Retrieve the first layer of Yahoo links:

          wget -r -l1 http://www.yahoo.com/

   * Retrieve the index.html of `www.lycos.com', showing the original server headers:

          wget -S http://www.lycos.com/

   * Save the server headers with the file:

          wget -s http://www.lycos.com/
          more index.html

   * Retrieve the first two levels of `wuarchive.wustl.edu', saving them to `/tmp':

          wget -P/tmp -l2 ftp://wuarchive.wustl.edu/

   * You want to download all the GIFs from an HTTP directory.  `wget http://host/dir/*.gif' doesn't work, since HTTP retrieval does not support globbing.  In that case, use:

          wget -r -l1 --no-parent -A.gif http://host/dir/

     It is a bit of a kludge, but it works.  `-r -l1' means to retrieve recursively (*Note Recursive Retrieval::), with a maximum depth of 1.  `--no-parent' means that references to the parent directory are ignored (*Note Directory-Based Limits::), and `-A.gif' means to download only the GIF files.  `-A "*.gif"' would have worked too.

   * Suppose you were in the middle of downloading when Wget was interrupted.  Now you do not want to clobber the files already present.  It would be:

          wget -nc -r http://www.gnu.ai.mit.edu/

   * If you want to encode your own username and password to HTTP or FTP, use the appropriate URL syntax (*Note URL Format::).

          wget ftp://hniksic:mypassword@jagor.srce.hr/.emacs

   * If you do not like the default retrieval visualization (1K dots with 10 dots per cluster and 50 dots per line), you can customize it through dot settings (*Note Wgetrc Commands::).  For example, many people like the "binary" style of retrieval, with 8K dots and 512K lines:

          wget --dot-style=binary ftp://prep.ai.mit.edu/pub/gnu/README

     You can experiment with other styles, like:

          wget --dot-style=mega ftp://ftp.xemacs.org/pub/xemacs/xemacs-20.4/xemacs-20.4.tar.gz
          wget --dot-style=micro http://fly.cc.fer.hr/

     To make these settings permanent, put them in your `.wgetrc', as described before (*Note Sample Wgetrc::).

File: wget.info, Node: Guru Usage, Prev: Advanced Usage, Up: Examples

Guru Usage
==========

   * If you wish Wget to keep a mirror of a page (or FTP subdirectories), use `--mirror' (`-m'), which is the shorthand for `-r -N'.
     You can put Wget in the crontab file asking it to recheck a site each Sunday:

          crontab
          0 0 * * 0 wget --mirror ftp://ftp.xemacs.org/pub/xemacs/ -o /home/me/weeklog

   * You may wish to do the same with someone's home page.  But you do not want to download all those images--you're only interested in HTML.

          wget --mirror -A.html http://www.w3.org/

   * But what about mirroring the hosts networkologically close to you?  It seems so awfully slow because of all that DNS resolving.  Just use `-D' (*Note Domain Acceptance::).

          wget -rN -Dsrce.hr http://www.srce.hr/

     Now Wget will correctly find out that `regoc.srce.hr' is the same as `www.srce.hr', but will not even take into consideration the link to `www.mit.edu'.

   * You have a presentation and would like the dumb absolute links to be converted to relative?  Use `-k':

          wget -k -r URL

   * You would like the output documents to go to standard output instead of to files?  OK, but Wget will automatically shut up (turn on `--quiet') to prevent mixing of Wget output and the retrieved documents.

          wget -O - http://jagor.srce.hr/ http://www.srce.hr/

     You can also combine the two options and make weird pipelines to retrieve the documents from remote hotlists:

          wget -O - http://cool.list.com/ | wget --force-html -i -

File: wget.info, Node: Various, Next: Appendices, Prev: Examples, Up: Top

Various
*******

This chapter contains all the stuff that could not fit anywhere else.

* Menu:

* Proxies::          Support for proxy servers.
* Distribution::     Getting the latest version.
* Mailing List::     Wget mailing list for announcements and discussion.
* Reporting Bugs::   How and where to report bugs.
* Portability::      The systems Wget works on.
* Signals::          Signal-handling performed by Wget.

File: wget.info, Node: Proxies, Next: Distribution, Prev: Various, Up: Various

Proxies
=======

"Proxies" are special-purpose HTTP servers designed to transfer data from remote servers to local clients.  One typical use of proxies is lightening network load for users behind a slow connection.  This is achieved by channeling all HTTP and FTP requests through the proxy, which caches the transferred data.  When a cached resource is requested again, the proxy will return the data from its cache.  Another use for proxies is for companies that separate (for security reasons) their internal networks from the rest of the Internet.  In order to obtain information from the Web, their users connect and retrieve remote data using an authorized proxy.

Wget supports proxies for both HTTP and FTP retrievals.  The standard way to specify proxy location, which Wget recognizes, is using the following environment variables:

`http_proxy'
     This variable should contain the URL of the proxy for HTTP connections.

`ftp_proxy'
     This variable should contain the URL of the proxy for FTP connections.  It is quite common that `http_proxy' and `ftp_proxy' are set to the same URL.

`no_proxy'
     This variable should contain a comma-separated list of domain extensions the proxy should *not* be used for.  For instance, if the value of `no_proxy' is `.mit.edu', the proxy will not be used to retrieve documents from MIT.

In addition to the environment variables, proxy location and settings may be specified from within Wget itself.

`-Y on/off'
`--proxy=on/off'
`proxy = on/off'
     This option may be used to turn proxy support on or off.  Proxy support is on by default, provided that the appropriate environment variables are set.

`http_proxy = URL'
`ftp_proxy = URL'
`no_proxy = STRING'
     These startup file variables allow you to override the proxy settings specified by the environment.
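For example, a minimal `.wgetrc' sketch that overrides the environment might look like this (the proxy host and port are hypothetical):

     http_proxy = http://proxy.example.com:8080/
     ftp_proxy = http://proxy.example.com:8080/
     no_proxy = .example.com

With these settings, hosts under `.example.com' are fetched directly, while all other requests go through the proxy.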
Some proxy servers require authorization to enable you to use them.  The authorization consists of a "username" and a "password", which must be sent by Wget.  As with HTTP authorization, several authentication schemes exist.  For proxy authorization, only the `Basic' authentication scheme is currently implemented.

You may specify your username and password either through the proxy URL or through the command-line options.  Assuming that the company's proxy is located at `proxy.company.com' at port 8001, a proxy URL location containing authorization data might look like this:

     http://hniksic:mypassword@proxy.company.com:8001/

Alternatively, you may use the `proxy-user' and `proxy-password' options, and the equivalent `.wgetrc' settings `proxy_user' and `proxy_passwd', to set the proxy username and password.

File: wget.info, Node: Distribution, Next: Mailing List, Prev: Proxies, Up: Various

Distribution
============

Like all GNU utilities, the latest version of Wget can be found at the master GNU archive site prep.ai.mit.edu, and its mirrors.  For example, Wget 1.5.3 can be found at `ftp://prep.ai.mit.edu/pub/gnu/wget-1.5.3.tar.gz'

File: wget.info, Node: Mailing List, Next: Reporting Bugs, Prev: Distribution, Up: Various

Mailing List
============

Wget has its own mailing list, thanks to Karsten Thygesen.  The mailing list is for discussion of Wget features, for reporting Wget bugs (those that you think may be of interest to the public), and for announcements.  You are welcome to subscribe.  The more people on the list, the better!

To subscribe, send mail with the magic word `subscribe' in the subject line; unsubscribing works the same way.  The mailing list is archived at `http://fly.cc.fer.hr/archive/wget'.

File: wget.info, Node: Reporting Bugs, Next: Portability, Prev: Mailing List, Up: Various

Reporting Bugs
==============

You are welcome to send bug reports about GNU Wget to the maintainer.  The bugs that you think are of interest to the public (i.e. more people should be informed about them) can be Cc-ed to the mailing list.

Before actually submitting a bug report, please try to follow a few simple guidelines.

  1. Please try to ascertain that the behaviour you see really is a bug.  If Wget crashes, it's a bug.  If Wget does not behave as documented, it's a bug.  If things work strangely, but you are not sure about the way they are supposed to work, it might well be a bug.

  2. Try to repeat the bug in as simple circumstances as possible.  E.g. if Wget crashes on `wget -rLl0 -t5 -Y0 http://yoyodyne.com -o /tmp/log', you should try to see if it will crash with a simpler set of options.

     Also, while I will probably be interested to know the contents of your `.wgetrc' file, just dumping it into the debug message is probably a bad idea.  Instead, you should first try to see if the bug repeats with `.wgetrc' moved out of the way.  Only if it turns out that the `.wgetrc' settings affect the bug should you mail me the relevant parts of the file.

  3. Please start Wget with the `-d' option and send the log (or the relevant parts of it).  If Wget was compiled without debug support, recompile it.  It is *much* easier to trace bugs with debug support on.

  4. If Wget has crashed, try to run it in a debugger, e.g. `gdb `which wget` core' and type `where' to get the backtrace.

  5. Find where the bug is, fix it and send me the patches.
:-)

File: wget.info, Node: Portability, Next: Signals, Prev: Reporting Bugs, Up: Various

Portability
===========

Since Wget uses GNU Autoconf for building and configuring, and avoids using "special" ultra-mega-cool features of any particular Unix, it should compile (and work) on all common Unix flavors.

Various Wget versions have been compiled and tested under many kinds of Unix systems, including Solaris, Linux, SunOS, OSF (aka Digital Unix), Ultrix, *BSD, IRIX, and others; refer to the file `MACHINES' in the distribution directory for a comprehensive list.  If you compile it on an architecture not listed there, please let me know so I can update it.

Wget should also compile on the other Unix systems not listed in `MACHINES'.  If it doesn't, please let me know.

Thanks to kind contributors, this version of Wget compiles and works on Microsoft Windows 95 and Windows NT platforms.  It has been compiled successfully using MS Visual C++ 4.0, Watcom, and Borland C compilers, with Winsock as networking software.  Naturally, it lacks some of the features available on Unix, but it should work as a substitute for people stuck with Windows.  Note that the Windows port is *neither tested nor maintained* by me--all questions and problems should be reported to the Wget mailing list, where the maintainers will look at them.

File: wget.info, Node: Signals, Prev: Portability, Up: Various

Signals
=======

Since the purpose of Wget is background work, it catches the hangup signal (`SIGHUP') and ignores it.  If the output was on standard output, it will be redirected to a file named `wget-log'.  Otherwise, `SIGHUP' is ignored.  This is convenient when you wish to redirect the output of Wget after having started it.

     $ wget http://www.ifi.uio.no/~larsi/gnus.tar.gz &
     $ kill -HUP %%   # Redirect the output to wget-log

Other than that, Wget will not try to interfere with signals in any way.  `C-c', `kill -TERM' and `kill -KILL' should kill it alike.

File: wget.info, Node: Appendices, Next: Copying, Prev: Various, Up: Top

Appendices
**********

This chapter contains some references I consider useful, like the Robots Exclusion Standard specification, as well as a list of contributors to GNU Wget.

* Menu:

* Robots::                    Wget as a WWW robot.
* Security Considerations::   Security with Wget.
* Contributors::              People who helped.

File: wget.info, Node: Robots, Next: Security Considerations, Prev: Appendices, Up: Appendices

Robots
======

Since Wget is able to traverse the web, it counts as one of the Web "robots".  Thus Wget understands the "Robots Exclusion Standard" (RES)--contents of `/robots.txt', used by server administrators to shield parts of their systems from the wanderings of Wget.

Norobots support is turned on only when retrieving recursively, and *never* for the first page.  Thus, you may issue:

     wget -r http://fly.cc.fer.hr/

First the index of fly.cc.fer.hr will be downloaded.  If Wget finds anything worth downloading on the same host, only *then* will it load the robots, and decide whether or not to load the links after all.  `/robots.txt' is loaded only once per host.  Wget does not support the robots `META' tag.

The description of the norobots standard was written, and is maintained, by Martijn Koster.  With his permission, I contribute a (slightly modified) texified version of the RES.
* Menu:

* Introduction to RES::
* RES Format::
* User-Agent Field::
* Disallow Field::
* Norobots Examples::

File: wget.info, Node: Introduction to RES, Next: RES Format, Prev: Robots, Up: Robots

Introduction to RES
-------------------

"WWW Robots" (also called "wanderers" or "spiders") are programs that traverse many pages in the World Wide Web by recursively retrieving linked pages.  For more information see the robots page.

In 1993 and 1994 there were occasions when robots visited WWW servers where they weren't welcome, for various reasons.  Sometimes these reasons were robot-specific, e.g. certain robots swamped servers with rapid-fire requests, or retrieved the same files repeatedly.  In other situations robots traversed parts of WWW servers that weren't suitable, e.g. very deep virtual trees, duplicated information, temporary information, or cgi-scripts with side-effects (such as voting).

These incidents indicated the need for established mechanisms for WWW servers to indicate to robots which parts of their server should not be accessed.  This standard addresses this need with an operational solution.

This document represents a consensus on 30 June 1994 on the robots mailing list (`robots@webcrawler.com'), between the majority of robot authors and other people with an interest in robots.  It has also been open for discussion on the Technical World Wide Web mailing list (`www-talk@info.cern.ch').  This document is based on a previous working draft under the same title.

It is not an official standard backed by a standards body, or owned by any commercial organization.  It is not enforced by anybody, and there is no guarantee that all current and future robots will use it.  Consider it a common facility the majority of robot authors offer the WWW community to protect WWW servers against unwanted accesses by their robots.

The latest version of this document can be found at `http://info.webcrawler.com/mak/projects/robots/norobots.html'.

File: wget.info, Node: RES Format, Next: User-Agent Field, Prev: Introduction to RES, Up: Robots

RES Format
----------

The format and semantics of the `/robots.txt' file are as follows:

The file consists of one or more records separated by one or more blank lines (terminated by `CR', `CR/NL', or `NL').  Each record contains lines of the form:

     <field>:<optionalspace><value><optionalspace>

The field name is case-insensitive.

Comments can be included in the file using Unix Bourne shell conventions: the `#' character is used to indicate that the preceding space (if any) and the remainder of the line up to the line termination are discarded.  Lines containing only a comment are discarded completely, and therefore do not indicate a record boundary.

The record starts with one or more User-agent lines, followed by one or more Disallow lines, as detailed below.  Unrecognized headers are ignored.

The presence of an empty `/robots.txt' file has no explicit associated semantics; it will be treated as if it was not present, i.e. all robots will consider themselves welcome.

File: wget.info, Node: User-Agent Field, Next: Disallow Field, Prev: RES Format, Up: Robots

User-Agent Field
----------------

The value of this field is the name of the robot the record is describing access policy for.

If more than one User-agent field is present, the record describes an identical access policy for more than one robot.  At least one field needs to be present per record.

The robot should be liberal in interpreting this field.  A case-insensitive substring match of the name without version information is recommended.
If the value is `*', the record describes the default access policy for any robot that has not matched any of the other records.  It is not allowed to have multiple such records in the `/robots.txt' file.

File: wget.info, Node: Disallow Field, Next: Norobots Examples, Prev: User-Agent Field, Up: Robots

Disallow Field
--------------

The value of this field specifies a partial URL that is not to be visited.  This can be a full path, or a partial path; any URL that starts with this value will not be retrieved.  For example, `Disallow: /help' disallows both `/help.html' and `/help/index.html', whereas `Disallow: /help/' would disallow `/help/index.html' but allow `/help.html'.

An empty value indicates that all URLs can be retrieved.  At least one Disallow field needs to be present in a record.

File: wget.info, Node: Norobots Examples, Prev: Disallow Field, Up: Robots

Norobots Examples
-----------------

The following example `/robots.txt' file specifies that no robots should visit any URL starting with `/cyberworld/map/' or `/tmp/':

     # robots.txt for http://www.site.com/

     User-agent: *
     Disallow: /cyberworld/map/ # This is an infinite virtual URL space
     Disallow: /tmp/ # these will soon disappear

This example `/robots.txt' file specifies that no robots should visit any URL starting with `/cyberworld/map/', except the robot called `cybermapper':

     # robots.txt for http://www.site.com/

     User-agent: *
     Disallow: /cyberworld/map/ # This is an infinite virtual URL space

     # Cybermapper knows where to go.
     User-agent: cybermapper
     Disallow:

This example indicates that no robots should visit this site further:

     # go away
     User-agent: *
     Disallow: /

File: wget.info, Node: Security Considerations, Next: Contributors, Prev: Robots, Up: Appendices

Security Considerations
=======================

When using Wget, you must be aware that it sends unencrypted passwords through the network, which may present a security problem.  Here are the main issues, and some solutions.

  1. The passwords on the command line are visible using `ps'.  If this is a problem, avoid putting passwords on the command line--e.g. you can use `.netrc' for this.

  2. With the insecure "basic" authentication scheme, unencrypted passwords are transmitted through the network routers and gateways.

  3. The FTP passwords are also in no way encrypted.  There is no good solution for this at the moment.

  4. Although the "normal" output of Wget tries to hide the passwords, debugging logs show them in all forms.  This problem is avoided by being careful when you send debug logs (yes, even when you send them to me).

File: wget.info, Node: Contributors, Prev: Security Considerations, Up: Appendices

Contributors
============

GNU Wget was written by Hrvoje Niksic.  However, its development could never have gone as far as it has, were it not for the help of many people, either with bug reports, feature proposals, patches, or letters saying "Thanks!".

Special thanks goes to the following people (in no particular order):

   * Karsten Thygesen--donated the mailing list and the initial FTP space.

   * Shawn McHorse--bug reports and patches.

   * Kaveh R. Ghazi--on-the-fly `ansi2knr'-ization.

   * Gordon Matzigkeit--`.netrc' support.

   * Zlatko Calusic, Tomislav Vujec and Drazen Kacar--feature suggestions and "philosophical" discussions.

   * Darko Budor--initial port to Windows.

   * Antonio Rosella--help and suggestions, plus the Italian translation.

   * Tomislav Petrovic, Mario Mikocevic--many bug reports and suggestions.

   * Francois Pinard--many thorough bug reports and discussions.
   * Karl Eichwalder--lots of help with internationalization and other things.

   * Junio Hamano--donated support for Opie and HTTP `Digest' authentication.

   * Brian Gough--a generous donation.

The following people have provided patches, bug/build reports, useful suggestions, beta testing services, fan mail and all the other things that make maintenance so much fun:

Tim Adam, Martin Baehr, Dieter Baron, Roger Beeman and the Gurus at Cisco, Mark Boyns, John Burden, Wanderlei Cavassin, Gilles Cedoc, Tim Charron, Noel Cragg, Kristijan Conkas, Damir Dzeko, Andrew Davison, Ulrich Drepper, Marc Duponcheel, Aleksandar Erkalovic, Andy Eskilsson, Masashi Fujita, Howard Gayle, Marcel Gerrits, Hans Grobler, Mathieu Guillaume, Karl Heuer, Gregor Hoffleit, Erik Magnus Hulthen, Richard Huveneers, Simon Josefsson, Mario Juric, Goran Kezunovic, Robert Kleine, Fila Kolodny, Alexander Kourakos, Martin Kraemer, Simos KSenitellis, Tage Stabell-Kulo, Hrvoje Lacko, Dave Love, Jordan Mendelson, Lin Zhe Min, Charlie Negyesi, Andrew Pollock, Steve Pothier, Marin Purgar, Jan Prikryl, Keith Refson, Tobias Ringstrom, Juan Jose Rodrigues, Heinz Salzmann, Robert Schmidt, Toomas Soome, Sven Sternberger, Markus Strasser, Szakacsits Szabolcs, Mike Thomas, Russell Vincent, Douglas E. Wegscheid, Jasmin Zainul, Bojan Zdrnja, Kristijan Zimmer.

Apologies to all whom I accidentally left out, and many thanks to all the subscribers of the Wget mailing list.

File: wget.info, Node: Copying, Next: Concept Index, Prev: Appendices, Up: Top

GNU GENERAL PUBLIC LICENSE
**************************

Version 2, June 1991

Copyright (C) 1989, 1991 Free Software Foundation, Inc. 675 Mass Ave, Cambridge, MA 02139, USA

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

Preamble
========

The licenses for most software are designed to take away your freedom to share and change it.  By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users.  This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it.  (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.)  You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price.
Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 1. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 2. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 3. 
You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a. You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b. You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c. If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 4. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a. Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b. Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c. Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. 
For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.

If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.

5. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

6. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.

7. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.

8. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.

If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.

It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices.
Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.

This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.

9. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.

10. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.

11. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.

NO WARRANTY

12. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

13. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs
=============================================

If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

     ONE LINE TO GIVE THE PROGRAM'S NAME AND AN IDEA OF WHAT IT DOES.
     Copyright (C) 19YY NAME OF AUTHOR

     This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

     This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

     You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.

Also add information on how to contact you by electronic and paper mail.

If the program is interactive, make it output a short notice like this when it starts in an interactive mode:

     Gnomovision version 69, Copyright (C) 19YY NAME OF AUTHOR
     Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.

You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:

     Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker.

     SIGNATURE OF TY COON, 1 April 1989
     Ty Coon, President of Vice

This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License.
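Wget itself follows the convention just described: presumably its version banner carries exactly this kind of notice (the matching copyright and no-warranty strings are visible among the program's message strings further below). A minimal check, assuming wget is on the PATH:

     wget --version

should print the program name and version, the Free Software Foundation copyright line, and the same warranty-disclaimer wording quoted above.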
File: wget.info,  Node: Concept Index,  Prev: Copying,  Up: Top

Concept Index
*************

* Menu:

* .netrc: Startup File.
* .wgetrc: Startup File.
* accept directories: Directory-Based Limits.
* accept suffixes: Types of Files.
* accept wildcards: Types of Files.
* all hosts: All Hosts.
* append to log: Logging and Input File Options.
* arguments: Invoking.
* authentication: HTTP Options.
* bug reports: Reporting Bugs.
* bugs: Reporting Bugs.
* cache: HTTP Options.
* command line: Invoking.
* Content-Length, ignore: HTTP Options.
* continue retrieval: Download Options.
* contributors: Contributors.
* conversion of links: Recursive Retrieval Options.
* copying: Copying.
* cut directories: Directory Options.
* debug: Logging and Input File Options.
* delete after retrieval: Recursive Retrieval Options.
* directories: Directory-Based Limits.
* directories, exclude: Directory-Based Limits.
* directories, include: Directory-Based Limits.
* directory limits: Directory-Based Limits.
* directory prefix: Directory Options.
* DNS lookup: Host Checking.
* dot style: Download Options.
* examples: Examples.
* exclude directories: Directory-Based Limits.
* execute wgetrc command: Basic Startup Options.
* features: Overview.
* filling proxy cache: Recursive Retrieval Options.
* follow FTP links: Recursive Accept/Reject Options.
* following ftp links: FTP Links.
* following links: Following Links.
* force html: Logging and Input File Options.
* ftp time-stamping: FTP Time-Stamping Internals.
* globbing, toggle: FTP Options.
* GPL: Copying.
* hangup: Signals.
* header, add: HTTP Options.
* host checking: Host Checking.
* host lookup: Host Checking.
* http password: HTTP Options.
* http time-stamping: HTTP Time-Stamping Internals.
* http user: HTTP Options.
* ignore length: HTTP Options.
* include directories: Directory-Based Limits.
* incremental updating: Time-Stamping.
* input-file: Logging and Input File Options.
* invoking: Invoking.
* latest version: Distribution.
* links: Following Links.
* links conversion: Recursive Retrieval Options.
* list: Mailing List.
* location of wgetrc: Wgetrc Location.
* log file: Logging and Input File Options.
* mailing list: Mailing List.
* mirroring: Guru Usage.
* no parent: Directory-Based Limits.
* no warranty: Copying.
* no-clobber: Download Options.
* nohup: Invoking.
* norobots disallow: Disallow Field.
* norobots examples: Norobots Examples.
* norobots format: RES Format.
* norobots introduction: Introduction to RES.
* norobots user-agent: User-Agent Field.
* number of retries: Download Options.
* operating systems: Portability.
* option syntax: Option Syntax.
* output file: Logging and Input File Options.
* overview: Overview.
* passive ftp: FTP Options.
* pause: Download Options.
* portability: Portability.
* proxies: Proxies.
* proxy <1>: Download Options.
* proxy: HTTP Options.
* proxy authentication: HTTP Options.
* proxy filling: Recursive Retrieval Options.
* proxy password: HTTP Options.
* proxy user: HTTP Options.
* quiet: Logging and Input File Options.
* quota: Download Options.
* recursion: Recursive Retrieval.
* recursive retrieval: Recursive Retrieval.
* redirecting output: Guru Usage.
* reject directories: Directory-Based Limits.
* reject suffixes: Types of Files.
* reject wildcards: Types of Files.
* relative links: Relative Links.
* reporting bugs: Reporting Bugs.
* retries: Download Options.
* retrieval tracing style: Download Options.
* retrieve symbolic links: FTP Options.
* retrieving: Recursive Retrieval.
* robots: Robots.
* robots.txt: Robots.
* sample wgetrc: Sample Wgetrc.
* security: Security Considerations.
* server maintenance: Robots.
* server response, print: Download Options.
* server response, save: HTTP Options.
* signal handling: Signals.
* span hosts: All Hosts.
* spider: Download Options.
* startup: Startup File.
* startup file: Startup File.
* suffixes, accept: Types of Files.
* suffixes, reject: Types of Files.
* syntax of options: Option Syntax.
* syntax of wgetrc: Wgetrc Syntax.
* time-stamping: Time-Stamping.
* time-stamping usage: Time-Stamping Usage.
* timeout: Download Options.
* timestamping: Time-Stamping.
* tries: Download Options.
* types of files: Types of Files.
* updating the archives: Time-Stamping.
* URL: URL Format.
* URL syntax: URL Format.
* usage, time-stamping: Time-Stamping Usage.
* user-agent: HTTP Options.
* various: Various.
* verbose: Logging and Input File Options.
* wait: Download Options.
* Wget as spider: Download Options.
* wgetrc: Startup File.
* wgetrc commands: Wgetrc Commands.
* wgetrc location: Wgetrc Location.
* wgetrc syntax: Wgetrc Syntax.
* wildcards, accept: Types of Files.
* wildcards, reject: Types of Files.

reloc/share/locale/cs/LC_MESSAGES/wget.mo (Czech message catalog; English source messages first):

unspecified
time unknown
ignored
done.
done.
done.
connected!
Wrote HTML-ized index to `%s'.
Wrote HTML-ized index to `%s' [%ld].
Write failed, closing control connection.
Will try connecting to %s:%hu.
Will not retrieve dirs since depth is %d (max %d).
Warning: wildcards not supported in HTTP.
Using `%s' as listing tmp file.
Usage: %s [OPTION]... [URL]...
Usage: %s NETRC [HOSTNAME]
Unknown/unsupported protocol
Unknown type `%c', closing control connection.
Unknown error
Unknown authentication scheme.
Try `%s --help' for more options.
The sizes do not match (local %ld), retrieving.
The server refuses login.
Symlinks not supported, skipping symlink `%s'.
Startup:
  -V,  --version           display the version of Wget and exit.
  -h,  --help              print this help.
  -b,  --background        go to background after startup.
  -e,  --execute=COMMAND   execute a `.wgetrc' command.
Starting WinHelp %s
Skipping directory `%s'.
Retrying.
Removing %s.
Removing %s since it should be rejected.
Removed `%s'.
Remote file is newer, retrieving.
Rejecting `%s'.
Recursive retrieval:
  -r,  --recursive             recursive web-suck -- use with care!.
  -l,  --level=NUMBER          maximum recursion depth (0 to unlimit).
       --delete-after          delete downloaded files.
  -k,  --convert-links         convert non-relative links to relative.
  -m,  --mirror                turn on options suitable for mirroring.
  -nr, --dont-remove-listing   don't remove `.listing' files.
Recursive accept/reject:
  -A,  --accept=LIST                list of accepted extensions.
  -R,  --reject=LIST                list of rejected extensions.
  -D,  --domains=LIST               list of accepted domains.
       --exclude-domains=LIST       comma-separated list of rejected domains.
  -L,  --relative                   follow relative links only.
       --follow-ftp                 follow FTP links from HTML documents.
  -H,  --span-hosts                 go to foreign hosts when recursive.
  -I,  --include-directories=LIST   list of allowed directories.
  -X,  --exclude-directories=LIST   list of excluded directories.
  -nh, --no-host-lookup             don't DNS-lookup hosts.
  -np, --no-parent                  don't ascend to the parent directory.
Recursion depth %d exceeded max. depth %d.
Read error (%s) in headers.
Proxy %s: Must be HTTP.
Output will be written to `%s'.
Not sure
Not descending to `%s' as it is excluded/not-included.
No such file or directory `%s'.
No such file `%s'.
No such directory `%s'.
No matches on pattern `%s'.
No data received
No URLs found in %s.
Malformed status line
Mail bug reports and suggestions to .
Login incorrect.
Logging in as %s ...
Logging and input file:
  -o,  --output-file=FILE     log messages to FILE.
  -a,  --append-output=FILE   append messages to FILE.
  -d,  --debug                print debug output.
  -q,  --quiet                quiet (no output).
  -v,  --verbose              be verbose (this is the default).
  -nv, --non-verbose          turn off verboseness, without being quiet.
  -i,  --input-file=FILE      read URL-s from file.
  -F,  --force-html           treat input file as HTML.
Logged in!
Location: %s%s
Local file `%s' is more recent, not retrieving.
Loading robots.txt; please ignore errors.
Link
Length: %s
Length:
Last-modified header missing -- time-stamps turned off.
Last-modified header invalid -- time-stamp ignored.
Invalid port specification
Invalid name of the symlink, skipping.
Invalid host name
Invalid PORT.
Index of /%s on %s:%d
Host not found
HTTP options:
       --http-user=USER      set http user to USER.
       --http-passwd=PASS    set http password to PASS.
  -C,  --cache=on/off        (dis)allow server-cached data (normally allowed).
       --ignore-length       ignore `Content-Length' header field.
       --header=STRING       insert STRING among the headers.
       --proxy-user=USER     set USER as proxy username.
       --proxy-passwd=PASS   set PASS as proxy password.
  -s,  --save-headers        save the HTTP headers to file.
  -U,  --user-agent=AGENT    identify as AGENT instead of Wget/VERSION.
Giving up.
GNU Wget %s, a non-interactive network retriever.
File `%s' already there, will not retrieve.
File `%s' already there, not retrieving.
File
Failed writing HTTP request.
Failed to unlink symlink `%s': %s
FTP options:
       --retr-symlinks   retrieve FTP symbolic links.
  -g,  --glob=on/off     turn file name globbing on or off.
       --passive-ftp     use the "passive" transfer mode.
Error in server response, closing control connection.
Error in server greeting.
Error (%s): Link %s without a base provided.
Error (%s): Base %s relative, without referer URL.
End of file while parsing headers.
ERROR: Redirection (%d) without location.
Download:
  -t,  --tries=NUMBER           set number of retries to NUMBER (0 unlimits).
  -O   --output-document=FILE   write documents to FILE.
  -nc, --no-clobber             don't clobber existing files.
  -c,  --continue               restart getting an existing file.
       --dot-style=STYLE        set retrieval display style.
  -N,  --timestamping           don't retrieve files if older than local.
  -S,  --server-response        print server response.
       --spider                 don't download anything.
  -T,  --timeout=SECONDS        set the read timeout to SECONDS.
  -w,  --wait=SECONDS           wait SECONDS between retrievals.
  -Y,  --proxy=on/off           turn proxy on or off.
  -Q,  --quota=NUMBER           set retrieval quota to NUMBER.
Download quota (%s bytes) EXCEEDED!
Directory
Directories:
  -nd  --no-directories            don't create directories.
  -x,  --force-directories         force creation of directories.
  -nH, --no-host-directories       don't create host directories.
  -P,  --directory-prefix=PREFIX   save files to PREFIX/...
       --cut-dirs=NUMBER           ignore NUMBER remote directory components.
Data transfer aborted.
Creating symlink %s -> %s
Could not find proxy host.
Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
Converting %s...
Control connection closed.
Continuing in background.
Connection to %s:%hu refused.
Connecting to %s:%hu...
Cannot write to `%s' (%s).
Cannot parse PASV response.
Cannot initiate PASV transfer.
Cannot convert links in %s: %s
Can't timestamp and not clobber old files at the same time.
Can't be verbose and quiet at the same time.
Bind error (%s).
Authorization failed.
Already have correct symlink %s -> %s
==> CWD not required.
==> CWD not needed.
(try:%2d)
(no description)
%s: unrecognized option `--%s'
%s: unrecognized option `%c%s'
%s: unknown/unsupported file type.
%s: option requires an argument -- %c
%s: option `--%s' doesn't allow an argument
%s: option `%s' requires an argument
%s: option `%s' is ambiguous
%s: option `%c%s' doesn't allow an argument
%s: missing URL
%s: illegal option -- `-n%c'
%s: illegal option -- %c
%s: debug support not compiled in.
%s: corrupt time-stamp.
%s: cannot stat %s: %s
%s: Warning: uname failed: %s
%s: Warning: reverse-lookup of local address did not yield FQDN!
%s: Warning: gethostname failed
%s: Warning: cannot reverse-lookup local IP address.
%s: Warning: cannot determine local IP address.
%s: Warning: Both system and user wgetrc point to `%s'.
%s: Redirection to itself.
%s: Invalid specification `%s'
%s: Error in %s at line %d.
%s: Couldn't find usable socket driver.
%s: Cannot read %s (%s).
%s: Cannot determine user-id.
%s: BUG: unknown command `%s', value `%s'.
%s: %s:%d: warning: "%s" token appears before any machine name
%s: %s:%d: unknown token "%s"
%s: %s: invalid command
%s: %s: Please specify on or off.
%s: %s: Not enough memory.
%s: %s: Invalid specification `%s'.
%s: %s, closing control connection.
%s request sent, awaiting response...
%s received, redirecting output to `%%s'.
%s ERROR %d: %s.
%s (%s) - `%s' saved [%ld]
%s (%s) - `%s' saved [%ld/%ld])
%s (%s) - `%s' saved [%ld/%ld]
%s (%s) - Read error at byte %ld/%ld (%s).
%s (%s) - Read error at byte %ld (%s).
%s (%s) - Data connection: %s;
%s (%s) - Connection closed at byte %ld/%ld.
%s (%s) - Connection closed at byte %ld.
[following]
[%s to go]
(unauthoritative)
(%s to go)
(%s bytes)
Written by Hrvoje Niksic .
REST failed, starting from scratch.
Mandatory arguments to long options are mandatory for short options too.
FINISHED --%s--
Downloaded: %s bytes in %d files
CTRL+Break received, redirecting output to `%s'.
Execution continued in background.
You may stop Wget by pressing CTRL+ALT+DELETE.
[ skipping %dK ]

Czech (cs) translations of the messages above follow in the catalog (Project-Id-Version: GNU wget 1.5.2-b1, POT-Creation-Date: 1998-09-21 19:08+0200, PO-Revision-Date: 1998-06-05 08:47, Last-Translator: Jan Prikryl, Language-Team: Czech, charset=iso-8859-2).
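The option summaries above double as a reference for wget's command line. As a quick, hedged illustration (the host and file names are placeholders, not taken from this package), the logging, download, and recursion options listed above combine like this:

     wget -t 3 -o download.log -c http://www.example.com/ls-lR.gz
     wget -m -np -k -l 5 http://www.example.com/docs/

The first command retries up to three times (-t 3), logs messages to download.log (-o), and resumes a partially retrieved file (-c); the second mirrors a directory tree (-m) without ascending to the parent directory (-np), converting links for local browsing (-k) and limiting recursion to five levels (-l 5).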
reloc/share/locale/de/LC_MESSAGES/wget.mo (German message catalog):

The same English source messages as in the cs catalog above, followed by their German translations (Project-Id-Version: wget 1.5.2-b4, POT-Creation-Date: 1998-09-21 19:08+0200, PO-Revision-Date: 1998-06-15 19:25+02:00, Last-Translator: Karl Eichwalder, Language-Team: German, charset=iso-8859-1).
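At run time these catalogs are selected through the standard gettext locale mechanism, so the displayed language follows the locale environment. A minimal sketch, assuming the binary is installed as /usr/local/bin/wget (an assumption; adjust to the actual install prefix), that the matching system locales exist, and with a placeholder URL:

     LANG=de /usr/local/bin/wget --help
     LC_MESSAGES=cs /usr/local/bin/wget http://www.example.com/

With LANG=de the messages come from share/locale/de/LC_MESSAGES/wget.mo under the installation prefix; an unset or unmatched locale falls back to the built-in English strings.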
reloc/share/locale/hr/LC_MESSAGES/wget.mo (Croatian message catalog):

The same English source messages as in the cs catalog above, followed by their Croatian translations.
-k, --convert-links convert non-relative links to relative. -m, --mirror turn on options suitable for mirroring. -nr, --dont-remove-listing don't remove `.listing' files. Recursive accept/reject: -A, --accept=LIST list of accepted extensions. -R, --reject=LIST list of rejected extensions. -D, --domains=LIST list of accepted domains. --exclude-domains=LIST comma-separated list of rejected domains. -L, --relative follow relative links only. --follow-ftp follow FTP links from HTML documents. -H, --span-hosts go to foreign hosts when recursive. -I, --include-directories=LIST list of allowed directories. -X, --exclude-directories=LIST list of excluded directories. -nh, --no-host-lookup don't DNS-lookup hosts. -np, --no-parent don't ascend to the parent directory. Recursion depth %d exceeded max. depth %d. Read error (%s) in headers. Proxy %s: Must be HTTP. Output will be written to `%s'. Not sure Not descending to `%s' as it is excluded/not-included. No such file or directory `%s'. No such file `%s'. No such directory `%s'. No matches on pattern `%s'. No data receivedNo URLs found in %s. Malformed status lineMail bug reports and suggestions to . Login incorrect. Logging in as %s ... Logging and input file: -o, --output-file=FILE log messages to FILE. -a, --append-output=FILE append messages to FILE. -d, --debug print debug output. -q, --quiet quiet (no output). -v, --verbose be verbose (this is the default). -nv, --non-verbose turn off verboseness, without being quiet. -i, --input-file=FILE read URL-s from file. -F, --force-html treat input file as HTML. Logged in! Location: %s%s Local file `%s' is more recent, not retrieving. Loading robots.txt; please ignore errors. Link Length: %sLength: Last-modified header missing -- time-stamps turned off. Last-modified header invalid -- time-stamp ignored. Invalid port specificationInvalid name of the symlink, skipping. Invalid host nameInvalid PORT. Index of /%s on %s:%dHost not foundHTTP options: --http-user=USER set http user to USER. --http-passwd=PASS set http password to PASS. -C, --cache=on/off (dis)allow server-cached data (normally allowed). --ignore-length ignore `Content-Length' header field. --header=STRING insert STRING among the headers. --proxy-user=USER set USER as proxy username. --proxy-passwd=PASS set PASS as proxy password. -s, --save-headers save the HTTP headers to file. -U, --user-agent=AGENT identify as AGENT instead of Wget/VERSION. Giving up. GNU Wget %s, a non-interactive network retriever. File `%s' already there, will not retrieve. File `%s' already there, not retrieving. File Failed writing HTTP request. Failed to unlink symlink `%s': %s FTP options: --retr-symlinks retrieve FTP symbolic links. -g, --glob=on/off turn file name globbing on or off. --passive-ftp use the "passive" transfer mode. Error in server response, closing control connection. Error in server greeting. Error (%s): Link %s without a base provided. Error (%s): Base %s relative, without referer URL. End of file while parsing headers. ERROR: Redirection (%d) without location. Download: -t, --tries=NUMBER set number of retries to NUMBER (0 unlimits). -O --output-document=FILE write documents to FILE. -nc, --no-clobber don't clobber existing files. -c, --continue restart getting an existing file. --dot-style=STYLE set retrieval display style. -N, --timestamping don't retrieve files if older than local. -S, --server-response print server response. --spider don't download anything. -T, --timeout=SECONDS set the read timeout to SECONDS. 
-w, --wait=SECONDS wait SECONDS between retrievals. -Y, --proxy=on/off turn proxy on or off. -Q, --quota=NUMBER set retrieval quota to NUMBER. Download quota (%s bytes) EXCEEDED! Directory Directories: -nd --no-directories don't create directories. -x, --force-directories force creation of directories. -nH, --no-host-directories don't create host directories. -P, --directory-prefix=PREFIX save files to PREFIX/... --cut-dirs=NUMBER ignore NUMBER remote directory components. Data transfer aborted. Creating symlink %s -> %s Could not find proxy host. Copyright (C) 1995, 1996, 1997, 1998 Free Software Foundation, Inc. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. Converting %s... Control connection closed. Continuing in background. Connection to %s:%hu refused. Connecting to %s:%hu... Cannot write to `%s' (%s). Cannot parse PASV response. Cannot initiate PASV transfer. Cannot convert links in %s: %s Can't timestamp and not clobber old files at the same time. Can't be verbose and quiet at the same time. Bind error (%s). Authorization failed. Already have correct symlink %s -> %s ==> CWD not required. ==> CWD not needed. (try:%2d)(no description)%s: unrecognized option `--%s' %s: unrecognized option `%c%s' %s: unknown/unsupported file type. %s: option requires an argument -- %c %s: option `--%s' doesn't allow an argument %s: option `%s' requires an argument %s: option `%s' is ambiguous %s: option `%c%s' doesn't allow an argument %s: missing URL %s: illegal option -- `-n%c' %s: illegal option -- %c %s: debug support not compiled in. %s: corrupt time-stamp. %s: cannot stat %s: %s %s: Warning: uname failed: %s %s: Warning: reverse-lookup of local address did not yield FQDN! %s: Warning: gethostname failed %s: Warning: cannot reverse-lookup local IP address. %s: Warning: cannot determine local IP address. %s: Warning: Both system and user wgetrc point to `%s'. %s: Redirection to itself. %s: Invalid specification `%s' %s: Error in %s at line %d. %s: Couldn't find usable socket driver. %s: Cannot read %s (%s). %s: Cannot determine user-id. %s: BUG: unknown command `%s', value `%s'. %s: %s:%d: warning: "%s" token appears before any machine name %s: %s:%d: unknown token "%s" %s: %s: invalid command %s: %s: Please specify on or off. %s: %s: Not enough memory. %s: %s: Invalid specification `%s'. %s: %s, closing control connection. %s request sent, awaiting response... %s received, redirecting output to `%%s'. %s ERROR %d: %s. %s (%s) - `%s' saved [%ld] %s (%s) - `%s' saved [%ld/%ld]) %s (%s) - `%s' saved [%ld/%ld] %s (%s) - Read error at byte %ld/%ld (%s). %s (%s) - Read error at byte %ld (%s).%s (%s) - Data connection: %s; %s (%s) - Connection closed at byte %ld/%ld. %s (%s) - Connection closed at byte %ld. [following] [%s to go] (unauthoritative) (%s to go) (%s bytes) Written by Hrvoje Niksic . REST failed, starting from scratch. Mandatory arguments to long options are mandatory for short options too. FINISHED --%s-- Downloaded: %s bytes in %d files CTRL+Break received, redirecting output to `%s'. Execution continued in background. You may stop Wget by pressing CTRL+ALT+DELETE. [ skipping %dK ]neodreennepoznato vrijeme zanemarenagotovo. gotovo.gotovo. spojen! Snimio HTML-iziran indeks u `%s'. Snimio HTML-iziran indeks u `%s' [%ld]. Write nije uspio, zatvaram kontrolnu vezu. Pokuat u se spojiti na %s:%hu. 
[Croatian (hr) wget.mo message catalog: msgstr strings translating the English messages above; the 8-bit ISO-8859-2 characters were stripped in this text dump. PO header: Project-Id-Version: wget 1.5.2-b2, PO-Revision-Date: 1998-02-29 21:05+01:00, Last-Translator: Hrvoje Niksic, Language-Team: Croatian, charset iso-8859-2.]
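The wget.mo files carried under reloc/share/locale/*/LC_MESSAGES are GNU gettext binary catalogs: seven 32-bit header words (magic, revision, string count, msgid-table offset, msgstr-table offset, hash-table size and offset), then two tables of length/offset pairs. A minimal reader sketch under that documented layout; the path in the usage comment is a placeholder:

    import struct

    def read_mo(path):
        """Return (msgid, msgstr) byte-string pairs from a GNU .mo file."""
        with open(path, "rb") as f:
            data = f.read()
        magic = struct.unpack("<I", data[:4])[0]
        # Magic 0x950412de read little-endian means an LE file; otherwise BE.
        order = "<" if magic == 0x950412DE else ">"
        # Words at offsets 8, 12, 16: string count, msgid table, msgstr table.
        n, orig_off, trans_off = struct.unpack(order + "3I", data[8:20])
        pairs = []
        for i in range(n):
            olen, opos = struct.unpack(order + "2I", data[orig_off + 8 * i : orig_off + 8 * i + 8])
            tlen, tpos = struct.unpack(order + "2I", data[trans_off + 8 * i : trans_off + 8 * i + 8])
            pairs.append((data[opos : opos + olen], data[tpos : tpos + tlen]))
        return pairs

    # e.g. read_mo("reloc/share/locale/it/LC_MESSAGES/wget.mo")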
[reloc/share/locale/it/LC_MESSAGES/wget.mo: Italian message catalog. Its embedded English msgid block duplicates the catalog above and is omitted here; the msgstr strings translate it, with ISO-8859-1 accents stripped in this dump. PO header: Project-Id-Version: wget 1.5.2-b1, PO-Revision-Date: 1998-06-13 15:22+02:00, Last-Translator: Giovanni Bortolozzo, Language-Team: Italian, charset iso-8859-1.]
[reloc/share/locale/no/LC_MESSAGES/wget.mo: Norwegian message catalog, same layout: English msgids (a duplicate of the catalog above, omitted) plus Norwegian msgstrs. PO header: Project-Id-Version: wget 1.5.2-b1, PO-Revision-Date: 1998-05-22 09:00+0100, Last-Translator: Robert Schmidt, Language-Team: Norwegian, charset iso-8859-2.]
[reloc/share/locale/pt_BR/LC_MESSAGES/wget.mo: Brazilian Portuguese message catalog, built against the older wget 1.5-b9 message set (its msgids duplicate the catalog above and are omitted). PO header: Project-Id-Version: wget 1.5-b9, PO-Revision-Date: 1998-04-06 22:09-0300, Last-Translator: Wanderlei Antonio Cavasin, Language-Team: Portuguese, charset ISO-8859-1.]
[End of cpio datastream: TRAILER!!! record.]
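At run time a program selects one of these catalogs by locale and message domain. A sketch using Python's stdlib gettext module as a stand-in for the C gettext API; the localedir assumes a /usr/local install prefix for the share/locale tree seen above, and the exact msgid spelling (trailing newline) is an assumption:

    import gettext

    t = gettext.translation(
        "wget",                               # message domain -> wget.mo
        localedir="/usr/local/share/locale",  # assumed install prefix (BASEDIR/share/locale)
        languages=["it"],                     # e.g. the Italian catalog
        fallback=True,                        # return the msgid itself if no catalog is found
    )
    print(t.gettext("Giving up.\n"))          # "Rinuncio.\n" when the Italian catalog is loaded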