Hi,

I just came across this neat little program, which lets you obtain information from selected web pages automatically, without having to load up a browser and search for the required page each time. The info is sent to you as an email message. At the moment we are using it for trivial things, e.g. every Thursday it connects to the Met Office web page and I get an email message on Friday morning saying "This weekend's weather will be (sunny/wet/etc.)". Also, as in the example given below, it is being used to contact the web site of the local Showcase Cinema in Bristol and get the listings for the coming week.

I could imagine a number of more serious uses, though. For example, there are a lot of on-line journals which publish new issues every few weeks. A version of this program could be used to connect to the appropriate site every, say, 2 weeks (or however frequently the site is updated), download the Contents page, and email it to you. You would come in in the morning to find a few email messages listing the latest papers in the journals relevant to you. (A bare-bones sketch of this is given below.)

It requires a Unix system running the C shell, a 'smart' BSD mail program (the Mail -s option must work), and Lynx (the text-based web browser). All you need to do is use cron to schedule the program to run at suitable times, and from then on it's automatic. (A sample crontab entry is also given below.)

----
Example program: This automatically retrieves the listings of the local Showcase cinema in Bristol from its web page (http://www.yell.co.uk/yell/ff/br151086.html), extracts the required info (in this case next week's listings), and emails you the result.
-----
#!/bin/csh
#
# Program written by Craig Wilson, Bristol Uni, School of Chem, August 1997.
#
# First remove the old files. These are stored in /tmp, but they could
# be called anything and be put anywhere.
#
rm -f /tmp/bri.txt /tmp/briproc.txt >& /dev/null
#
# Now use Lynx to download the page. The -dump option makes Lynx write
# the formatted page to standard output, which is redirected here into
# the file /tmp/bri.txt. You may need to change the path to wherever
# Lynx is on your system.
#
/usr/local/bin/lynx -dump http://www.yell.co.uk/yell/ff/br151086.html > /tmp/bri.txt
#
# Now create a new file, /tmp/briproc.txt, and make its first line the URL
# of the cinema page, so you can double-click it in a mail reader like Simeon.
#
echo 'http://www.yell.co.uk/yell/ff/br151086.html' > /tmp/briproc.txt
#
# Now use grep -v to strip out some unwanted stuff, appending the output
# to /tmp/briproc.txt. This step is optional, but quite nice if you take
# the time: examine the web page, decide which parts of it you don't want,
# and use grep -v to exclude lines containing those patterns.
#
grep -v -e http -e EYP -e Telecomm -e RESTAURANT -e links -e LICENSE -e booking < /tmp/bri.txt >> /tmp/briproc.txt
#
# Finally, mail the processed output to yourself, with the date in the subject.
#
Mail -s "Showcase, Bristol: `date +'%a, %e %b'`" userid@address < /tmp/briproc.txt
--------end of prog-----

I'd be interested to hear if anyone tries this with one of the on-line journals, or if anyone comes up with any other serious uses for it.
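For anyone who wants the scheduling step spelled out, here is a minimal crontab sketch. The script path is only an assumption (save the program as, say, /home/userid/bin/cinema.csh and make it executable with chmod +x), and the day and time are just examples:

# Edit your crontab with 'crontab -e' and add a line like this one.
# The five fields are: minute hour day-of-month month day-of-week
# (0 = Sunday, so 4 = Thursday). This runs the script every Thursday
# at 6 a.m.
0 6 * * 4 /home/userid/bin/cinema.csh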
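And the journal variant, stripped to its bare bones. The URL here is only a placeholder, not a real journal site; a real contents page would need its own grep -v filters, exactly as in the cinema script:

-----
#!/bin/csh
#
# Hypothetical example: fetch a journal's contents page and mail it.
# Replace the URL with the journal's real contents page, and add
# grep -v filtering if the raw dump is too noisy.
#
rm -f /tmp/journal.txt >& /dev/null
/usr/local/bin/lynx -dump http://www.example.com/journal/contents.html > /tmp/journal.txt
Mail -s "Journal contents: `date +'%a, %e %b'`" userid@address < /tmp/journal.txt
-----end of sketch-----

Since cron has no "every 2 weeks" field, running the script on the 1st and 15th of each month (a crontab line of 0 6 1,15 * *) is a close approximation.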
Regards,

-------------------------------------------------------------------------
Dr Paul May, School of Chemistry, University of Bristol, UK
tel: +44 (0)117 928-9000 x4276, fax: +44 (0)117 925-1295
<mailto:paul.may@bris.ac.uk>
<http://www.bris.ac.uk/Depts/Chemistry/staff/pwm.htm>
"88.2% of all statistics are made up on the spot" -- Vic Reeves
-------------------------------------------------------------------------
chemweb: A list for Chemical Applications of the Internet.
To post to the list: mailto:chemweb@ic.ac.uk
Archived as: http://www.lists.ic.ac.uk/hypermail/chemweb/
To (un)subscribe, send mailto:majordomo@ic.ac.uk the following message:
(u)nsubscribe chemweb
List coordinator, Henry Rzepa (mailto:rzepa@ic.ac.uk)