INN FAQ Part 6/9

From: INN FAQ Maintainers
Subject: INN FAQ Part 6/9: Day-to-day operation and changes to the system
Summary: This article is part 6 of a multi-part FAQ: Part 6: Day-to-day operational questions. Some big changes you can make, bug warnings for 1.4, 1.3, 1.2.
Posted-By: post_faq 2.10
Archive-name: usenet/software/inn-faq/part6
Last Changed: $Date: 1997/12/18 22:19:25 $ $Revision: 1.9 $

INN FAQ Part 1: General and questions from people that don't (yet) run INN
INN FAQ Part 2: Specific notes for specific operating systems
INN FAQ Part 3: Reasons why INN isn't starting
INN FAQ Part 4: The debugging tutorial (setup of feeds etc.)
INN FAQ Part 5: Other error messages and what they mean
INN FAQ Part 6: Day-to-day operation and changes to the system
INN FAQ Part 7: Problems with INN already running
INN FAQ Part 8: Appendix A: Norman's install guide
INN FAQ Part 9: Appendix B: Configurations for certain systems


Subject: Table Of Contents for Part 6/9



  • 6.1 How do I create all those directories in the newsspool?
  • 6.2 Why is /usr/lib/news/newsgroups not found?
  • 6.3 Safe way to edit the "active" file?
  • 6.4 What's the best way to upgrade to a new version of INN?
  • 6.5 How do I talk to innd from C or Perl?
  • 6.6 After a crash.
  • 6.7 How do I moderate a mailing list?
  • 6.8 How do I configure the /usr/lib/news/moderators file?
  • 6.9 Listing every article
  • 6.10 What's a good setup for expire.ctl?
  • 6.11 How does /remember/ in expire.ctl work?
  • 6.12 What does the output of ``expire -v1'' mean?
    HOW DO I... (Big changes you can make to the system):
  • 6.13 How do I set up a delayed IHAVE/SENDME over NNTP?
  • 6.14 I want compressed news, but do not have uucp
  • 6.15 Can I use gzip with INN?
  • 6.16 What do I do if /var/spool/news is split over multiple partitions?
  • 6.17 Sun Online Disk Suite for news spool?
  • 6.18 Add local newsgroups?
  • 6.19 Archiving expired articles
  • 6.20 How do I restrict access on certain newsgroups (like
  • 6.21 INN on one machine, UUCP modem on a different one
  • 6.22 Setting up proxy-nntp to talk through a firewall
  • 6.23 How do I set up inpaths with INN?
  • 6.24 File different types of control messages in different directories?
  • 6.25 Use more than ~100 feeds on SunOS 4.1?
  • 6.26 Speed up NNTP Transfers ("Streaming NNTP")
  • 6.27 I don't want all those reject messages from rnews in syslog.
    BUGS IN 1.5:
  • 6.28 Security problems with 1.5.
    BUGS IN 1.4:
  • 6.29 1.4 considered insecure, please upgrade.
    BUGS IN 1.3 and 1.2:
  • 6.30 Looping Select Patch
  • 6.31 7-bit encoded batches are not correctly processed. Why is this?
  • 6.32 NOV (overchan) doesn't work well.
  • 6.33 Why doesn't nntpget work?



    Subject: (6.1) How do I create all those directories in the newsspool?

    Q: For example, if you receive comp.sys.amiga.applications, do you have to mkdir /var/spool/news/comp/sys/amiga/applications?

    A: Nope. innd creates the directory for you the first time you receive an article for that newsgroup.


    Subject: (6.2) Why is /usr/lib/news/newsgroups not found?

    The latest rev is in:

    or in Get it and install it.


    Subject: (6.3) Safe way to edit the "active" file?

    First of all, you could manipulate the active file using the ctlinnd "newgroup", "rmgroup" and "changegroup" commands. However, sometimes you just need to do a lot of editing all at once:

    The following sequence is the shortest:

    	ctlinnd pause "edit active"
    	[do something to the active file]
    	ctlinnd reload active "edit active"
    	ctlinnd go "edit active"

    Simple! No need to "flush" since the "pause" does that.

    > What if I need to delete 3000 lines from my active file?

    I would definitely edit the active file manually (using the above procedure).

    > What if I need to delete 10 lines from my active file?

    For a couple quick changes, I recommend using "ctlinnd". This is a little slow because all channels are closed and reopened after each "rmgroup", "newgroup", and "changegroup". However, it's easier than remembering the above sequence.

    DO NOT THROTTLE THE SERVER WHEN DOING MULTIPLE rmgroup COMMANDS. There is a bug in INN (all versions) that will shred your active file if you do multiple "rmgroup" commands while the server is throttled. This is a common mistake: people think the "rmgroup"s will go faster if the server is throttled. It will go faster, but it will also shred your active file.

    If you have a large number of groups to remove or create, you can use awk to write a script to do the work for you.

    	% cat thelist
    	% awk <thelist '{ print "ctlinnd rmgroup " $1 }'
    	ctlinnd rmgroup
    	ctlinnd rmgroup
    	ctlinnd rmgroup comp.sys.mac

    Now, you can either send the output of that to "| sh -x", or you can redirect the output to a file, and "source" the file.

    If you want to create a bunch of newsgroups, the awk command might be like this:

    	% awk <thelist \
    	'{ print "ctlinnd newgroup " $1 " y user@host" }' | sh -x

    Be aware that news.daily also throttles the server at some point, so verify the state of the server before doing ctlinnd {rm,new}group.
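A cautious way to use the awk trick above is to build the command list first, inspect it, and only then pipe it to sh. A small sketch (the group names here are invented):

```shell
# Build a list of groups to remove (one per line), then generate the
# ctlinnd commands from it.  Inspect the output before running it!
tmp=$(mktemp -d)
cat > "$tmp/thelist" <<'EOF'
alt.hypothetical.one
alt.hypothetical.two
EOF
awk '{ print "ctlinnd rmgroup " $1 }' "$tmp/thelist"
# when satisfied:  awk '...' "$tmp/thelist" | sh -x
```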


    Subject: (6.4) What's the best way to upgrade to a new version of INN?

    First, you should read the README and the (yes, read them both... again). Things change in new versions.

    Second, the README explains how to do an upgrade. This document is redundant, but explains the procedure in more detail.

    STEP 1: Copy the values in the old to your new
    You can do this automatically with this trick:

    	% cd config
    	% make subst
    	% cp config.dist
    	% ./subst -f {OLDFILE}
    where "{OLDFILE}" names your old file.

    STEP 2: Edit the to see if you want to change any of the new settings that didn't exist in the old version's file.

    STEP 3: Compile everything:

    	% cd $INN
    	% make all
    (you can run "make world", which also runs "lint". If you don't know what "lint" is, just ignore anything it outputs. If it bombs, run "make all" instead.)

    STEP 4: When you feel you are ready to install the new files, shut down
    the old daemon:

    	% ctlinnd shutdown 'upgrade in progress'
    	[ kill innwatch by hand if you need to ]

    STEP 5: Install the new files:

    	% cd $INN
    	% make update

    STEP 6: Now update all your $INN/site files to be the same as they were for your old software. "cd $INN/site ; make diff-installed" will tell you what's different between the files in /usr/lib/news and $INN/site.
    If you only make changes in the $INN/site directory and use "make install" to copy them into place you'll save yourself a lot of trouble. Read $INN/site/Makefile for more interesting things that "make" can do.

    STEP 7: When you feel you are ready to install the new $INN/site files:

    	# cd $INN/site
    	# make install

    STEP 8: Re-start the system:

    	% sh /usr/lib/news/etc/

    STEP 9: If everything was done right you should be up and running.
    Parts 3 and 4 of the FAQ give tips on testing your configuration.

    If you're upgrading from 1.4, you'll need to change the call to in your rc.local, /etc/init.d/INN or equivalent to

    	"su news -c /path/to/"  
    since it now gets run as news instead of root.

    DON'T FORGET to apply the appropriate security patches (even to 1.5.1!!)!!.


    Subject: (6.5) How do I talk to innd from C or Perl?

    Rich Salz says:

    If you are writing C, look at doc/inndcomm.3 and include/inndcomm.h; they include all you need to do any ctlinnd command (in fact, ctlinnd itself is little more than a call to the library).

    Hacking up a Perl subroutine that spoke to innd's Unix-domain control socket should be fairly straightforward but hasn't yet been written.


    Subject: (6.6) After a crash.

    "What do I do after a system crash?"

    INN handles crashes pretty well. If there are any problems they get cleaned up by the nightly expire. About once a month you might want to run "makehistory -buv" to look for "lost" articles.
    Check the man page for "makehistory" for more information.
    (The man page for "makehistory" is in the news-recovery man page until INN 1.4). The manpage is a little unclear about the '-n' flag. If used alone (e.g. makehistory -n) it does not rebuild the dbz files. But if used in conjunction with -bu (e.g. makehistory -bunv) it does not pause the server.

    See also 5.11.


    Subject: (6.7) How do I moderate a mailing list?

    Ask your news administrator. If you are the news administrator, read RFC 1036. (also, refer to "How do I configure the /usr/lib/news/moderators file?")

    Hint: The relevant part of RFC 1036 is " 2.2.11 Approved ".

    See also 5.32, 6.8


    Subject: (6.8) How do I configure the /usr/lib/news/moderators file?

    Q: The 'moderators' file that comes with INN has only the following lines:


    Should this be changed? That is, at a Usenet site, does the news admin have to configure this file in order for INN to email local posts to moderated newsgroups to the correct moderator? In that case, every time a new moderated group is created and/or changes its moderator, it would be necessary to change this file.

    A: Fortunately not! But also see below.

    First of all, the default configuration says, "The moderator for is". The good people at UUNET keep mail aliases for all the moderated newsgroups so that as moderators come and go, they will always forward to the correct person.

    A: But it also wouldn't be bad ...

    The default entry could/should be changed to:


    as this points to several servers around the world which have the records. This is good to balance the load on the servers. See the MX entry to see what servers there are. Be careful with that change, as the response packet is too large for a UDP response and the nameserver has to switch to TCP connections in order to get the response; unfortunately not all nameservers are able to do this.

    Refer to the "How to Construct the Mailpaths File" FAQ for an explanation of the moderation mechanism. That article explains the 'mailpaths' file from C News, which is similar in nature to INN's 'moderators' file, although with a different syntax.

    The file 'moderators' could be modified, though, according to that article. For example, there are other sites that do what UUNET does, and they might be closer to you.

    Also, you might want to take a look at the inn.conf(5) man page to read the 'moderatormailer' parameter description.

    See also: 6.7, 5.32


    Subject: (6.9) Listing every article

    People often ask for a way to list every file in the newsspool. There are a couple of ways to do this. They work for INN as well as C News:

    1. Here's the fastest way. However, it only lists the files that are actually in the history file and if an article is crossposted it only gets listed once:

    	. /usr/lib/news/innshellvars
    	cd ${SPOOL}
    	awk '(NF > 2){print $3}' < ${HISTORY} | tr . /

    Sorting the output will improve directory cache efficiency.
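A toy run of the same idea on a single fake history line (the message-id and group name are invented): field 3 holds the group-name/article-number, and tr turns the dots into directory separators.

```shell
# a fabricated history line: msg-id, date fields, then the article location
line='<1@example> 760000000~-~760000000 comp.sys.misc/1234'
echo "$line" | awk '(NF > 2){print $3}' | tr . /
# -> comp/sys/misc/1234
```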

    2. This lists any article file no matter how many links you have, etc. and even if it is not listed in the history file:

    	cd /var/spool/news
    	gfind . -regex '.*/[0-9][0-9]*$' -print

    NOTE: GNU find will execute this much faster than the "find" that comes with most versions of Unix (including SunOS).

    3. If you need to do something fancier than what find can do, consider using perl's find2perl program. Given a find command line, find2perl will output the perl code to do the same thing. You can then modify the output to do what you want. For example:

    	find2perl . -mtime +30 -name '[0-9][0-9]*$' -exec '/bin/rm {}'

    outputs a perl script that deletes any article that is over 30 days old (except that the regular expression in the output is wrong; change it to:

    	/^[0-9]+$/ &&

    and it should work just fine.

    4. Another efficient way to scan all articles in the spool, including those that for some reason aren't in the history file, is to read the active file for a list of newsgroup names, and chdir() to each directory to scan for files. Remember *not* to do a recursive treewalk for each directory.
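Approach 4 can be sketched in shell as follows. All paths and names here are invented for the demonstration: read group names from an active file, map each to its spool directory, and list the numeric article files in it, with no recursive treewalk.

```shell
# list_articles ACTIVE SPOOL: print group-dir/article-number for every
# numeric file in each group's spool directory
list_articles() {
    awk '{print $1}' "$1" | tr . / | while read -r dir; do
        [ -d "$2/$dir" ] || continue
        for f in "$2/$dir"/*; do
            base=${f##*/}
            case $base in
                *[!0-9]*) ;;            # skip non-numeric names
                *) echo "$dir/$base" ;;
            esac
        done
    done
}

# toy run on a throwaway mini-spool
tmp=$(mktemp -d)
mkdir -p "$tmp/spool/comp/sys"
touch "$tmp/spool/comp/sys/1" "$tmp/spool/comp/sys/2"
echo "comp.sys 0000000002 0000000001 y" > "$tmp/active"
list_articles "$tmp/active" "$tmp/spool"
```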


    Subject: (6.10) What's a good setup for expire.ctl?

    (from Barry Bouwsma)

    Well, rather than generalize from example expire.ctl files, maybe it would be better to learn what your aim is, and then we can give tips on what to tune/add to reach your goal.

    Are you trying to keep your history file small? Then cut your /remember/ time, but the tradeoff is that you won't remember that you've seen old news, and you'll just waste bandwidth transferring it additional times. A value of 14 has been the default for a while, while a lot of people are using 7 or less now. Still, bandwidth is important where I am now, so that our history file has more than ten million entries, yet there is still enough old recycled news out there that I need to add a further 2 million entries just so as to be able to refuse those more than ten thousand daily rejected articles.

    Is disk space a concern for you? It isn't? Can I work for you? Remember that the daily news volume I'm seeing has been averaging more than 10 gigabytes daily, typically 12 or so. However, of this, the Big 8 and selected other hierarchies consume only some 300 megabytes daily. So if you want to preserve disk space, you're going to want to identify the groups that take up the most space, and shorten their expire times. The biggest offenders are the warez groups, the binaries, and news.lists.filters. The periodic misplaced binaries report that can be unearthed in is excellent for identifying hidden binaries groups.

    Okay, you've added terabytes of disk space to keep your lusers happy devouring your local bandwidth with their PR0N, rather than wasting your connectivity to the Real World by browsing those sites which are trying to flood the free pr0n groups of their vitality. A good investment. Our flock of readers pulls far more from our reader machines in traffic than the incoming and outgoing full feed traffic volume, so we keep binariez groups longer than many other sites. The summary of readership will also show what groups are popular, and you can placate your readers by tuning the retention time on those groups so they don't get restless and head off to other bandwidth suckers.

    So you've got* for a week, everything else for two weeks, but your machine seems to be groaning, even though there's free disk space? You also need to prune certain groups which just get too many posts and drag down filesystem performance. For most people, even a one-day retention time of control.cancel is too much, and they've had to resort to wiping that directory every ten minutes or so. It helps to identify other groups which get large numbers of posts -- the jobs wastelands are prime candidates, and I identified a couple dozen others last weekend, which, by simply cutting the number of articles in half, knocked tens of ms off our article write times, and I added another day to the more popular hierarchies for good measure. Nobody's complained. Yet.

    Of course, not everybody has to worry about this, if you're running a Real Filesystem such as SGI's XFS, which copes well with large numbers of files in a directory (I didn't even notice a performance improvement some time back when going from three days control.cancel down to one), or alternative spool methods like CNFS or multiple articles in one file.
    But really, how many people will appreciate a newsgroup with 80,000 unread articles?

    So the best things we can give you are pointers to particular groups or hierarchies which, when tweaked, will make the biggest change for you.
    Need space? Prune the warez groups and
    Need to improve file write times? Prune control.cancel,*, alt.binaries.sounds.mp3, and others.

    Then you won't have problems keeping comp.* for a month (minus the advocacy groups), and you can lengthen the time on the useful sex binariez groupz (don't forget to run an aggressive spam filter, then you might actually be able to find useful content in these groups) and your lusers will gratefully suck down all your local bandwidth.

    These are just starter ideas; when you know what your goal is, then you can achieve that from specific figures people offer, but the thing is to know what you need to tune. du and ls | wc help find troublespots.
    Perusing on-line stats pages for hierarchy volume and numbers of posts also gives useful food for thought. A nice place to start is the list of links at
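Pulling those tips together, here is a sketch of what such an expire.ctl might look like. All patterns and day counts are illustrative only, not recommendations; check expire.ctl(5) for the exact field meanings (pattern:modflag:keep:default:purge):

```
## keep history entries for expired articles 7 days from arrival
/remember/:7
## default for all groups: keep at least 1 day, 10 by default, purge at 14
*:A:1:10:14
## favored hierarchies kept longer
comp.*:A:3:30:30
## high-volume problem groups pruned hard
control.cancel:A:1:1:1
*.binaries.*:A:1:2:3
```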


    Subject: (6.11) How does /remember/ in expire.ctl work?

    (from Jerry Aguirre) The /remember/ time specifies how long a history-only entry (i.e. an entry that is only in the history file because the article itself is already gone) will be kept, measured from the arrival time. If the expire value is 14 days and the /remember/ time is 5, then entries will be kept for 15 days.
    This is because /remember/ only applies to entries after their articles have expired. It cannot force a history entry to be removed before the article is expired. The extra day beyond expire might be considered a harmless bug in expire.
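Worked out as arithmetic, the example above comes to (a sketch of one reading of the rule, with the one extra day treated as the harmless quirk just mentioned):

```shell
expire=14
remember=5
# /remember/ only kicks in after the article itself has expired, so with
# remember < expire the history entry lives for the expire time plus one day
lifetime=$((expire + 1))
echo "$lifetime"
# -> 15
```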


    Subject: (6.12) What does the output of ``expire -v1'' mean?

    (Based on a submission from Chris Johnson <>)

    Running expire with -v1 option produces output like:

            Removed approximately    764913k
            Article lines processed  1044872
            Articles retained         872883
            Entries expired           171989
            Files unlinked            239657
            Old entries dropped            0
            Old entries retained      103038

    "Article lines processed": the number of lines in the history file
    	that were read by the expire program (it reads through the entire
    	history file when it runs)

    "Articles retained": the lines left in the history file after some of the
    	lines (entries) are dropped.

    "Entries expired": the number of entries/articles listed in the history
    	file that were deemed old enough to be deleted; this equals
    	"Article lines processed" - "Articles retained"

    "Files unlinked": the number of files deleted from the file system where
    	1 file equals 1 article; however, this number can be much higher than
    	"Entries expired" because a single entry can be posted to multiple groups
    	and you get 1 file for each group it is posted to

    "Old entries dropped": lines deleted from the history file that are only
    	present because of the value of /remember/ in expire.ctl

    "Old entries retained": lines left in the history file for articles that
    	have already expired

    (Note that when running expire with the -t option, articles are not deleted from the filesystem and the output changes slightly to: "Would remove approximately 103623k".) Running expire with the -z option also does not remove files, but writes a list of files to be removed (see the entries about "delayrm" and "fastrm" in this FAQ). See also the manpage for expire ...
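As a cross-check of the figures above: "Articles retained" plus "Entries expired" should add back up to "Article lines processed".

```shell
# the three counts from the sample expire -v1 output above
processed=1044872
retained=872883
expired=171989
echo $((retained + expired))
# -> 1044872
```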

            HOW DO I... (Big changes you can make to the system)


    Subject: (6.13) How do I set up a delayed IHAVE/SENDME over NNTP?

    Christophe Wolfhugel <> writes:

    Having some of your NNTP newsfeeds delayed by a fixed amount of time is a good way to reduce your bandwidth requirements, or a good way to set up a backup feed. By including a Wt flag in your newsfeeds file, INN will insert timestamp entries in that batchfile (the same works for channels or exploders). This timestamp can be used to implement delayed ihave/sendme processing. INN's senders (like innxmit) do not use that data yet. However, NNTPLINK does support this delayed IHAVE/SENDME mechanism since release 3.3 (NNTPLINK can be found on


    The syntax that you would use in your newsfeeds file would be:


    and run this command now and then:

    	nntplink -i batchfile -y 300 -b site

    The delayed IHAVE/SENDME is expected to allow bandwidth savings in situations where all sites use nntplink in following topology:

    	Your site -- 64k -----------+-----------  Site 1
    	                            |               |
    	                            |              2mb
    	                            |               |
    	                            +------------ Site 2

       Site 1 and 2 are in the same metropolitan area, you feed them both.
With the standard nntplink layout, you generally send all articles twice, which is a waste even with the 2 Mb link between Site 1 and Site 2; and even if Site 1 and Site 2 nntplink to each other, you're faster.

       The delayed link would be used between your site and Site 2.  A 2 or
       3 minute delay allows Site 1 to feed Site 2 before you, and in case
       of a Site 1 outage the backup starts nearly immediately.

       Reasonable delays are still kept as You -> 1 -> 2 should take less
       than one minute (or just 300 ms disk to disk if using nntplink -i ? :)).

    Experience seems to show that a 2 to 3 minute delay is a reasonable choice.



    Subject: (6.14) I want compressed news, but do not have uucp

    There is an extension to the NNTP protocol called XBatch. XBatch lets you tunnel binary data through an NNTP connection. The batches will be put in a separate directory on the receiving host, where they can then be fed into rnews.

    Before trying xbatch, make sure that you can get news via nntp!!!

    INN 1.5 has xbatch built in, so you can just read the innxbatch(8) manual page.

    You can check that a server accepts xbatch (from a host which is in your hosts.nntp):

    	132 % telnet nntp
    	xbatch 3			<-- you type
    	339 result code
    	abc				<-- you type
    	239 successfully accepted


    Subject: (6.15) Can I use gzip with INN?

    [this was written with the help of Michael Brouwer <>]

    There are three things that can be affected by using gzip: compression of old logs, compressing batches to send out, and decompressing batches that come in.

    With INN 1.4 all you need to do is change two lines in to something like this:

    	COMPRESS	/usr/local/bin/gzip
    	DOTZ		.gz

    If you rebuild INN with these options set, all logs will be gzipped, and rnews will use gzip to decompress news.

    gzip will automatically and transparently decompress UNIX Compress, SCO UNIX Compress (I'm told it's 99% compatible with UNIX Compress), Pack, and gzip. Therefore, you can now receive batches compressed with any of the above listed formats. Let's just say your site now has "a universal decompresser".

    It has been reported that if you hardlink gzip to be zcat, and make sure that it is the zcat that INN uses, you can get the "universal decompresser" without having to use gzip for your logs. (Though, gzip for your logs is a big win, so why make trouble for yourself?)

    `send-uucp' will still use compress for outgoing batches, so the sites you feed won't suddenly start getting data they don't understand.

    Before you can send gzipped batches, you should make sure that the sites that you feed have made the above changes so that they have the "universal decompresser" too.

    If you edit send-uucp to use gzip instead of compress for certain hosts (see the example of using compress -b12 for the host esca in send-uucp), outgoing batches to those hosts will be gzipped.

    If you use sendbatch, you will have to edit the file so that COMPRESS is set to "gzip" and COMPFLAGS is set to "-9vc".
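A toy demonstration of the "universal decompresser" idea (the batch contents and filenames here are fake): a batch packed with gzip -9 unpacks with the same gzip -dc command that also handles compress-format data.

```shell
# fabricate a miniature rnews batch and round-trip it through gzip
tmp=$(mktemp -d)
printf '#! rnews 5\nhello' > "$tmp/batch"
gzip -9c "$tmp/batch" > "$tmp/batch.gz"
gzip -dc "$tmp/batch.gz" | head -1
# -> #! rnews 5
```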


    Subject: (6.16) What do I do if /var/spool/news is split over multiple partitions?


    First of all, you can do this by either mounting a filesystem at /var/spool/news/comp (for example) or by mounting a filesystem anywhere and making /var/spool/news/comp a symbolic link to the new partition.

    Articles will be written as normal, but cross-posts have to be handled specially now. Usually INN handles crossposts by writing the article to the first newsgroup, and then creating hard links to all the other places where the article should appear. Hard links do not take up additional disk space (except making your directories longer). Hard links also have the advantage that the file data doesn't get deleted until the last hard link is gone (and they can be deleted in any order). Therefore, you can expire each newsgroup at a different rate, but the file data won't delete until it is expired from the last newsgroup.
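A toy demonstration of that hard-link behaviour (the article numbers are invented): the data survives until the last name pointing at it is removed.

```shell
tmp=$(mktemp -d)
cd "$tmp"
echo article > 1234        # article stored once under the first newsgroup
ln 1234 5678               # crosspost: a second name, no extra data
rm 1234                    # expire removes the first group's name
cat 5678                   # -> article   (data still there)
```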

    The problem is that two hard linked files must both be on the same filesystem (partition).

    When INN sees that it can not make a hard link (because an article is cross-posted across two partitions) it will try to make a symbolic link. If your system can not do symbolic links, set HAVE_SYMLINK to DONT in your file. This will make INN write a second (or third, etc.) copy of the file instead. (NOTE: INN 1.4 doesn't make the extra files.)

    Anyway, even though INN will automatically create symbolic links, you have to give expire the "-l" flag so that it will know to modify its behavior. Suppose that a message is posted to and alt.cameras and suppose that expires more quickly than the alt group. If this happens, then you will be left with a dangling symlink. The -l flag prevents this from happening by not removing the file from until alt.cameras expire time permits it to be deleted.

    To inform expire that your spool is split across multiple partitions:

    In news.daily, change:

    to read
    	EXPIREFLAGS="-v1 -l"

    In expirerm, change:

    	RMPROC="fastrm -e -u -s ${SPOOL}"
    to read
    	RMPROC="fastrm -e -s ${SPOOL}"

    Now edit innwatch.ctl so that it checks all the spool disks, not just ".". See the lines with "No space (spool)". Also edit innshellvars and change the INNDF variable to reflect the innwatch.ctl changes.

    Lastly, edit innstat (the line with the "df") so that all spool disks are included. After that, you're done!

    If you ever need to run "makehistory" you should pay attention to the caveat in makehistory(8) (NB: This man page is called "news-recovery" in releases before INN 1.5):


    Here is an example of moving /var/spool/news/rec to its own partition:

    	(mount the new disk onto /mnt)
    	cd /var/spool/news/rec
    	tar cf - . | ( cd /mnt && tar xpvf - )
    	If you are confident you did it right, "rm -rf /var/spool/news/rec"
    	then "mkdir /var/spool/news/rec".
    	umount /mnt
    	mount /var/spool/news/rec

    If you are moving >50% of the spool, you might use dump instead of tar:

    	dump 0f - /var/spool/news | ( cd /mnt && restore xf - rec)
    But try it out first to see if it is really faster - some people have had much better success with a tar pipe (as above) using GNU tar (10 times faster). If you don't mind losing the articles, just deleting them would be fastest:
    	cd /var/spool/news
    	mv rec rec.o
    	mkdir rec
    	mount /dev/newdisk /var/spool/news/rec
    	rm -rf rec.o

    Remember: If you screw up the /etc/fstab, SunOS and many other UNIXs won't boot. fstab can't have any blank lines in many UNIXs either.
    Double check the file after you modify it.


    Subject: (6.17) Sun Online Disk Suite for news spool?

    Another way under SunOS 4.1.[34] to avoid using multiple partitions is to use the Sun Online Disk Suite. Several sites use this and have spool capacities up to 12 GB. It has been reported that using a stripe size of 1 cylinder gives the best performance for the article filesystem.
    Chris Schmidt <> elaborates more:

      "You add several physical volumes to get a logical volume. We have 
    a meta partition made out of three 2GB disks with one partition each.
    df shows:
    Filesystem            kbytes    used   avail capacity  Mounted on
    /dev/md1a            5878350 4831490  752943    87%    /EUnet/news/spool

    With the Online Disk Suite you also can do striping to balance the load among the used disks: the first cylinder is on the first disk, the second on the second one, the third on the third one and the fourth on the first disk again .."
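The round-robin striping just described can be sketched as a tiny calculation (purely illustrative): cylinder i of the metapartition lands on disk (i mod number-of-disks).

```shell
ndisks=3
for cyl in 0 1 2 3; do
    echo "cylinder $cyl -> disk $((cyl % ndisks))"
done
# cylinder 3 wraps back around to disk 0
```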


    Subject: (6.18) Add local newsgroups?

    Q: Does anyone have a cookbook example on how to create a local news group?

    These are the steps ..

    1) Make sure your innd is running
    2) Add the group with: ctlinnd newgroup
    3) Add entries to newsfeeds to restrict the local groups to your
    4) Add a descriptive entry to newsgroups
    5) Ready :)

    Please consider that "local" is a very common name for local groups, so if a user crossposts to local.test and misc.test the article will show up in every local.test all over the world. So please choose a 'better' name.


    Subject: (6.19) Archiving expired articles

    In <2hmomh$> writes:

    >What options do I have in INN for archiving local newsgroups?
    >Any help would be appreciated. Any cookbook examples would also help.

    See doc/archive.8. You could also put "never:never:never" in your expire.ctl file.

    Here's a cookbook example of an archive feed:

    	# Feed all moderated source postings to an archiver
    		:Tc,Nm,Wn:/usr/local/bin/archive -f -i /usr/spool/news.archive/INDEX

    Ulf Kieber <> writes:
    The INN 1.4 newsfeeds(5) man page shows how to set up a /program/ feed for archive. The "archive" program currently does NOT support this method, i.e. do not use Tp in "newsfeeds" for an archive feed.

    Even if "archive" supported being used as a program feed, you would not want to use it as such if you intended to use the ``-i'' flag, as archive does not do any file locking on its index file. The index file might get corrupted by multiple concurrently running instances of archive, as may happen with a program feed.


    Subject: (6.20) How do I restrict access on certain newsgroups (like

    >If I were running a news server, and some of my users complained that they
    >didn't want their kids being able to access some of the newsgroups, would it
    >be possible to block access to specific newsgroups on a per-user basis?

    >I'm not asking if it's easy, just _possible_.

    If they are not using NNTP for reading, you can make a /etc/group entry for a group called something special, like "horny" and give only users in group horny access to read that directory:

    	chown news /var/spool/news/alt/sex
    	chgrp horny /var/spool/news/alt/sex
    	chmod 750 /var/spool/news/alt/sex
    	chmod 750 /var/spool/news/over.view/alt/sex	# your NOV data
    	chmod 770 /var/spool/news/in.coming
    	chmod 770 /var/spool/news/out.going

    Now only people in the group "horny" can read that newsgroup. Everyone can subscribe to it, but only horny people can read it. innd (which runs as "news") will still be able to do its business.

    INN has an authentication scheme called authinfo for use with NNTP. The user must supply a name and a password. If they match an entry in nnrp.access, then the user may read the groups specified in that entry. An example entry for nnrp.access:

    * P:::*,!
    :R P:hwr:XXX:*

    Here users from hosts * may read and post in all groups besides If a user authenticates as user hwr with password XXX, then he or she might also read
    In order to be able to authenticate as user ``hwr'' in the above example, the host where this ``hwr'' connects from must also have read rights. So this

    :R P:hwr:XXX:*

    as the only entry in nnrp.access won't work, but the following will work:

    :R P:hwr:XXX:*

    Note that those 'password entries' need to be last in nnrp.access.
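For reference, each nnrp.access line has the form host:permissions:user:password:groups. The following sketch (with made-up entries) simply splits the fields so you can see what matches what:

```shell
# Split hypothetical nnrp.access entries into their fields.
# Field 4 (the password) is deliberately left out of the output.
printf '%s\n' '*:RP:::*,!alt.sex*' '*:RP:hwr:XXX:*' |
awk -F: '{ printf "host=%s perms=%s user=%s groups=%s\n", $1, $2, $3, $5 }'
# host=* perms=RP user= groups=*,!alt.sex*
# host=* perms=RP user=hwr groups=*
```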
There is a bug in INN 1.4 that allows users to post to such a protected group if they know its name, even if they can't read it. nnrp.access-auth.patch (on the usual patch site) cures this.

If the newsreader software doesn't support this, you can still restrict access on a per-host basis: to read a specific group you must then be on a specific machine (but everybody on that machine can read the group).

In 1.5 there is a better protocol (authinfo generic) for doing this, and it should gain better acceptance than the current protocol.
Also in 1.5 you can use entries from the system password database with the following entry:

	*:RP:+::*

To get authentication with Netscape to work, you need a slightly different approach: Netscape (and some other newsreaders) don't send authentication info on startup ("active authentication"), but only when the server requests it by sending a "480 Authentication required for command" reply ("passive authentication"). An entry like the following will do this:

	{snert,tritta}*:RP:user1:pass1:*,!alt.sex*
	{snert,tritta}*:RP:user2:pass2:*
	tritta*:RP:user3:pass3:*,!ka.test*

Here all users (in this case allowed only from hosts matching {snert,tritta}) have to authenticate. If they authenticate as ``user2'', they can read and post all groups; if they authenticate as user ``test'', they can only read the groups allowed by the matching entry; and if they authenticate as ``user1'', they can read all groups except the ones excluded in that entry. Note that in the above example, a user coming from some other host will nevertheless be able to authenticate as ``user3'', even though that entry is marked for host tritta: once an ``authinfo {user|pass}'' command is sent to the server, the host is no longer checked, and every valid combination of user and pass will authenticate. Password security is therefore as important here as in the normal password database. Note also that passive authentication will only take place if there is a hostname match with the security fields filled in.

If authentication is needed for a protected/secure newsgroup in an environment where no authentication is required for the other newsgroups, and users access the news server from many different hosts (i.e. dial-up), then there must be a hostname entry to force passive news agents/clients to authenticate. This may, however, result in every user having to authenticate for ALL newsgroups, even when they never attempt to access the secured newsgroup; in most cases this is accomplished with a wildcard hostname entry. For those who now ask how they can go directly to a newsgroup that needs authentication: use <news://user:pass@server/>

    Many thanks to Jim Dutton <> for his valuable comments.

    Go to the table of contents

    Subject: (6.21) INN on one machine, UUCP modem on a different one

Say you have two machines named "newsy" and "modemhead": "newsy" runs INN, but only "modemhead" has any modems.

A quick overview: the build configuration has a variable called "RNEWSLOCALCONNECT".
If it is set to "DO", "rnews" expects to be running on the same machine as "innd". On the other hand, if "RNEWSLOCALCONNECT" is set to "DONT", then "rnews" will connect to the machine listed in "inn.conf".
    Sending batches is a little more complicated.

Receiving batches on modemhead: make sure "RNEWSLOCALCONNECT" is set to "DONT", recompile, and copy /bin/rnews and /usr/lib/news/inn.conf to modemhead. The unbatching will be done on modemhead, but the articles will be sent to newsy; it works like magic. When /bin/rnews runs, it opens an NNTP connection to newsy and feeds the batch (one article at a time) to newsy, which thinks it's just getting a regular NNTP feed (which means modemhead has to be listed in hosts.nntp). If newsy and modemhead are different platforms (e.g. Ultrix vs. SunOS), you can use the MakeRnews script (mentioned elsewhere in this FAQ) to generate just rnews for the modemhead machine.

Sending batches via modemhead: The "sendbatch" program calls $(UUX).
Change ${UUX} to something like "rsh modemhead uux" instead of "uux".
You'll have to do a little hacking on sendbatch; for example, the part that checks whether the queue is full might have to be rewritten.
Anyway... now the batches will be generated and sent via modemhead's UUCP system.

    Pretty neat, eh?

    Other advice:

I set UUX to 'rsh uucphost uux' (note: no pipe [|]).
Also, we have no 'uuq' command, but even if we did, it would have returned bogus info, as $SITE is not known to UUCP on newshost.
Thus I created a stupid 'uuq' that does 'echo 0 0 0 0 0 0 0' to satisfy the awk script. However, we have no way to monitor queue length (though it's of little importance to us, as we only have 3 feeds and they are partial).
Finally, the /etc/passwd entry for 'news' on 'uucphost' MUST list /bin/sh; /bin/csh results in 'rnews: event not found', and escaping the '!' inside sendbatch had no effect.
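The stupid 'uuq' described above can literally be a two-line script (a sketch; the seven zeroes exist only to satisfy sendbatch's awk parsing):

```shell
#!/bin/sh
# Stub 'uuq': always report an empty queue so sendbatch's awk check passes.
echo 0 0 0 0 0 0 0
```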

    Go to the table of contents

    Subject: (6.22) Setting up proxy-nntp to talk through a firewall


One option is to look at backends/rcompressed.c in the INN distribution.

    Go to the table of contents

    Subject: (6.23) How do I set up inpaths with INN?

    inpaths should work just fine with INN as it ships. However, you can make it run faster by using the following shell script. Edit it to your tastes. It replaces the long "(cd /var/spool/news ; /usr/local/bin/gfind . -type f -print | /usr/lib/news/local/inpaths sdl /usr/ucb/mail admin," which people usually use.

	. /usr/lib/news/innshellvars
	cd ${SPOOL}
	awk '(NF > 2){print $3}' < ${HISTORY} | tr . / | sort | \
		inpaths `innconfval pathhost` | \
		${MAILCMD} newsmaster,

If the inpaths people would include this information in the README, I could delete it from this FAQ.
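To see what the awk/tr stage actually hands to inpaths, here is the same transformation applied to a couple of made-up history lines (the message-IDs, dates, and group names are hypothetical; real history lines are tab-separated with the article location in the third field):

```shell
# The third field looks like group.name/artnum; tr turns the group's
# dots into slashes so the result is a spool-relative path, then sort.
printf '<1@example>\t123~-~456\talt.test/1234\n<2@example>\t123~-~456\tcomp.lang.c/99\n' |
awk '(NF > 2){print $3}' | tr . / | sort
# alt/test/1234
# comp/lang/c/99
```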

    Go to the table of contents

Subject: (6.24) File different types of control messages in different groups?

    If you want to keep newgroup messages longer and expire cancel messages after half a day you can do the following:

    Create the groups control.newgroup and control.cancel

	ctlinnd newgroup control.newgroup
	ctlinnd newgroup control.cancel

    Add the following to expire.ctl:

	control.*:A:1:2:4
	control.sendsys:A:10:15:21
	control.newgroup:A:10:15:21

so all control messages typically expire after 2 days, but sendsys and newgroup messages are normally kept for 15 days.
You should also change newsfeeds appropriately to reflect that control is now both a group and a hierarchy.
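For reference, each expire.ctl line above has the form pattern:modflag:keep:default:purge (times in days). A quick sketch that splits the newgroup entry makes the fields explicit:

```shell
# Split one expire.ctl entry into its five fields.
echo 'control.newgroup:A:10:15:21' |
awk -F: '{ printf "pattern=%s flag=%s keep=%s default=%s purge=%s\n", $1, $2, $3, $4, $5 }'
# pattern=control.newgroup flag=A keep=10 default=15 purge=21
```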

    Go to the table of contents

    Subject: (6.25) Use more than ~100 Feeds on SunOS 4.1 ?

SunOS 4.1 normally has a limit of 256 file descriptors per process, but unfortunately a bug in stdio (the use of a _signed_ char) limits you to 128 file descriptors.
One way around this is to use an exploder feed (like buffchan).
The other is:

There is a stdio replacement called sfio. Just compile it as indicated in the package. After that, you must tweak the build configuration a bit to use sfio:

	DEFS		-I../include -I/usr/local/include/sfio
	CC		gcc
	CFLAGS		$(DEFS) -O2
	LDFLAGS		(empty)
	LIBS		-L/usr/local/lib/sfio -lsfio
	VAR_STYLE	STDARGS	(important! sfio doesn't like varargs if compiled ANSI)
	EXITVAL		volatile void
	_EXITVAL	volatile void

Then just recompile INN and go for it.

If you need even more than 256 descriptors, you can use SunDBE (the Sun Database Excelerator), which raises the limit from 256 to 1024.
Thanks to Christopher Davis <> for this tip.

    Go to the table of contents

    Subject: (6.26) Speed up NNTP Transfers ("Streaming NNTP")

    Normal NNTP uses the following scheme to transfer articles:

    Sender                                  Receiver

      --->  IHAVE <some message>
                                            <---  OK, send it to me
      --->  <sends the actual message>
                                            <---  says 'this was ok'

This procedure costs 2*RTT per article (RTT = round-trip time) plus the time for the actual article transfer. Jerry Aguirre has rewritten the NNTP code so that it sends a list of message-IDs to the remote, which checks them and returns, for each one, whether it should be sent. With each article sent, streaming NNTP also sends a new message-ID to check, so the flow of news keeps streaming.
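As a back-of-the-envelope illustration (the numbers are invented: a 600 ms satellite RTT and a batch of 1000 articles):

```shell
# Classic IHAVE pays roughly 2 RTTs of handshake latency per article;
# streaming overlaps the checks with the transfers and hides most of it.
rtt_ms=600
articles=1000
overhead_s=$((2 * rtt_ms * articles / 1000))
echo "classic IHAVE handshake overhead: ${overhead_s} seconds"
# classic IHAVE handshake overhead: 1200 seconds
```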

Advantages of streaming NNTP:
	- fast even on lines with a high RTT (e.g. satellite links)
	- faster than normal NNTP
	- compatible, since innxmit automatically falls back to normal NNTP

Disadvantages:
	- INN becomes more compute-intensive
	- streaming NNTP can fill a 64k line so completely that interactive work (such as telnet) over the same link becomes a real pain

    Go to the table of contents

    Subject: (6.27) I don't want all those reject messages from rnews in syslog

rnews logs those rejects via syslog. The level is determined at compile time by what you set in the build configuration:

	#  Informational notice, usually not worth caring about.
	### =()<L_NOTICE               @<L_NOTICE>@>()=
	L_NOTICE                LOG_WARNING

So in this case you need to tell your syslogd to log only messages above level warning.
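For example, a syslog.conf line along these lines (the facility and file path here are assumptions; check which facility your INN was compiled to log under) keeps only the more serious news messages:

```
# Record news messages of severity err and above; the LOG_WARNING
# reject notices from rnews then no longer reach the log.
news.err					/var/log/news/news.err
```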

    BUGS IN 1.5

    Go to the table of contents

    Subject: (6.28): Security problems with 1.5


    	!! It is strongly recommended that anyone using a 1.5.x      !!
    	!! version of INN upgrade to 1.7.2. Especially if you aren't !!
    	!! running any version with security hole fixes in it.       !!

    -----> Security Notice 2

    (updated Fri Apr 4 06:54:30 PST 1997)

    A new security issue has come up that affects anyone using UCB Mail as the mailer defined in the variable _PATH_MAILCMD. A patch has been created that is for all versions of INN and is available at "". Note: The patch was originally released as security-patch.04, but has been regenerated as security-patch.05.

You should apply this even if you don't use UCB Mail. It is a patch to the same file (samples/parsecontrol) as the patches discussed below. If you are running a version of INN older than 1.5.1, then you must apply one of the patches discussed in Security Notice 1 (below) before you can apply this patch.

    Thanks to Doug Schales and Charles Palmer at IBM for bringing this to our attention.

The MD5 checksum for this patch file is available in security-patch.05.md5. The PGP signature is in security-patch.05.asc. (Both in the same place.)

    -----> Security Notice 1

    (updated Mon Mar 17 09:00:51 PST 1997)

    A security problem with all versions of INN through 1.5 has been found (fixing this was one of the inspirations for the 1.5.1 release). You want one of the following patch(1)-ready files, which can be downloaded from

    	security-patch.01 for release 1.5. 
    	security-patch.02 for release 1.4sec 
(but you really should upgrade to a newer version if you're still on 1.4sec).
    	security-patch.03 for releases 1.4unoff3, 1.4unoff4. 

    The MD5 checksums for the patch files are available in

    	security-patch.01.md5 security-patch.02.md5

    The PGP signatures for the patch files are available in

    	security-patch.01.asc security-patch.02.asc

    BUGS IN 1.4

    Go to the table of contents

    Subject: (6.29): 1.4 considered insecure, please upgrade.

    	!! WARNING: Various editions of INN 1.4 (1.4, 1.4sec, 1.4sec2,     !!
    	!! 1.4unoff[1-4]) are vulnerable to not only the security problems !!
    	!! listed below, but also to the ones listed in the above section  !!
    	!! on INN 1.5.  It is strongly recommended that you simply upgrade !!
    	!! to INN 1.7.2.                                                   !!

    UNOFFICIAL patches for INN 1.4 are available via anonymous FTP at

    The ones that are highly recommended are:

	1.4-to-1.4sec         -- Fixes a major security hole in INN 1.4.
	1.4sec-to-1.4sec2     -- Fixes another known security hole in INN 1.4.
	select-loop-bug.patch -- Under some circumstances innd can lose track
		of a file descriptor and end up sitting in a select()
		loop.  If your INN suddenly is using up tons of CPU
		time and not getting much done, install this UNOFFICIAL
		patch.  Some OSs are more susceptible to this bug.

    THERE ARE MANY MORE at that site, many add some useful features.

    There is a replacement for innwatch that is written in Perl. Get it from "". This directory is mirrored on

                                 BUGS IN 1.3 and 1.2
                     (Hey, it's 1997!  Upgrade already!)
    (No, REALLY. Upgrade. 1.2 and 1.3 are at this point so rare as to be considered unsupported and unsupportable. There may be many more bugs than listed below, but nobody is going to fix them.)

    Go to the table of contents

    Subject: (6.30) 7-bit encoded batches are not correctly processed. Why is this?

    Chris Schmidt <> replies:

The decode program that comes with INN up to version 1.3 is broken.
Because of that, the last article in a 7-bit encoded batch will not be decoded correctly (the last characters are screwed up). This is fixed in INN 1.4.

    Go to the table of contents

    Subject: (6.31) NOV (overchan) doesn't work well.

    Correct. The NOV support in 1.3 didn't have all the bugs worked out.
    Don't use NOV under INN 1.3. Better yet, upgrade to at least 1.4sec and get all the benefits!

    Go to the table of contents

    Subject: (6.32) Why doesn't nntpget work?

    The nntpget in INN 1.2 doesn't work. Period. Upgrade to the latest version of INN.

    Continue with Part 7...