Metadata file does not match checksum
Mon, 06/09/2008 - 23:27 — sandip
If you are getting the error "Metadata file does not match checksum", try running:
# yum clean metadata
`yum clean all` should also resolve the issue if cleaning the metadata alone does not.
Get a count of files/folder in a directory
Tue, 05/27/2008 - 11:33 — sandip
$ ls -A1 /path/to/folder | wc -l
This lists the files in a directory, including hidden files, in single-column format and pipes the output through wc for a line count.
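If any of the filenames might contain newlines, `ls | wc -l` can over-count; a minimal sketch of a more robust count, assuming GNU find:
$ find /path/to/folder -mindepth 1 -maxdepth 1 -printf '.' | wc -c
This prints one dot per directory entry (hidden files included) and counts characters instead of lines.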
Moving files around to include hidden files
Tue, 05/27/2008 - 11:07 — sandip
Oftentimes when moving files from one directory to another, specifically when dealing with web folders, I have missed the all-important hidden .htaccess files with just the usual `mv source/* destination` command.
Here's a one-liner that will include the hidden files too:
$ ls -A <source> | while IFS= read -r i; do mv <source>/"$i" <destination>; done
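Alternatively, in bash the dotglob option makes `*` match hidden files as well; a minimal sketch with placeholder source/destination paths:
$ shopt -s dotglob            # make * match hidden files too
$ mv <source>/* <destination>/
$ shopt -u dotglob            # restore the default globbing behaviour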
IP range to CIDR conversion
Thu, 05/15/2008 - 10:19 — sandip
I've often had to convert an IP range with netmask to CIDR notation. Below is a quick perl script to help with the conversion:
#!/usr/bin/perl -w
# range2cidr.pl
use Net::CIDR;
use Net::CIDR ':all';
if (@ARGV == 0) {
die "Usage Example: $0 192.168.0.0-192.168.255.255 \n";
}
print join("\n", Net::CIDR::range2cidr("$ARGV[0]")) . "\n";
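For example, assuming Net::CIDR is installed from CPAN and the script is saved as range2cidr.pl:
$ perl range2cidr.pl 192.168.0.0-192.168.255.255
192.168.0.0/16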
Track files uploaded via pure-ftpd
Mon, 05/12/2008 - 09:28 — sandip
Recently, I've had more than one occurrence of files being messed up due to bad uploads from users on a cpanel server running pure-ftpd.
Here is a simple one-liner to get a report of uploads:
/bin/grep pure-ftpd /var/log/messages | grep upload | grep -v <trusted ip address>
"trusted ip address" would typically be your own.
I put the above on a daily cron and keep an eye out for user uploads.
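A minimal sketch of such a daily cron job; the path, trusted IP and report recipient below are placeholders, not the exact setup from the post:
#!/bin/bash
# /etc/cron.daily/ftp-upload-report (hypothetical path)
# Mail a report of pure-ftpd uploads, excluding a trusted IP (placeholder value).
TRUSTED_IP="203.0.113.10"
/bin/grep pure-ftpd /var/log/messages | grep upload | grep -v "$TRUSTED_IP" \
    | mail -s "pure-ftpd upload report" root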
apache internal dummy connection
Sat, 05/10/2008 - 14:54 — sandip
I've noticed these in the httpd access log starting with Apache 2.2:
::1 - - [09/May/2008:14:53:29 -0400] "GET / HTTP/1.0" 200 5043 "-" "Apache (internal dummy connection)"
The apache server occasionally hits localhost to signal its children. See the apache wiki for more info.
"When Apache HTTP Server manages its child processes, it needs a way to wake up processes that are listening for new connections. To do this, it sends a simple HTTP request back to itself...
These requests are perfectly normal and you do not, in general, need to worry about them. They can simply be ignored."
Unfortunately, the homepage I host is a dynamic one, and this becomes very costly during busy times. I see a large number of those internal dummy connection requests during an apache graceful restart (SIGUSR1), and at the same time the cpu load on the Apache 2.2 server maxes out at nearly 100%. I do not see this cpu load during a graceful restart on apache 2.0 httpd servers.
With the below mod_rewrite rule in place, I was able to reduce the load by pointing requests whose HTTP_USER_AGENT contains "internal dummy connection" to an empty static html page.
RewriteEngine on
RewriteCond %{HTTP_USER_AGENT} ^.*internal\ dummy\ connection.*$ [NC]
RewriteRule ^/$ /blank.html [L]
Also, I removed logging of such requests via:
SetEnvIf Remote_Addr "::1" dontlog
CustomLog /var/log/httpd/access.log combined env=!dontlog
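A quick way to spot-check the rewrite, assuming curl is available on the server, is to send a request with the dummy user agent and confirm that the contents of /blank.html come back instead of the dynamic homepage:
$ curl -A "Apache (internal dummy connection)" http://localhost/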
apcupsd rpm rebuild on CentOS-5
Sat, 05/10/2008 - 13:42 — sandip
Apcupsd is a daemon for controlling APC UPSes. It can be used for power management and controlling most of APC's UPS models on Unix and Windows machines. Apcupsd works with most of APC's Smart-UPS models as well as most simple signalling models such as Back-UPS and BackUPS-Office. During a power failure, apcupsd will inform the users about the power failure and that a shutdown may occur. If power is not restored, a system shutdown will follow when the battery is exhausted, a timeout (seconds) expires, or runtime expires based on internal APC calculations determined by power consumption rates.
I kept getting a failure when rebuilding from the source rpm, which was resolved once the latex2html package was installed, even though the build never reported any dependency failure.
The required packages I had to install were: gd-devel, tetex, tetex-latex, glibc-devel, ghostscript, latex2html.
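A minimal sketch of installing those packages and rebuilding; the source rpm filename is a placeholder:
# yum install gd-devel tetex tetex-latex glibc-devel ghostscript latex2html
# rpmbuild --rebuild apcupsd-<version>.src.rpm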
Check service linked to libwrap / tcpwrapper
Wed, 05/07/2008 - 11:10 — sandip
In order to use hosts_access (hosts.allow/hosts.deny), a service needs to be compiled with tcpwrapper (tcpd) support, which can be checked easily with the commands below.
hosts_access is a great alternative to an iptables firewall, specifically if you are hosted on a VPS with limited resources for iptables rules.
# ldd `which sshd` | grep -i libwrap
or
# strings `which sshd` | grep -i libwrap
Both the commands should echo out libwrap.so.0 which would mean hosts_access can be used for service sshd.
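To run the same check across several daemons at once, something like the loop below works; the daemon names are only examples:
for svc in sshd vsftpd xinetd; do
    printf '%s: ' "$svc"
    ldd "$(which $svc)" 2>/dev/null | grep -q libwrap && echo "libwrap" || echo "no libwrap"
done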
To make sure you are still able to connect via ssh, add your IP to "/etc/hosts.allow". In the case below, I am allowing the full range of my local intranet (LAN).
# Allow localhost
ALL: 127.
# Allow LAN
sshd: 192.168.
Now to block ssh access to others, simply add the below lines to "/etc/hosts.deny".
# Block everyone else from SSH
sshd: ALL
Note: hosts.allow takes precedence over hosts.deny.
Issues with receiving mail on Plesk server
Tue, 04/22/2008 - 09:05 — sandip
I was not receiving mails from a particular email address. The MX records checked out fine. The mail server was not in any of the DNSBL lists I was subscribed to. There was nothing in the logs mentioning any emails coming in from the user. However, the logs did show a lot of relaylocks for the mail server's IP address.
Digging in some more, I found a similar issue discussed at theplanet forum, where the problem was caused by a conflict of timeouts, with auth packets being dropped by the sending mail server. So I adjusted the qmail timeout, which seemed to push the conversation between the MTAs forward, and the emails are now being accepted.
I changed the default timeout from 30 seconds to 15 seconds by editing /etc/inetd.conf and adding -t15 as below.
smtp stream tcp nowait.1000 root /var/qmail/bin/tcp-env tcp-env -t15 /usr/sbin/rblsmtpd -r bl.spamcop.net -r zen.spamhaus.org /var/qmail/bin/relaylock /var/qmail/bin/qmail-smtpd /var/qmail/bin/smtp_auth /var/qmail/bin/true /var/qmail/bin/cmd5checkpw /var/qmail/bin/true
smtps stream tcp nowait.1000 root /var/qmail/bin/tcp-env tcp-env -t15 /usr/sbin/rblsmtpd -r bl.spamcop.net -r zen.spamhaus.org /var/qmail/bin/relaylock /var/qmail/bin/qmail-smtpd /var/qmail/bin/smtp_auth /var/qmail/bin/true /var/qmail/bin/cmd5checkpw /var/qmail/bin/true
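For the new timeout to take effect, the superserver has to reread its configuration; a hedged example, assuming classic inetd is in use (adjust accordingly if the server runs xinetd instead):
# kill -HUP $(pidof inetd)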
Incremental snapshot backups via rsync and ssh
Fri, 04/04/2008 - 19:35 — sandip
In follow-up to the previous post, I am compiling this as a separate post, as this solution has been running very stably for a while with quite a few updates and changes...
I will be setting up a backup of a remote web host via rsync over ssh and creating snapshot-style backups on the local machine.
The backups are incremental: only the files that have changed are transferred, so very little bandwidth is used during the backup and it puts very little load on the server.
These are sliced backups, meaning that you get a full backup for each of the last 4 days and each of the last 4 weeks, so data can be restored from up to a month back.
Below is an example listing of backups you would see.
Mar 11 - daily.0
Mar 10 - daily.1
Mar 9 - daily.2
Mar 8 - daily.3
Mar 5 - weekly.0
Feb 27 - weekly.1
Feb 20 - weekly.2
Feb 13 - weekly.3
Each of those is a full snapshot for the particular day/week. The files are all hard-linked, so the whole set only requires about 2 to 3 times the space used on the server. The backups consist of web, database, email and some of the important server configuration files.
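A minimal sketch of the daily rotation behind such a layout, using rsync's --link-dest to hard-link unchanged files against the previous snapshot; the host, paths and layout below are placeholders, not the exact script from the follow-up:
#!/bin/bash
# Rotate daily snapshots and pull a new one over ssh (placeholder host/paths).
SRC="user@webhost.example.com:/var/www/"
DEST="/backup/webhost"

rm -rf "$DEST/daily.3"                      # drop the oldest daily snapshot
for i in 2 1 0; do                          # shift the remaining ones up
    [ -d "$DEST/daily.$i" ] && mv "$DEST/daily.$i" "$DEST/daily.$((i+1))"
done

# New snapshot: unchanged files are hard-linked against yesterday's copy,
# so only changed files use extra space and bandwidth.
rsync -a -e ssh --delete --link-dest="$DEST/daily.1" "$SRC" "$DEST/daily.0"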