Thursday, December 13, 2012

Nagios Server Installation


1) Install the prerequisites

yum install httpd php gcc glibc glibc-common gd gd-devel openssl openssl-devel

2) Create a new nagios user account and give it a password.

/usr/sbin/useradd -m nagios
passwd nagios

3) Create a new nagcmd group for allowing external commands to be submitted through the web interface. 

Add both the nagios user and the apache user to the group.

/usr/sbin/groupadd nagcmd
/usr/sbin/usermod -a -G nagcmd nagios
/usr/sbin/usermod -a -G nagcmd apache

4) Now go to http://www.nagios.org and download the files:

wget http://downloads.sourceforge.net/project/nagios/nagios-3.x/nagios-3.4.3/nagios-3.4.3.tar.gz
wget http://downloads.sourceforge.net/project/nagiosplug/nagiosplug/1.4.16/nagios-plugins-1.4.16.tar.gz

5) Compile and Install Nagios

tar zxvf nagios-3.4.3.tar.gz
cd nagios
./configure --with-command-group=nagcmd
make all
make install; make install-init; make install-config; make install-commandmode;

6) Customize Configuration

Edit the /usr/local/nagios/etc/objects/contacts.cfg config file with your favorite editor and change the email address associated with the nagiosadmin contact definition to the address you’d like to use for receiving alerts.

vim /usr/local/nagios/etc/objects/contacts.cfg
-----------------------------------------------

define contact{
        contact_name                    nagiosadmin             ; Short name of user
        use                             generic-contact         ; Inherit default values from generic-contact template (defined above)
        alias                           Nagios Admin            ; Full name of user

        email                           sankar.k@gmail.com ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******
        }


7) Configure the Web Interface

Install the Nagios web config file in the Apache conf.d directory.

make install-webconf

Create a nagiosadmin account for logging into the Nagios web interface. Remember the password you assign to this account – you’ll need it later.

htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

Restart Apache to make the new settings take effect.

service httpd restart

8) Compile and Install the Nagios Plugins

tar zxvf nagios-plugins-1.4.16.tar.gz
cd nagios-plugins-1.4.16
./configure --with-nagios-user=nagios --with-nagios-group=nagios
make
make install

9) Start Nagios

chkconfig --add nagios
chkconfig nagios on

Verify the sample Nagios configuration files.

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

If there are no errors, start Nagios.

service nagios start

10) Log in to the web interface using the username (nagiosadmin) and password you specified earlier.
From your internet browser, navigate to the following URL:

http://<your server name or IP>/nagios


Monitor Remote Linux Host using Nagios.
===================================

Follow the steps below to monitor a remote Linux host and the various services running on it.

 6 steps to install the Nagios plugins and NRPE on the remote host:

   1) Download Nagios Plugins and NRPE Add-on
   2) Create nagios account
   3) Install Nagios Plugins
   4) Install NRPE
   5) Setup NRPE to run as daemon
   6) Modify the /usr/local/nagios/etc/nrpe.cfg

 4 Configuration steps on the Nagios monitoring server to monitor remote host:

   1) Download NRPE Add-on
   2) Install check_nrpe
   3) Create host and service definition for remote host
   4) Restart the nagios service


Overview

  a) Nagios will execute the check_nrpe command on the Nagios server and request it to monitor disk usage on the remote host using the check_disk command.
  b) check_nrpe on the Nagios server will contact the NRPE daemon on the remote host and request it to execute check_disk there.
  c) The results of the check_disk command will be returned by the NRPE daemon to check_nrpe on the Nagios server.


Following flow summarizes the above explanation:

  Nagios Server (check_nrpe) -----> Remote host (NRPE daemon) -----> check_disk

  Nagios Server (check_nrpe) <----- Remote host (NRPE daemon) <----- check_disk (returns disk space usage)

Steps to install Nagios Plugins and NRPE on the remote host

1. Download Nagios Plugins and NRPE Add-on.

wget http://downloads.sourceforge.net/project/nagiosplug/nagiosplug/1.4.16/nagios-plugins-1.4.16.tar.gz
wget http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.13/nrpe-2.13.tar.gz

2. Create nagios account

useradd nagios
passwd nagios

3. Install nagios-plugin

tar zxvf nagios-plugins-1.4.16.tar.gz
cd nagios-plugins-1.4.16
export LDFLAGS=-ldl
./configure --with-nagios-user=nagios --with-nagios-group=nagios --enable-redhat-pthread-workaround
make
make install
chown nagios:nagios /usr/local/nagios
chown -R nagios:nagios /usr/local/nagios/libexec/

4. Install NRPE

tar zxvf nrpe-2.13.tar.gz
cd nrpe-2.13
./configure
make all
make install-plugin
make install-daemon
make install-daemon-config
make install-xinetd

5. Set up NRPE to run as a daemon (i.e. as part of xinetd):

 ==> Modify /etc/xinetd.d/nrpe to add the IP address of the Nagios monitoring server to the only_from directive. Note that there is a space between 127.0.0.1 and the Nagios monitoring server's IP address (in this example, the Nagios monitoring server's IP address is 192.168.80.70).


vim /etc/xinetd.d/nrpe
---------------------

only_from       = 127.0.0.1 192.168.80.70


 ==> Modify the /etc/services and add the following at the end of the file.

vim /etc/services
-----------------

nrpe         5666/tcp             # NRPE

 ==> Start the service

service xinetd restart

 ==> Verify whether NRPE is listening

netstat -at | grep nrpe
       tcp 0      0 *:nrpe *:*                         LISTEN


 ==> Verify that NRPE is functioning properly

[remotehost]# /usr/local/nagios/libexec/check_nrpe -H localhost
NRPE v2.13

6. Modify the /usr/local/nagios/etc/nrpe.cfg

The nrpe.cfg file located on the remote host contains the commands that are needed to check the services on the remote host. By default, nrpe.cfg comes with a few standard check commands as samples; check_users and check_load are shown below as examples.

command[check_users]=/usr/local/nagios/libexec/check_users -w 5 -c 10
command[check_load]=/usr/local/nagios/libexec/check_load -w 15,10,5 -c 30,25,20


In all the check commands, "-w" stands for "Warning" and "-c" stands for "Critical". For example, in the check_disk command below, if the available disk space drops to 20% or less, Nagios will send a warning message; if it drops to 10% or less, Nagios will send a critical message. Change the "-w" and "-c" values depending on your environment.

command[check_disk]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/hda1
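The threshold logic described above can be sketched as a tiny shell function (a hypothetical helper for illustration only; it is not part of Nagios):

```shell
# Hypothetical helper (illustration only, not part of Nagios): classify a
# free-space percentage against check_disk-style -w/-c thresholds.
check_threshold() {
    free=$1; warn=$2; crit=$3
    if [ "$free" -le "$crit" ]; then
        echo "CRITICAL"
    elif [ "$free" -le "$warn" ]; then
        echo "WARNING"
    else
        echo "OK"
    fi
}

check_threshold 25 20 10   # → OK       (free space above both thresholds)
check_threshold 15 20 10   # → WARNING  (at or below -w 20)
check_threshold 8 20 10    # → CRITICAL (at or below -c 10)
```

check_disk applies the same idea to the free-space percentage it reads from the filesystem.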


Note: You can execute any of the commands shown in nrpe.cfg on the command line of the remote host and see the results for yourself. For example, executing the check_disk command on the command line displayed the following:

[remotehost]#/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /dev/hda1
DISK CRITICAL - free space: / 6420 MB (10% inode=98%);| /=55032MB;51792;58266;0;64741


In the above example, since the free disk space on /dev/hda1 is only 10%, check_disk displays the CRITICAL message, which will be returned to the Nagios server.


Configuration steps on the Nagios monitoring server to monitor remote host

1. Download NRPE Add-on

wget http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.13/nrpe-2.13.tar.gz

2. Install check_nrpe on the nagios monitoring server

tar zxvf nrpe-2.13.tar.gz
cd nrpe-2.13
./configure
make all
make install-plugin

 ==> Verify that the Nagios monitoring server can talk to the remote host.

/usr/local/nagios/libexec/check_nrpe -H 192.168.80.129
NRPE v2.13

Note: 192.168.80.129 is the IP address of the remote host where NRPE and the Nagios plugins were installed, as explained in the section above.


3. Create host and service definition for remotehost

Create a new configuration file, /usr/local/nagios/etc/objects/remotehost.cfg, to hold the host and service definitions for this particular remote host. It is easiest to copy localhost.cfg to remotehost.cfg and modify it according to your needs.

Ex :

# vim /etc/hosts
   --------------
  192.168.80.129    slave.sankar.com slave

# vim /usr/local/nagios/etc/objects/commands.cfg
  ------------------------------------------------
 ### Add the lines below at the end of the file
 # check_nrpe command definition
 define command{
        command_name    check_nrpe
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -t 30 -c $ARG1$
        }

# cp localhost.cfg slave.cfg

# vim slave.cfg
  --------------
  define host{
        use                     linux-server            ; Name of host template to use
                                                        ; This host definition will inherit all variables that are defined
                                                        ; in (or inherited by) the linux-server host template definition.
        host_name               slave.sankar.com
        alias                   slave.sankar.com
        address                 192.168.80.129
        }

# Define a service to "ping" the remote machine

define service{
        use                             local-service         ; Name of service template to use
        host_name                       slave.sankar.com
        service_description             PING
        check_command                   check_ping!100.0,20%!500.0,60%
        }


Replace any remaining occurrences of localhost in slave.cfg (within vim):

:%s/localhost/slave.sankar.com/g
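If you prefer a non-interactive alternative to the vim substitution, sed does the same job. The sketch below operates on a throwaway /tmp copy for illustration; the real file is /usr/local/nagios/etc/objects/slave.cfg:

```shell
# Build a tiny stand-in config, then substitute, as the vim command would.
printf 'host_name    localhost\nalias        localhost\n' > /tmp/slave.cfg
sed -i 's/localhost/slave.sankar.com/g' /tmp/slave.cfg
cat /tmp/slave.cfg
```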

# vim /usr/local/nagios/etc/nagios.cfg
  ---------------------------------------

# Definitions for monitoring the local (Linux) host
cfg_file=/usr/local/nagios/etc/objects/localhost.cfg
cfg_file=/usr/local/nagios/etc/objects/slave.cfg


4. Restart the nagios service

Restart Nagios as shown below and log in to the Nagios web interface (http://nagios-server/nagios/) to verify the status of the remote Linux server that was added to Nagios for monitoring.

# service nagios reload

Enjoy!

Wednesday, December 12, 2012

Compiling nagios-plugins-1.4.16 throws an error


While compiling the Nagios plugins, you may get the error shown below.

==========================
 check_http.c:312:9: error: ‘ssl_version’ undeclared (first use in this function)
....
make[2]: *** [check_http.o] Error 1
make[2]: Leaving directory `/usr/local/src/nagios-plugins-1.4.16/plugins'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/usr/local/src/nagios-plugins-1.4.16'
make: *** [all] Error 2
========================

Fix :

yum install openssl openssl-devel

(Note: libssl-dev is the Debian/Ubuntu package name; on RHEL/CentOS the equivalent development package is openssl-devel.)

Monday, December 10, 2012

Command for linux background processing


Perfect for long running batch jobs on a remote server over unreliable connections or if you want to bring your laptop home (instead of keeping that terminal running).

1) screen
   ------------
Log in and run

screen -t title_of_your_choice

Do the same thing again if you want to create another window.
All the following screen commands are preceded by Ctrl-a (i.e. first press Ctrl-a, then the shortcut below):
  • 0-9 – switch to window by id
  • Ctrl-n – next window
  • Ctrl-a – previous window
  • d – quit screen (leaving it running)
  • k – kill window
The next day, log in as usual and attach to the screen session using

screen -x

2) nohup
    ----------

 The nohup utility allows a command or shell script to continue running in the background after you log out of the shell:

Log in and run

 nohup command-name &

Where,
  • command-name : the name of a command or shell script. You can pass arguments to the command or script.
  • & : nohup does not automatically put the command it runs in the background; you must do that explicitly, by ending the command line with an & symbol.
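Putting the pieces together, a minimal nohup run looks like this (a sketch; the /tmp paths are only for illustration):

```shell
# Write a short job script, run it under nohup in the background, and inspect
# its output. nohup's own informational messages go to /dev/null here so the
# log contains only the job's output.
cat > /tmp/job.sh <<'EOF'
sleep 1
echo "job finished"
EOF
nohup sh /tmp/job.sh > /tmp/job.log 2>/dev/null &
wait                 # wait here only so we can read the log; normally you would log out
cat /tmp/job.log     # → job finished
```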


Tuesday, December 4, 2012

Linux Patch Management with SpaceWalk


Prerequisites


# hostname rhn.sankar.com
# vim /etc/sysconfig/network
==========
HOSTNAME=rhn.sankar.com

Install the spacewalk-repo package with the commands below:

# rpm -Uvh http://yum.spacewalkproject.org/1.8/RHEL/5/x86_64/spacewalk-repo-1.8-4.el5.noarch.rpm
# rpm -Uvh http://yum.pgrpms.org/reporpms/8.4/pgdg-redhat-8.4-2.noarch.rpm

If you want to use the nightly builds, install the spacewalk-repo package based on your operating system (see above) and then enable the nightly repository:
 
# sed -i 's/enabled=0/enabled=1/' /etc/yum.repos.d/spacewalk-nightly.repo
# sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/spacewalk.repo

NOTE:
The nightly repo contains developer snapshots and is not suitable for production environments. In particular, you might not be able to upgrade from a nightly installation to the next release, especially with respect to the database schema.

Spacewalk requires a Java Virtual Machine version 1.6.0 or greater. EPEL (Extra Packages for Enterprise Linux) contains a version of OpenJDK that works with Spacewalk. Other dependencies can be installed from EPEL as well. To get packages from EPEL, just install this RPM:

# rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm

Setup of the PostgreSQL database
You should have PostgreSQL server running somewhere. Let's assume you will run the server on the same machine as Spacewalk itself:
# yum install -y 'postgresql-server > 8.4'

# chkconfig postgresql on

# /etc/init.d/postgresql initdb

# /etc/init.d/postgresql start

Create database, user, and plpgsql language there:

# su - postgres -c 'PGPASSWORD=spacepw; createdb spaceschema ; createlang plpgsql spaceschema ; yes $PGPASSWORD | createuser -P -sDR spaceuser'

Configure the user to use an md5 password to connect to the database. Put lines like the following into /var/lib/pgsql/data/pg_hba.conf. Avoid the common pitfall: make sure you put them *before* the existing catch-all lines.
# vim /var/lib/pgsql/data/pg_hba.conf
===================================
local spaceschema spaceuser md5
host  spaceschema spaceuser 127.0.0.1/8 md5
host  spaceschema spaceuser ::1/128 md5
local spaceschema postgres  ident


Then reload PostgreSQL:

# service postgresql reload

and test the connection:

# PGPASSWORD=spacepw psql -a -U spaceuser spaceschema
# PGPASSWORD=spacepw psql -h localhost -a -U spaceuser spaceschema

Tune up PostgreSQL's performance by running pgtune:
# yum install pgtune
# pgtune --type=web -c 600 -i /var/lib/pgsql/data/postgresql.conf >/tmp/pgtune.conf

 Review the changes by

# diff -u /var/lib/pgsql/data/postgresql.conf /tmp/pgtune.conf
# cp /var/lib/pgsql/data/postgresql.conf /var/lib/pgsql/data/postgresql.conf.bak
# cp /tmp/pgtune.conf /var/lib/pgsql/data/postgresql.conf
# service postgresql restart

Or, at least, increase the maximum number of connections to 600:

# echo max_connections = 600 >> /var/lib/pgsql/data/postgresql.conf
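A safer variant of that one-liner checks first, so re-running it does not add duplicate settings (a sketch; a /tmp copy stands in for /var/lib/pgsql/data/postgresql.conf):

```shell
# Append max_connections only if it is not already set, making the command
# safe to re-run. /tmp path is for illustration only.
CONF=/tmp/postgresql.conf
: > "$CONF"
grep -q '^max_connections' "$CONF" || echo 'max_connections = 600' >> "$CONF"
grep -q '^max_connections' "$CONF" || echo 'max_connections = 600' >> "$CONF"   # no-op on second run
grep -c '^max_connections' "$CONF"   # → 1
```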

Install the spacewalk-postgresql and configure it

When installing Spacewalk, install spacewalk-postgresql, which pulls in the correct backend and dependencies.

# wget http://pkgs.repoforge.org/python-simplejson/python-simplejson-2.0.5-1.el5.rf.i386.rpm

# rpm -ivh python-simplejson-2.0.5-1.el5.rf.i386.rpm

# yum install spacewalk-postgresql

Then, when you run spacewalk-setup, you'll be asked for connection information:

# spacewalk-setup --disconnected --external-db
** Database: Setting up database connection for PostgreSQL backend.
Hostname (leave empty for local)?
Database? spaceschema
Username? spaceuser
Password? spacepw
** Database: Populating database.

Managing Spacewalk
Spacewalk consists of several services, each of which has its own init.d script to stop/start/restart it. If you want to manage all Spacewalk services at once, use

/usr/sbin/spacewalk-service [stop|start|restart].

Once the Spacewalk installation is complete, we can access the Spacewalk admin control panel using the URL below.

http://rhn.sankar.com

This is the time to create the admin user ID and password.

Creating Channels

1. Create a base channel within Spacewalk.

Channels > Manage Software Channels > Create New Channel

2. Fill in all the required fields, such as Channel Name, Channel Label, and Channel Summary

3. Select the parent channel (this depends on your channel layout)

4. Select the channel architecture from the drop down list

5. Select the Checksum type

6. Write a description about your channel

7. Fill in the contact/support information, channel access control, and security (GPG) settings

8. Now click the “Create Channel” button.

The channel with the specified name has been created.

Adding packages to repository
There are two ways to add packages to the Spacewalk server: the spacewalk-repo-sync command or the rhnpush command.

Spacewalk-repo-sync

The spacewalk-repo-sync tool is used to sync packages from external or local yum repositories. All packages within the specified repository will be added to the channel. Any URL supported by yum is supported by this utility, including mirror lists. If no URL is supplied, the tool checks which repositories are associated with the specified channel and uses those.

Example:

spacewalk-repo-sync --channel=repo1 --url=http://example.com/yum-repo/
spacewalk-repo-sync --channel=repo2 --url=file:///var/share/localrepo/
spacewalk-repo-sync --channel=repom --url=http://example.com/mirrorlist.xml/

You can also use the web GUI; this is the easiest way to create repositories.

(Screenshot: adding an external yum repository)

    Go to Channels -> Manage Software Channels -> Manage Repositories -> Create New Repository

After creating the repository, you need to link it to one or more Software Channels.

    Go to: Channels -> Manage Software Channels -> Choose the channel to be linked -> Repositories -> Select the repositories to be linked to the channel -> Update Repositories.

Now you can sync the repository by clicking on the sync tab.

Click on sync now or schedule a sync.

Alternatively, you can start a sync of a yum repository defined in the web UI from the command line:

spacewalk-repo-sync --channel CHANNEL_LABEL


If, when doing a spacewalk-repo-sync, you get a "yum.Errors.NoMoreMirrorsRepoError" error, you need to install python-hashlib.

The logs are stored in /var/log/rhn/reposync/



RHNpush :


The RHN Satellite Package Pusher (rhnpush) pushes RPMs into locally managed channels on an RHN Satellite Server. rhnpush has three configuration files: /etc/sysconfig/rhn/rhnpushrc, ~/.rhnpushrc, and ./.rhnpushrc.

/etc/sysconfig/rhn/rhnpushrc holds the system-wide default settings for rhnpush.
~/.rhnpushrc holds user-specific settings that override the system-wide settings.
./.rhnpushrc holds directory-specific settings that override the user-specific and system-wide settings.

/etc/sysconfig/rhn/rhnpushrc must be present for rhnpush to function correctly. If it is missing, rhnpush will attempt to use a series of default settings stored internally as a replacement. ~/.rhnpushrc and ./.rhnpushrc are not required to be present, but will be used if they are present. They are not created automatically by rhnpush.

rhnpush uses a cache, stored at ~/.rhnpushcache, to temporarily hold the username and password for a user. If the cache is missing, it will be created by rhnpush.
If the cache is present and not too old, the username/password combination will be reused as a convenience for the user. How long the cache lasts is configurable in any of the three configuration files. If your username/password combination gets messed up, you have two options: wait until the cache expires (which takes minutes by default), or use the --new_cache option to force rhnpush to let you re-enter your username/password.

Using the --stdin and --dir options at the same time works as follows: rhnpush lets you type in RPM names, one per line. When you have finished entering RPM names, press Ctrl-D. rhnpush will then take the files from the directory you specified with --dir, combine them with the RPMs you listed on standard input, and push them to the channel given on the command line or in the configuration files.

Note : Make sure /var/satellite exists on the Spacewalk server and is owned/group-owned by apache before pushing.

[root@sathishhost ~]# chgrp apache /var/satellite/ -R

[root@sathishhost ~]# ls -l /var | grep satellite

drwxr-xr-x.  3 apache apache 4096 Mar 20 10:06 satellite

Example

rhnpush --server localhost -u <username> -p <password> --channel <channel-name> /usr/local/src/additional/*.rpm

rhnpush --server localhost -u sathish -p redhat --channel spacewalk-nightly-rhel-6-x86_64 /usr/local/src/additional/*.rpm

rhnpush -v --channel=<channel-name> --server=http://localhost/APP --dir=<package-dir>

rhnpush -v --channel=spacewalk-nightly-rhel-6-x86_64 --server=http://localhost/APP --dir=/usr/local/src/additional

Creating activation key

Activation keys are used to register a system with the Spacewalk server. A system registered with an activation key will inherit the characteristics defined by that key.

1. To create an activation key

Systems > Activation keys > Create new key

2. Enter the description of the activation key

3. If you have a specific key value, type it in the key textbox; otherwise leave it blank and Spacewalk will generate a key after you click the create key button.

4. Enter a numeric value to limit how many times the key can be used. If you want to use the key an unlimited number of times, leave the usage textbox blank.

5. Select the base channel from the drop down list box or choose “Spacewalk Default” to allow systems to register to the default Red Hat provided channel that corresponds to their installed version of Red Hat Enterprise Linux.

6. Enable the universal default check box and click “Create Activation Key”



Registering Clients
Install the client-tools in the client

For RHEL 5 

# rpm -Uvh http://spacewalk.redhat.com/yum/1.7/RHEL/5/i386/spacewalk-client-repo-1.7-5.el5.noarch.rpm

Now install the client packages

# yum install rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin

Register your CentOS or Red Hat Enterprise Linux system to Spacewalk using the activation key you created earlier

# rhnreg_ks --serverUrl=http://YourSpacewalk.example.org/XMLRPC --activationkey=<key-with-rhel-custom-channel>

rhnreg_ks is used to register clients to Spacewalk. If you need to re-register a client to your Spacewalk server, or change registration from one environment or server to another Spacewalk server, use the "--force" flag with rhnreg_ks; otherwise there is no need to use "--force".

Friday, November 30, 2012

How to manage Redhat Cluster in RHEL 5



Check cluster status


[root@ncs-db-1 ~]# clustat
Cluster Status for ncs_dbcluster @ Wed Sep  1 15:22:08 2009
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 ncs-db-n1                                   1 Online, Local, rgmanager
 ncs-db-n2                                   2 Online, rgmanager

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:clusvc                 ncs-db-n1                      started



Disable cluster service

After disabling, the resource group is shut down and will not be affected by a server reboot and/or failover.


[root@ncs-db-1 ~]# clusvcadm -d clusvc
Local machine disabling service:clusvc...Success


Enable cluster service


[root@ncs-db-1 ~]# clusvcadm -e clusvc
Local machine trying to enable service:clusvc...Success
service:clusvc is now running on ncs-db-n1


Freeze cluster resource group

After freezing the resource group, it is no longer monitored by the cluster manager, but its current state is not affected by this command.


[root@ncs-db-1 ~]# clusvcadm -Z clusvc
Local machine freezing service:clusvc...Success


Unfreeze cluster resource group

[root@ncs-db-1 ~]# clusvcadm -U clusvc
Local machine unfreezing service:clusvc...Success


Stop cluster resource group
The resource group is stopped; it may fail over to another node if the current node is fenced.


[root@ncs-db-1 ~]# clusvcadm -s clusvc
Local machine stopping service:clusvc...Success


Restart cluster resource group
The resource group is restarted in place on the current node.


[root@ncs-db-1 ~]# clusvcadm -R clusvc


Relocate resource to another member in the failover domain

[root@ncs-db-1 ~]# clusvcadm -r clusvc -m ncs-db-n2
Trying to relocate service:clusvc to ncs-db-n2...Success
service:clusvc is now running on ncs-db-n2


Other useful commands
[root@ncs-db-1 ~]# ccs_tool lsnode

Cluster name: ncs_dbcluster, config_version: 21

Nodename                        Votes Nodeid Fencetype
ncs-db-n1                          1    1    ncs-db-1-ilo1
ncs-db-n2                          1    2    ncs-db-2-ilo2




[root@ncs-db-1 ~]# cman_tool status
Version: 6.1.0
Config Version: 21
Cluster Name: ncs_cluster
Cluster Id: 27444
Cluster Member: Yes
Cluster Generation: 1796
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Quorum: 1
Active subsystems: 9
Flags: 2node Dirty
Ports Bound: 0 11 177
Node name: ncs-db-n1
Node ID: 1
Multicast addresses: xxx.xxx.xxx.xxx
Node addresses: xxx.xxx.xxx.xxx


Thursday, November 29, 2012

Local YUM repository on Red Hat Enterprise Linux.


1) Mount the dvd in /mnt/dvd path.

2) Install the createrepo RPM from the RHEL 5.2 DVD (which I have mounted at /mnt/dvd).

# rpm -ivh /mnt/dvd/Server/createrepo-0.4.11-3.el5.noarch.rpm

# cd /mnt

# createrepo .

3) Create the YUM repository pointing back at the newly created repository metadata.


# vim /etc/yum.repos.d/dvdrhel.repo
[MailRepo]
name=MailRepo
baseurl=file:///mnt/
enabled=1
gpgcheck=0

4) Update YUM to have it pick up the new repository.

# yum clean all

5) Test by listing the available packages.

# yum list


Wednesday, November 28, 2012

Patch Management in Linux using YUM


Pre-requisites:

    Installation of Red Hat Enterprise Linux 5.
    Premium/Standard license for RHEL 5 32-bit or 64-bit servers. (Note: a 32-bit RHEL YUM server lets you apply patches to 32-bit RHEL; for a 64-bit OS, you need a YUM server on 64-bit RHEL.)
    The createrepo, yum-downloadonly, and httpd packages installed on the server.
    Installation and configuration of the Apache web server.
    Copying the RPMs from the RHEL CD to the DocumentRoot path defined in the httpd configuration file.


Installation and Configuration of YUM Server:

Step 1: Creating a Repository using apache.

      Installation of apache web server.
# rpm -ivh httpd

      Modify httpd configuration file as mentioned below.
# vi /etc/httpd/conf/httpd.conf

ServerAdmin root@192.168.0.5
ServerName 192.168.0.5:80
DocumentRoot "/var/www/html"

      Create folders as mentioned below.
# cd /var/www/html
# mkdir Server
# mkdir VT
# mkdir images

      Copy all RHEL 5 RPMs from the CD to the Server, VT & images folders on the server.

Step 2: Create a Database of RPMs

      Run the createrepo command to create the package database.
# cd /var/www/html/Server
# createrepo .
# cd /var/www/html/VT
# createrepo .
# cd /var/www/html/images
# createrepo .


       Create package groups for installing groups of packages.
# createrepo -g repodata/comps-rhel5-server-core.xml /var/www/html/Server
# createrepo -g repodata/comps-rhel5-VT-core.xml /var/www/html/VT

Step 3: Register your YUM Server with Red hat Network.

    Ensure that the following entries have been added to the hosts file and that the URLs are accessible from the server.
# vi /etc/hosts
209.132.183.44  xmlrpc.rhn.redhat.com
209.132.183.43  satellite.rhn.redhat.com
209.132.183.42  rhn.redhat.com
    Run the rhn_register command, follow the on-screen instructions, and create a system profile. (Note: you should have a valid subscription key.)
    De-select location-aware updates on the RHN website for the registered machine. (Note: you should have a valid RHN login ID.)

Step 4: Download required updates & hot fixes from Red hat

    Run the command below to download RPMs into the configured repository.
# yum update -y --downloadonly --downloaddir=/var/www/html/Server/
 
          Re-run the command below after downloading any new packages into the repository.
# cd /var/www/html/Server
 # createrepo --update .

Note :
If a patch is released, first download it with the download command shown in Step 4, and only then install it on the YUM server using yum update. Otherwise, you won't be able to download the same patch again.

Configuration of YUM Client:

Step 1: Creation of Repo file for pointing the client to server for updates.
# vi /etc/yum.repos.d/Server.repo
[rhel-i386-server-5]
name=rhel-i386-server-5
baseurl=http://192.168.0.5/Server
enabled=1
gpgcheck=0


[rhel-i386-server-vt-5]
name=rhel-i386-server-vt-5
baseurl=http://192.168.0.5/VT
enabled=1
gpgcheck=0

Step 2: Configure Mail alerts for pending patches on client.
Note: You have to enable SMTP relay on the server.
# yum check-update | mailx -s "PATCHES PENDING on $HOSTNAME" abc@tcs.com

Step 3: Run the yum update command on a quarterly basis to keep your system up to date with the latest patches and hot fixes.
# yum update

Note: Before updating the system, have a proper backup for the same.

Rollback Package updates/Installation on YUM Server and Client:

Step 1: To configure yum to save rollback information, add the line tsflags=repackage to /etc/yum.conf.

Step 2: To configure command-line rpm to do the same thing, add the line %_repackage_all_erasures 1 to /etc/rpm/macros. If /etc/rpm/macros does not exist, just create it.
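The two additions look like this (config fragments):

```
## in /etc/yum.conf:
tsflags=repackage

## in /etc/rpm/macros:
%_repackage_all_erasures 1
```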

Step 3: You can now install, erase, and update packages with yum and/or rpm, and they will save rollback information.

Step 4: When you want to roll back, use rpm to do so.
You do this by specifying the --rollback switch and a date/time, like the examples below:
rpm -Uhv --rollback '19:00'
rpm -Uhv --rollback '8 hours ago'
rpm -Uhv --rollback 'december 31'
rpm -Uhv --rollback 'yesterday'


Friday, November 16, 2012

Apache restrict access based on IP address to selected directories


The Apache web server allows you to control access based on various conditions. For example, suppose you want to restrict access to the URL http://payroll.nixcraft.in/ (mapped to the /var/www/sub/payroll directory) to the 192.168.1.0/24 network (within the intranet).

Apache provides access control based on client host name, IP address, or other characteristics of the client request, using the mod_access module.

Open your httpd.conf file:

# vi /etc/httpd/conf/httpd.conf

Locate the directory section (for example /var/www/sub/payroll) and set it as follows:
<Directory /var/www/sub/payroll/>
Order allow,deny
Allow from 192.168.1.0/24
Allow from 127
</Directory>

    Order allow,deny: The Order directive controls the default access state and the order in which Allow and Deny directives are evaluated. With allow,deny, the Allow directives are evaluated before the Deny directives, and access is denied by default. Any client which does not match an Allow directive, or does match a Deny directive, will be denied access to the server.

    Allow from 192.168.1.0/24: The Allow directive controls which hosts can access an area of the server (here /var/www/sub/payroll/). Access is only allowed from the 192.168.1.0/24 network and localhost (127.0.0.1).
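On Apache 2.4 and later, the native way to express the same restriction uses mod_authz_core's Require directive instead of Order/Allow/Deny (equivalent sketch):

```apache
<Directory /var/www/sub/payroll/>
    Require ip 192.168.1.0/24 127.0.0.1
</Directory>
```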

Save the file and restart the Apache web server:

# /etc/init.d/httpd restart


Thursday, November 15, 2012

What is the difference between a daemon and a server process?


A 'daemon' is a software process that runs continuously in the background and provides a service to clients upon request.

For example, named is a daemon: when requested, it provides DNS service. Other examples are:
  • xinetd (it is a super-daemon, it is responsible for invoking other Internet servers when they are needed)
  • inetd (same as xinetd, but with limited configuration options)
  • sendmail/postfix (to send/route email)
  • Apache/httpd (web server)

Wednesday, November 7, 2012

What is the difference between varchar and char?



CHAR is a fixed-length data type. For example, if you declare a variable/column as CHAR(10), it always occupies 10 bytes, regardless of whether you store 1 character or 10 characters in it. And since we declared it as CHAR(10), it can hold at most 10 characters.

VARCHAR, on the other hand, is a variable-length data type. If you declare a variable/column as VARCHAR(10), it takes roughly the number of bytes equal to the number of characters actually stored (plus a small length prefix in most databases): storing one character takes one byte of data, storing 10 characters takes 10 bytes. As with CHAR(10), a VARCHAR(10) column can hold at most 10 characters.
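As a loose shell illustration of the same fixed- vs variable-width idea (this is printf padding, not a database, but it shows what "always 10 wide" means):

```shell
# CHAR-like: pad to a fixed width of 10, no matter the input length
printf '[%-10s]\n' "abc"
# VARCHAR-like: keep only what was given
printf '[%s]\n' "abc"
```

The first prints [abc       ] (always 10 characters inside the brackets); the second prints just [abc].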

Monday, November 5, 2012

How to Split and Merge Multiple Linux Files


Suppose you have a single 50MB file that your friend needs urgently, and email exchange is the ONLY means of transferring it between you. The problem is that both your ISPs limit email attachments to 5MB.

How to transfer 50MB large file by email?
How to split and merge large chunks of file by email?
How to send big file by email?
How to split big file and rejoin them later from the other end?

Splitting and Merging Multiple Linux Files

The Linux split command chops a single file into multiple chunks, regardless of file type: it handles binary files, compressed and archived files, text files, and anything else. split is part of the coreutils package.

Here’s how to accomplish our objective of splitting large files into pieces and merging them back.

As an example, I have prepared a binary rpm file named testfile.rpm with an 18MB file size. Our objective is to split testfile.rpm into multiple pieces and merge them back into the same binary file on the other end.

# ls -la testfile.rpm
-rw-r--r-- 1 root root 18835725 2007-11-23 11:07 testfile.rpm

Splitting the file into multiple chunks with 4MB as the maximum file size:

# split -b4000000 testfile.rpm
# ls -la
-rw-r--r-- 1 root root 18835725 2007-11-23 11:07 testfile.rpm
-rw-r--r-- 1 root root 4000000 2007-11-23 12:27 xaa
-rw-r--r-- 1 root root 4000000 2007-11-23 12:27 xab
-rw-r--r-- 1 root root 4000000 2007-11-23 12:27 xac
-rw-r--r-- 1 root root 4000000 2007-11-23 12:27 xad
-rw-r--r-- 1 root root 2835725 2007-11-23 12:27 xae

Notice that testfile.rpm was split into 5 chunks: 4 files of 4MB and 1 file of a little under 3MB. The chunks are named alphabetically: xaa, xab, xac, xad, xae, and so on for larger files.

Since each filesize is less than the 5MB email attachment limit, you can now attach these multiple chunks of file into your email message and send them successfully into the receiving end, bypassing email attachment restrictions of 5MB.

Now, we need to merge them back from the other receiving end. Here’s how to do it.

How to merge the split files back into a single big file?

Simply issue these simple commands:

# cat xaa xab xac xad xae > newfile.rpm
# ls -la
-rwxr-xr-x 1 root root 18835725 2007-11-23 12:35 newfile.rpm
-rwxr-xr-x 1 root root 18835725 2007-11-23 11:07 testfile.rpm
-rw-r--r-- 1 root root 4000000 2007-11-23 12:27 xaa
-rw-r--r-- 1 root root 4000000 2007-11-23 12:27 xab
-rw-r--r-- 1 root root 4000000 2007-11-23 12:27 xac
-rw-r--r-- 1 root root 4000000 2007-11-23 12:27 xad
-rw-r--r-- 1 root root 2835725 2007-11-23 12:27 xae

The receiving end now has the actual file with simple file splitting and file merging operations.
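Before deleting anything, it is worth verifying the round trip with cmp (or md5sum). Here is a small self-contained sketch using throwaway file names:

```shell
# Create a 1 MB test file, split it into 300 KB chunks, merge, and verify.
head -c 1000000 /dev/urandom > bigfile.bin
split -b 300000 bigfile.bin part_          # produces part_aa .. part_ad
cat part_aa part_ab part_ac part_ad > rebuilt.bin
cmp -s bigfile.bin rebuilt.bin && echo "identical"
rm -f bigfile.bin rebuilt.bin part_a?
```

With GNU split you can also write the size with a suffix (-b 300K, -b 4M), and `cat part_* > rebuilt.bin` merges every chunk, since the shell expands split's default suffixes in the right order.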

Enjoy

Linux Troubleshooting High Load

What do you do when you get an alert that your system load is high? Tracking down the cause of high load just takes some time, some experience and a few Linux tools.


 This column is the first in a series of columns dedicated to one of my favorite subjects: troubleshooting. I'm a systems administrator during the day, and although I enjoy many aspects of my job, it's hard to beat the adrenaline rush of tracking down a complex server problem when downtime is being measured in dollars. Although it's true that there are about as many different reasons for downtime as there are Linux text editors, and just as many approaches to troubleshooting, over the years, I've found I perform the same sorts of steps to isolate a problem. Because my column is generally aimed more at tips and tricks and less on philosophy and design, I'm not going to talk much about overall approaches to problem solving. Instead, in this series I describe some general classes of problems you might find on a Linux system, and then I discuss how to use common tools, most of which probably are already on your system, to isolate and resolve each class of problem.

For this first column, I start with one of the most common problems you will run into on a Linux system. No, it's not getting printing to work. I'm talking about a sluggish server that might have high load. Before I explain how to diagnose and fix high load though, let's take a step back and discuss what load means on a Linux machine and how to know when it's high.
Uptime and Load

When administrators mention high load, generally they are talking about the load average. When I diagnose why a server is slow, the first command I run when I log in to the system is uptime:

$ uptime
 18:30:35 up 365 days, 5:29, 2 users, load average: 1.37, 10.15, 8.10

As you can see, it's my server's uptime birthday today. You also can see that my load average is 1.37, 10.15, 8.10. These numbers represent my average system load during the last 1, 5 and 15 minutes, respectively. Technically speaking, the load average represents the average number of processes that have to wait for CPU time during the last 1, 5 or 15 minutes. For instance, if I have a current load of 0, the system is completely idle. If I have a load of 1, the CPU is busy enough that one process is having to wait for CPU time. If I do have a load of 1 and then spawn another process that normally would tie up a CPU, my load should go to 2. With a load average, the system will give you a good idea of how consistently busy it has been over the past 1, 5 and 15 minutes.

Another important thing to keep in mind when you look at a load average is that it isn't normalized according to the number of CPUs on your system. Generally speaking, a consistent load of 1 means one CPU on the system is tied up. In simplified terms, this means that a single-CPU system with a load of 1 is roughly as busy as a four-CPU system with a load of 4. So in my above example, let's assume that I have a single-CPU system. If I were to log in and see the above load average, I'd probably assume that the server had pretty high load (8.10) during the last 15 minutes that spiked around 5 minutes ago (10.15), but recently, at least during the last 1 minute, the load has dropped significantly. If I saw this, I might even assume that the real cause of the load has subsided. On the other hand, if the load averages were 20.68, 5.01, 1.03, I would conclude that the high load had likely started in the last 5 minutes and was getting worse.
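The normalization is just load divided by CPU count. A quick sketch, borrowing the 15-minute value (8.10) from the example and assuming a four-CPU box (on a live system you would read /proc/loadavg and `nproc` instead of hard-coding values):

```shell
load1=8.10   # 15-minute load average from the example above
cpus=4       # assumed CPU count for this sketch
awk -v l="$load1" -v c="$cpus" 'BEGIN { printf "per-CPU load: %.2f\n", l/c }'
```

A result around 1.0 per CPU means the machine is roughly saturated but not backlogged.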
How High Is High?

After you understand what load average means, the next logical question is “What load average is good and what is bad?” The answer to that is “It depends.” You see, a lot of different things can cause load to be high, each of which affects performance differently. One server might have a load of 50 and still be pretty responsive, while another server might have a load of 10 and take forever to log in to. I've had servers with load averages in the hundreds that were certainly slow, but didn't crash, and I had one server that consistently had a load of 50 that was still pretty responsive and stayed up for years.

What really matters when you troubleshoot a system with high load is why the load is high. When you start to diagnose high load, you find that most load seems to fall into three categories: CPU-bound load, load caused by out of memory issues and I/O-bound load. I explain each of these categories in detail below and how to use tools like top and iostat to isolate the root cause.
top

If the first tool I use when I log in to a sluggish system is uptime, the second tool I use is top. The great thing about top is that it's available for all major Linux systems, and it provides a lot of useful information in a single screen. top is a quite complex tool with many options that could warrant its own article. For this column, I stick to how to interpret its output to diagnose high load.

To use top, simply type top on the command line. By default, top will run in interactive mode and update its output every few seconds. Listing 1 shows sample top output from a terminal.
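If you want to capture this output non-interactively (for a log file or a script), procps top supports a batch mode that prints a snapshot and exits:

```shell
# One snapshot of top in batch mode; head keeps just the summary lines.
top -b -n 1 | head -n 5
```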


Listing 1. Sample top Output

top - 14:08:25 up 38 days, 8:02, 1 user, load average: 1.70, 1.77, 1.68
Tasks: 107 total,   3 running, 104 sleeping,   0 stopped,   0 zombie
Cpu(s): 11.4%us, 29.6%sy, 0.0%ni, 58.3%id, 0.7%wa, 0.0%hi, 0.0%si, 0.0%st
Mem:   1024176k total,   997408k used,    26768k free,    85520k buffers
Swap:  1004052k total,     4360k used,   999692k free,   286040k cached

  PID USER    PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 9463 mysql   16   0  686m 111m 3328 S   53  5.5 569:17.64 mysqld
18749 nagios  16   0  140m 134m 1868 S   12  6.6   1345:01 nagios2db_status
24636 nagios  17   0 34660  10m  712 S    8  0.5   1195:15 nagios
22442 nagios  24   0  6048 2024 1452 S    8  0.1   0:00.04 check_time.pl

As you can see, there's a lot of information in only a few lines. The first line mirrors the information you would get from the uptime command and will update every few seconds with the latest load averages. In this case, you can see my system is busy, but not what I would call heavily loaded. All the same, this output breaks down well into our different load categories. When I troubleshoot a sluggish system, I generally will rule out CPU-bound load, then RAM issues, then finally I/O issues in that order, so let's start with CPU-bound load.
CPU-Bound Load

CPU-bound load is load caused when you have too many CPU-intensive processes running at once. Because each process needs CPU resources, they all must wait their turn. To check whether load is CPU-bound, check the CPU line in the top output:

Cpu(s): 11.4%us, 29.6%sy, 0.0%ni, 58.3%id, 0.7%wa, 0.0%hi, 0.0%si, 0.0%st

Each of these percentages is a share of the CPU time tied up doing a particular task. Again, you could spend an entire column on all of the output from top, so here are a few of these values and how to read them:

    us: user CPU time. More often than not, when you have CPU-bound load, it's due to a process run by a user on the system, such as Apache, MySQL or maybe a shell script. If this percentage is high, a user process such as those is a likely cause of the load.

    sy: system CPU time. The system CPU time is the percentage of the CPU tied up by kernel and other system processes. CPU-bound load should manifest either as a high percentage of user or high system CPU time.

    id: CPU idle time. This is the percentage of the time that the CPU spends idle. The higher the number here the better! In fact, if you see really high CPU idle time, it's a good indication that any high load is not CPU-bound.

    wa: I/O wait. The I/O wait value tells the percentage of time the CPU is spending waiting on I/O (typically disk I/O). If you have high load and this value is high, it's likely the load is not CPU-bound but is due to either RAM issues or high disk I/O.

Track Down CPU-Bound Load

If you do see a high percentage in the user or system columns, there's a good chance your load is CPU-bound. To track down the root cause, skip down a few lines to where top displays a list of current processes running on the system. By default, top will sort these based on the percentage of CPU used with the processes using the most on top (Listing 2).

Listing 2. Current Processes Example

  PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 9463 mysql  16  0  686m 111m 3328 S   53  5.5 569:17.64 mysqld
18749 nagios 16  0  140m 134m 1868 S   12  6.6   1345:01 nagios2db_status
24636 nagios 17  0 34660  10m  712 S    8  0.5   1195:15 nagios
22442 nagios 24  0  6048 2024 1452 S    8  0.1   0:00.04 check_time.pl

The %CPU column tells you just how much CPU each process is taking up. In this case, you can see that MySQL is taking up 53% of my CPU. As you look at this output during CPU-bound load, you probably will see one of two things: either you will have a single process tying up 99% of your CPU, or you will see a number of smaller processes all fighting for a percentage of CPU time. In either case, it's relatively simple to see the processes that are causing the problem. There's one final note I want to add on CPU-bound load: I've seen systems get incredibly high load simply because a multithreaded program spawned a huge number of threads on a system without many CPUs. If you spawn 20 threads on a single-CPU system, you might see a high load average, even though there are no particular processes that seem to tie up CPU time.

Out of RAM Issues

The next cause for high load is a system that has run out of available RAM and has started to go into swap. Because swap space is usually on a hard drive that is much slower than RAM, when you use up available RAM and go into swap, each process slows down dramatically as the disk gets used. Usually this causes a downward spiral as processes that have been swapped run slower, take longer to respond and cause more processes to stack up until the system either runs out of RAM or slows down to an absolute crawl. What's tricky about swap issues is that because they hit the disk so hard, it's easy to misdiagnose them as I/O-bound load. After all, if your disk is being used as RAM, any processes that actually want to access files on the disk are going to have to wait in line. So, if I see high I/O wait in the CPU row in top, I check RAM next and rule it out before I troubleshoot any other I/O issues.

When I want to diagnose out of memory issues, the first place I look is the next couple of lines in the top output:

Mem: 1024176k total, 997408k used, 26768k free, 85520k buffers
Swap: 1004052k total, 4360k used, 999692k free, 286040k cached

These lines tell you the total amount of RAM and swap along with how much is used and free; however, look carefully, as these numbers can be misleading. I've seen many new and even experienced administrators who would look at the above output and conclude the system was almost out of RAM because there was only 26768k free. Although that does show how much RAM is currently unused, it doesn't tell the full story.
The Linux File Cache

When you access a file and the Linux kernel loads it into RAM, the kernel doesn't necessarily unload the file when you no longer need it. If there is enough free RAM available, the kernel tries to cache as many files as it can into RAM. That way, if you access the file a second time, the kernel can retrieve it from RAM instead of the disk and give much better performance. As a system stays running, you will find the free RAM actually will appear to get rather small. If a process needs more RAM though, the kernel simply uses some of its file cache. In fact, I see a lot of the overclocking crowd who want to improve performance and create a ramdisk to store their files. What they don't realize is that more often than not, if they just let the kernel do the work for them, they'd probably see much better results and make more efficient use of their RAM.

To get a more accurate amount of free RAM, you need to combine the values from the free column with the cached column. In my example, I would have 26768k + 286040k, or over 300Mb of free RAM. In this case, I could safely assume my system was not experiencing an out of RAM issue. Of course, even a system that has very little free RAM may not have gone into swap. That's why you also must check the Swap: line and see if a high proportion of your swap is being used.
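Redoing that arithmetic on the example Mem:/Swap: lines above (all values in kB):

```shell
free_kb=26768; cached_kb=286040      # from the Mem: line
swap_used=4360; swap_total=1004052   # from the Swap: line
echo "$(( free_kb + cached_kb )) kB effectively free"
awk -v u="$swap_used" -v t="$swap_total" 'BEGIN { printf "%.2f%% of swap in use\n", 100*u/t }'
```

On recent kernels, free and /proc/meminfo also expose a MemAvailable estimate that does this accounting for you.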
Track Down High RAM Usage

If you do find you are low on free RAM, go back to the same process output from top, only this time, look in the %MEM column. By default, top will sort by the %CPU column, so simply type M and it will re-sort to show you which processes are using the highest percentage of RAM. In the output in Listing 3, I sorted the same processes by RAM, and you can see that the nagios2db_status process is using the most at 6.6%.

Listing 3. Processes Sorted by RAM

  PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
18749 nagios 16  0  140m 134m 1868 S   12  6.6   1345:01 nagios2db_status
 9463 mysql  16  0  686m 111m 3328 S   53  5.5 569:17.64 mysqld
24636 nagios 17  0 34660  10m  712 S    8  0.5   1195:15 nagios
22442 nagios 24  0  6048 2024 1452 S    8  0.1   0:00.04 check_time.pl

I/O-Bound Load

I/O-bound load can be tricky to track down sometimes. As I mentioned earlier, if your system is swapping, it can make the load appear to be I/O-bound. Once you rule out swapping though, if you do have a high I/O wait, the next step is to attempt to track down which disk and partition is getting the bulk of the I/O traffic. To do this, you need a tool like iostat.

The iostat tool, like top, is a complicated and full-featured tool that could fill up its own article. Unlike top, although it should be available for your distribution, it may not be installed on your system by default, so you need to track down which package provides it. Under Red Hat and Debian-based systems, you can get it in the sysstat package. Once it's installed, simply run iostat with no arguments to get a good overall view of your disk I/O statistics:

Linux 2.6.24-19-server (hostname)     01/31/2009

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.73    0.07    2.03    0.53    0.00   91.64

Device:    tps  Blk_read/s  Blk_wrtn/s   Blk_read   Blk_wrtn
sda       9.82       417.96        27.53   30227262    1990625
sda1      6.55       219.10         7.12   15845129     515216
sda2      0.04         0.74         3.31      53506     239328
sda3      3.24       198.12        17.09   14328323    1236081

Like with top, iostat gives you the CPU percentage output. Below that, it provides a breakdown of each drive and partition on your system and statistics for each:

    tps: transactions per second.

    Blk_read/s: blocks read per second.

    Blk_wrtn/s: blocks written per second.

    Blk_read: total blocks read.

    Blk_wrtn: total blocks written.

By looking at these different values and comparing them to each other, ideally you will be able to find out first, which partition (or partitions) is getting the bulk of the I/O traffic, and second, whether the majority of that traffic is reads (Blk_read/s) or writes (Blk_wrtn/s). As I said, tracking down the cause of I/O issues can be tricky, but hopefully, those values will help you isolate what processes might be causing the load.

For instance, if you have an I/O-bound load and you suspect that your remote backup job might be the culprit, compare the read and write statistics. Because you know that a remote backup job is primarily going to read from your disk, if you see that the majority of the disk I/O is writes, you reasonably can assume it's not from the backup job. If, on the other hand, you do see a heavy amount of read I/O on a particular partition, you might run the lsof command and grep for that backup process and see whether it does in fact have some open file handles on that partition.

As you can see, tracking down I/O issues with iostat is not straightforward. Even with no arguments, it can take some time and experience to make sense of the output. That said, iostat does have a number of arguments you can use to get more information about different types of I/O, including modes to find details about NFS shares. Check out the man page for iostat if you want to know more.

Up until recently, tools like iostat were about the limit systems administrators had in their toolboxes for tracking down I/O issues, but due to recent developments in the kernel, it has become easier to find the causes of I/O on a per-process level. If you have a relatively new system, check out the iotop tool. Like with iostat, it may not be installed by default, but as the name implies, it essentially acts like top, only for disk I/O. In Listing 4, you can see that an rsync process on this machine is using the most I/O (in this case, read I/O).

Listing 4. Example iotop Tool Output

Total DISK READ: 189.52 K/s | Total DISK WRITE: 0.00 B/s
  TID  PRIO  USER DISK READ DISK WRITE  SWAPIN     IO>    COMMAND         
 8169 be/4 root  189.52 K/s   0.00 B/s  0.00 %  0.00 % rsync --server --se
 4243 be/4 kyle    0.00 B/s   3.79 K/s  0.00 %  0.00 % cli /usr/lib/gnome-
 4244 be/4 kyle    0.00 B/s   3.79 K/s  0.00 %  0.00 % cli /usr/lib/gnome-
    1 be/4 root    0.00 B/s   0.00 B/s  0.00 %  0.00 % init

Once You Track Down the Culprit

How you deal with these load-causing processes is up to you and depends on a lot of factors. In some cases, you might have a script that has gone out of control and is something you can easily kill. In other situations, such as in the case of a database process, it might not be safe simply to kill the process, because it could leave corrupted data behind. Plus, it could just be that your service is running out of capacity, and the real solution is either to add more resources to your current server or add more servers to share the load. It might even be load from a one-time job that is running on the machine and shouldn't impact load in the future, so you just can let the process complete. Because so many different things can cause processes to tie up server resources, it's hard to list them all here, but hopefully, being able to identify the causes of your high load will put you on the right track the next time you get an alert that a machine is slow.

 

Thursday, November 1, 2012

Check Linux Server connected switch port no


The command below captures a single CDP (Cisco Discovery Protocol) frame, which contains the switch name and the port the server is plugged into. It works when the server is connected to a Cisco switch with CDP enabled.

tcpdump -nn -v -i eth1 -s 1500 -c 1 'ether[20:2] == 0x2000'


Tuesday, October 30, 2012

Sendmail Server Interview Questions And Answers

Q: - How to start sendmail server ?
service sendmail restart

Q: - On which ports sendmail and senmail with SSL works ?
By default, Sendmail uses TCP port 25 for non-encrypted transfers. If the Sendmail server is configured to use SSL for encrypting email sent and received, it uses port 465 (SMTPS).

Q: - Explain use of "trusted-users" file ?
List of users that can send email as other users without a warning including system users such as apache for the Apache HTTP Server.

Q: - Explain the use of "local-host-names" file ?
If the email server should be known by different hostnames, list the hostnames in this file, one per line. Any email sent to addresses at these hostnames is treated as local mail. The FEATURE(`use_cw_file') option must be enabled in the sendmail.mc file for this file to be referenced.

Q: - explain the use of /etc/aliases file ?
/etc/aliases, can be used to redirect email from one user to another. By default, it includes redirects for system accounts to the root user. It can then be used to redirect all email for the root user to the user account for the system administrator.

Q: - Can we use SSL Encryption with Sendmail ?
Yes, Sendmail can be configured to encrypt email sent and received using SSL (secure sockets layer).

Q: - What is Sendmail ?
Sendmail is an MTA, meaning it accepts email messages sent to it using the SMTP protocol and transports them to another MTA email server until the messages reach their destinations. It also accepts email for the local network and delivers it to local mail spools, one for each user.

Q: - What is the role of MUA ?
An MUA (Mail User Agent) with access to the mailbox file, directly or through a network file system, can read messages from the disk and display them for the user. This is generally a console or webmail application running on the server.

Q: - Which are the important configuration files for Sendmail server ?
The /etc/mail/ directory contains all the Sendmail configuration files, with sendmail.cf and submit.cf being the main configuration files. The sendmail.cf file includes options for the mail transmission agent and accepts SMTP connections for sending email. The submit.cf file configures the mail submission program.

Q: - How to configure sendmail to accept mail for local delivery that is addressed to other hosts?
Create a /etc/mail/local-host-names file. Put into that file the hostnames and domain names for which sendmail should accept mail for local delivery. Enter the names with one hostname or domain name per line. And also make sure that Sendmail configuration file should contain "use_cw_file" option.
dnl Load class $=w with other names for the local host
FEATURE(`use_cw_file')

Q: - When an organization stores aliases on an LDAP server, how you will configure sendmail to read aliases from the LDAP server?
Use the "sendmail -bt -d0" command to check the sendmail compiler options. If sendmail was not compiled with LDAP support, recompile and reinstall sendmail.
Then add an ALIAS_FILE define, containing the string `ldap:', to the sendmail configuration:
# Set the LDAP cluster value
define(`confLDAP_CLUSTER', `wrotethebook.com')
# Tell sendmail that aliases are available via LDAP
define(`ALIAS_FILE', `ldap:')

Q: - How to forward emails of a local user to external address?
Add an alias to the aliases file for each user whose mail must be forwarded to another system. The recipient field of the alias entry must be a full email address that includes the host part. After adding the desired aliases, rebuild the aliases database file with the newaliases command.
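For instance, a minimal /etc/aliases entry might look like this (the user name and destination address are illustrative):

```
# /etc/aliases
bob:    bob@example.com
```

After editing the file, run newaliases to rebuild the aliases database.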

Q: - You have been asked to create a sendmail configuration that sends all local mail to a mail hub, while directly delivering mail addressed to external systems.
Create a sendmail configuration containing the MAIL_HUB define to identify the mail relay host for local mail. Use the LOCAL_USER command to exempt the root user's mail from relaying.
dnl Define a relay server for local mail
define(`MAIL_HUB', `smtp.test.com')
dnl Users whose mail is not passed to the mail hub
LOCAL_USER(root)
Rebuild and reinstall sendmail.cf, and then restart sendmail.

Q: - How to  configure multiple mail queues?
mkdir /var/spool/mqueue/queue.1
mkdir /var/spool/mqueue/queue.2
mkdir /var/spool/mqueue/queue.3
chmod 700 /var/spool/mqueue/queue.1
chmod 700 /var/spool/mqueue/queue.2
chmod 700 /var/spool/mqueue/queue.3
Add the QUEUE_DIR define to the sendmail configuration to use the new queue directories.
dnl Declare the queue directory path
define(`QUEUE_DIR', `/var/spool/mqueue/queue.*')

Q: - How to  disable certain SMTP commands?
Add the confPRIVACY_FLAGS define to the sendmail configuration to set Privacy Options that disable unwanted, optional SMTP commands. Here we will disables the EXPN, VRFY, VERB, and ETRN commands.
dnl Disable EXPN, VRFY, VERB and ETRN
define(`confPRIVACY_FLAGS', `noexpn,novrfy,noverb,noetrn')
Rebuild and reinstall sendmail.cf, and then restart sendmail.

Q: - In which Sendmail configuration file we have to make changes?
we will make the changes only in the sendmail.mc file, and the changes will be moved into the sendmail.cf file for us.

Q: - When Sendmail dispatches your email, it places the server's hostname after your username, which becomes the "from" address in the email (e.g. user@mail.test.com). But we want to use the domain name and not the hostname?
define(`confDOMAIN_NAME', `test.com')dnl
FEATURE(`relay_entire_domain')dnl

Q: - What does /etc/mail/access file contains?
The access database ("/etc/mail/access") is a list of IP addresses and domain names from which connections are allowed.
FEATURE(`access_db',`hash -T<TMPF> -o /etc/mail/access.db')dnl
and cat  /etc/mail/access
localhost.localdomain      RELAY
localhost                              RELAY
127.0.0.1                             RELAY
192.168.0                            RELAY
test.com                              RELAY

Q: - How to restrict sendmail to sending a big file?
define(`confMAX_MESSAGE_SIZE',`52428800')dnl
or If you are using a PHP based webmail application like SquirrelMail, you can adjust the max file size in php.ini file.
vi php.ini
post_max_size = 50M
upload_max_filesize = 50M
memory_limit = 64M

Q: - How to set 25 recipients for each email?
define(`confMAX_RCPTS_PER_MESSAGE',`25')dnl

Q: - Which antivirus have you integrated with sendmail ?
ClamAV

Q: - What is Clamav-Milter?
Clamav-Milter is a tool to integrate sendmail and clamAV antivirus.

Q: - Which configuration files are required to integrate sendmail and ClamAV antivirus?
milter.conf and clamav-milter

Q: - How to test sendmail integration with ClamAV?
grep Milter /var/log/maillog
You should see messages like the following:
sendmail: Milter add: header: X-Virus-Scanned: ClamAV version 0.88.2, clamav-milter version 0.88.2 on mail.test.com
sendmail: Milter add: header: X-Virus-Status: Clean 

Q: - Which tool you have used to block spamming?
SpamAssassin

Q: - What does "/etc/mail/" directory contains?
The /etc/mail/ directory contains all the Sendmail configuration files, with sendmail.cf and submit.cf being the main configuration files.

Q: - Explain the use of /etc/mail/relay-domains file?
The /etc/mail/relay-domains file determines the domains from which sendmail will relay mail. Its contents should be limited to those domains that can be trusted not to originate spam.

Q: - What is the name of spamassassin configuration file?
/etc/mail/spamassassin/local.cf

Q: - How to check mail Queue of sendmail?
/usr/lib/sendmail -bp
or
mailq

Q: - How to use  m4 macro processor to generate a new sendmail.cf?
m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf

DNS Server Interview Questions And Answers



Q: - which are the important configuration files for DNS server ?
BIND uses /etc/named.conf as its main configuration file, the /etc/rndc.conf file as the configuration file for name server control utility rndc, and the /var/named/ directory for zone files and the like.

Q: - What is BIND ?
BIND stands for Berkeley Internet Name Domain which is the most commonly used Domain Name System (DNS) server on the Internet.

Q: - On which version of BIND have you worked ?
BIND 9

Q: - What is the role of DNS ?
A DNS server, or name server, is used to resolve an IP address to a hostname or vice versa.

Q: - On which port DNS server works ?
DNS servers use port 53 by default. Incoming and outgoing packets should be allowed on port 53. Also allow connections on port 921 if you configure a lightweight resolver server. The DNS control utility, rndc, connects to the DNS server with TCP port 953 by default. If you are running rndc on the name server, connections on this TCP port from localhost should be allowed. If you are running rndc on additional systems, allow connections to port 953 (or whatever port you have chosen to configure) from these additional systems.

Q: - What is round robin DNS?
Round robin DNS is usually used for balancing the load of geographically distributed Web servers. For example, a company has one domain name and three identical home pages residing on three servers with three different IP addresses. When one user accesses the home page it will be sent to the first IP address. The second user who accesses the home page will be sent to the next IP address, and the third user will be sent to the third IP address. In each case, once the IP address is given out, it goes to the end of the list. The fourth user, therefore, will be sent to the first IP address, and so forth.

Q: - What is Name Server?
A name server keeps information for the translation of domain names to IP addresses   and IP addresses to domain names. The name server is a program that performs the translation at the request of a resolver or another name server.

Q: - What is Primary name server or primary master server?
Primary name server/primary master is the main data source for the zone. It is the authoritative server for the zone. This server acquires data about its zone from databases saved on a local disk. The primary server must be published as an authoritative name server for the domain in the SOA resource record, while the primary master server does not need to be published.

Q: - What is Secondary name server/slave name server?
Secondary name server/slave name server acquires data about the zone by copying the data from the primary name server (respectively from the master server) at regular time intervals. It makes no sense to edit these databases on the secondary name servers, although they are saved on the local server disk because they will be rewritten during further copying.
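As a rough illustration, a primary and a secondary server for the same zone might be declared in named.conf like the sketch below (zone name, file names and IP addresses are hypothetical):

```
// On the primary (master) name server
zone "test.com" IN {
        type master;
        file "test.com.zone";
        allow-transfer { 192.1.1.3; };   // IP of the secondary server
};

// On the secondary (slave) name server
zone "test.com" IN {
        type slave;
        file "slaves/test.com.zone";
        masters { 192.1.1.2; };          // IP of the primary server
};
```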

Q: - What is a Root name server?
Root name server is an authoritative name server for the root domain (for the dot). Each root name server is a primary server, which differentiates it from other name servers.

Q: - What is a Stealth name server?
Stealth name server is a secret server. This type of name server is not published anywhere. It is only known to the servers that have its IP address statically listed in their configuration. It is an authoritative server. It acquires the data for the zone with the help of a zone transfer. It can be the main server for the zone. Stealth servers can be used as a local backup if the local servers are unavailable.

Q: - What do you mean by "Resource Records"?
Information on domain names and their IP addresses, as well as all the other information distributed via DNS is stored in the memory of name servers as Resource Records (RR).

Q: - Explain "TTL"?
Time to live. A 32-bit number indicating the time the particular RR can be kept valid in a server cache. When this time expires, the record has to be considered invalid. The value 0 keeps nonauthoritative servers from saving the RR to their cache memory.

Q: - Name some common types of DNS records?
A, NS, CNAME, SOA, PTR, and MX.

Q: - Explain "SOA Record"?
The Start of Authority (SOA) record determines the name server that is an authoritative source of information for the particular domain. There is always only one SOA record in the file, and it is placed at the beginning of the file of authoritative resource records.
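For example, the beginning of a zone file with its SOA record might look like the sketch below (the names, serial number and timer values are illustrative only):

```
$TTL 86400
test.com.  IN  SOA  ns1.test.com. admin.test.com. (
                2012121301 ; serial
                3600       ; refresh
                900        ; retry
                604800     ; expire
                86400 )    ; negative caching TTL
```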

Q: - What is an "A Record"?
A (Address) records assign IP addresses to domain names of computers. The IP address cannot have a dot at the end.

Q: - Explain "CNAME Record"?
Synonyms to domain names can be created using CNAME records. This is often referred to as 'creating aliases for computer names'.

Q: - What are "HINFO and TXT Records"?
HINFO and TXT records are for information only. An HINFO record has two items in its data part: the first is information about hardware, and the second is information about software. A TXT record contains a general data string in its data part.
Example:
test.com IN SOA ...
...
mail     IN   A       192.1.1.2
         IN   HINFO   My_Server UNIX
         IN   TXT     "my server"

Q: - What are "MX Records"?
MX records specify the mail server of the domain. An MX record shows to which computer mail for a particular domain should be sent. The MX record also includes a priority number, which is used when several computers can accept mail for the domain. The first attempt is to deliver the mail to the computer with the highest priority (lowest value). If this attempt fails, the mail goes to the next computer (with a higher priority value), and so on.

test.com IN SOA ...
...
mail     IN   A       192.1.1.2
         IN   HINFO   AlphaServer UNIX
         IN   TXT     "my server"
         IN   MX 30   mail2.nextstep4it.com.
         IN   MX 20   mail3.nextstep4it.com.
         IN   MX 10   mail2.nextstep4it.com.

Q: - Explain "PTR Records"?
A Pointer Record (PTR) is used to translate an IP address into a domain name.
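A reverse zone entry for the 192.1.1.2 address used in the earlier examples might look like this sketch:

```
; in the zone file for 1.1.192.in-addr.arpa
2    IN    PTR    mail.test.com.
```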

Q: - What is Dynamic DNS?
Dynamic DNS is a method of keeping a domain name linked to a changing IP address, since not all computers use static IP addresses. Typically, when a user connects to the Internet, the user's ISP assigns an unused IP address from a pool of IP addresses, and this address is used only for the duration of that specific connection. This method of dynamically assigning addresses extends the usable pool of available IP addresses. A dynamic DNS service provider uses a special program that runs on the user's computer, contacting the DNS service each time the IP address provided by the ISP changes and subsequently updating the DNS database to reflect the change in IP address.
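With BIND, dynamic updates can be pushed with the nsupdate utility. A minimal command script, assuming the hypothetical names below and a server configured to allow the update, looks like:

```
server 192.1.1.2
zone test.com
update delete host1.test.com. A
update add host1.test.com. 60 A 192.1.1.50
send
```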

Q: - What is the role of "named-checkconf Utility"?
The named-checkconf utility checks the syntax of the named.conf configuration file.
Syntax: named-checkconf    [-t directory] [filename]

Q: - What is the role of "named-checkzone Utility"?
The named-checkzone utility checks the syntax and consistency of the zone file.
Syntax:     named-checkzone [-dgv]   [-c class] zone   [filename]
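Typical invocations of the two utilities look like this (the file names are assumed for illustration):

```
# check the main configuration file
named-checkconf /etc/named.conf

# check the test.com zone against its zone file
named-checkzone test.com /var/named/test.com.zone
```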



Scanning for new LUNs on Linux servers

First list the fibre channel host adapters on the system:

# ls /sys/class/fc_host
host0  host1  host2  host3

Then, for each host, issue a loop initialization (LIP) and rescan the SCSI bus:

echo "1" > /sys/class/fc_host/host0/issue_lip
echo "- - -" > /sys/class/scsi_host/host0/scan
echo "1" > /sys/class/fc_host/host1/issue_lip
echo "- - -" > /sys/class/scsi_host/host1/scan
echo "1" > /sys/class/fc_host/host2/issue_lip
echo "- - -" > /sys/class/scsi_host/host2/scan
echo "1" > /sys/class/fc_host/host3/issue_lip
echo "- - -" > /sys/class/scsi_host/host3/scan

And in other cases I use the following script to prod the sysfs scan and issue_lip entries directly:

 

#!/bin/bash

# Seconds to wait between rescans, to let devices settle
SLEEP_INTERVAL=300

echo "Scanning all fibre channel host adapters"

for i in $(ls /sys/class/fc_host)
do
    echo "Rescanning /sys/class/fc_host/${i}:"

    echo "  Issuing a loop initialization on ${i}:"
    echo "1" > "/sys/class/fc_host/${i}/issue_lip"

    echo "  Scanning ${i} for new devices:"
    echo "- - -" > "/sys/class/scsi_host/${i}/scan"

    echo "Sleeping for ${SLEEP_INTERVAL} seconds"
    sleep ${SLEEP_INTERVAL}
done

=================


1) Check for the newly assigned LUN using the command below and compare the output with a saved copy of the previous multipath listing.

# multipath -ll

-------------------------------------------------------

mpath4 (36006016069502200b2fbd50e3173e011) dm-4 DGC,RAID 5

[size=400G][features=1 queue_if_no_path][hwhandler=1 emc][rw]

\_ round-robin 0 [prio=2][active]

 \_ 3:0:0:4 sdf 8:80   [active][ready]

 \_ 3:0:2:4 sdr 65:16  [active][ready]

\_ round-robin 0 [prio=0][enabled]

 \_ 3:0:1:4 sdl 8:176  [active][ready]

 \_ 3:0:3:4 sdx 65:112 [active][ready]

-------------------------------------------------------------------

In this case mpath4 is the newly assigned LUN, so we can continue with the activity.
 

2) Create the partition as below.

# fdisk /dev/mapper/mpath4
n    (create a new partition; accept the defaults for a single full-size partition)
t    (change the partition type)
8e   (Linux LVM)
w    (write the partition table and exit)

# partprobe /dev/mapper/mpath4
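The interactive fdisk dialogue can also be scripted. This is only a sketch of that idea, feeding fdisk the same keystrokes non-interactively; run against the wrong device it will destroy data, so double-check the device path:

```
# n = new primary partition 1 (defaults), t = set type, 8e = Linux LVM, w = write
printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk /dev/mapper/mpath4
partprobe /dev/mapper/mpath4
```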

 
3) Initialize the new partition as an LVM physical volume and check it using the commands below.

 
# pvs

---------------------------------------------------------

PV                   VG         Fmt  Attr PSize    PFree

  /dev/dm-5            VolGroup01 lvm2 a-   1016.00M    0

  /dev/dm-6            VolGroup01 lvm2 a-     19.99G    0

  /dev/dm-7            VolGroup01 lvm2 a-    399.99G    0

  /dev/dm-8            VolGroup01 lvm2 a-     69.99G    0

  /dev/mapper/mpath4p1 VolGroup01 lvm2 a-    100.00G 4.00M

  /dev/sda6            VolGroup00 lvm2 a-    109.88G    0

--------------------------------------------------------------------------------

# pvcreate /dev/mapper/mpath4p1

 
# vgs

-----------------------------------------------------------------------------

VG         #PV #LV #SN Attr   VSize   VFree 

  VolGroup00   1   4   0 wz--n- 110.47G  56.56G

  VolGroup01   6   4   0 wz--n- 591.96G 200.00M

-------------------------------------------------------------------

 
# vgextend    VolGroup01   /dev/mapper/mpath4p1

# lvs

---------------------------------------------------------

LV          VG         Attr   LSize   Origin Snap%  Move Log Copy%  Convert

  backup      VolGroup00 -wi-a-  54.88G                                     

  home        VolGroup00 -wi-ao  10.00G                                     

  tmp         VolGroup00 -wi-ao   5.00G                                     

  usr         VolGroup00 -wi-ao  10.00G                                      

  var         VolGroup00 -wi-ao  30.00G                                     

  application VolGroup01 -wi-ao 569.96G                                     

  logs        VolGroup01 -wi-ao  20.00G                                      

  redo        VolGroup01 -wi-ao   1.00G 

--------------------------------------------------------------------------------

# lvextend -L +99.99G /dev/VolGroup01/application

# e2fsck -f /dev/VolGroup01/application

# resize2fs /dev/VolGroup01/application
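Putting the LVM steps together: on recent LVM versions the filesystem resize can be folded into lvextend with -r (--resizefs); otherwise extend the logical volume and resize the filesystem separately. This is a sketch using the device names from the example above; note that e2fsck -f expects the filesystem to be unmounted:

```
pvcreate /dev/mapper/mpath4p1
vgextend VolGroup01 /dev/mapper/mpath4p1

# either: extend the LV and resize the filesystem in one step
lvextend -r -L +99.99G /dev/VolGroup01/application

# or: extend, then check and resize manually
# lvextend -L +99.99G /dev/VolGroup01/application
# e2fsck -f /dev/VolGroup01/application
# resize2fs /dev/VolGroup01/application
```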