
Archive for September, 2015

Create an `ext4` Partition, Format and Mount Using parted (`fdisk` cannot create partitions larger than 2TB).

September 30, 2015

Create a partition, format it, and mount it using parted.

Below is an image of how the partition layout is organized (courtesy of Wikipedia).

Logging into the server

First, let's check the disk with fdisk to see how much space we have on the server.
Using username "root".
root@192.168.100.44's password:
Last login: Wed Sep 30 13:22:13 2015 from 192.168.100.2
[root@my-server ~]# fdisk -l /dev/sdb

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

Disk /dev/sdb: 13196.0 GB, 13196018581504 bytes
255 heads, 63 sectors/track, 1604324 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      267350  2147483647+  ee  GPT
Checking the current disk usage.
[root@my-server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VG-LV_ROOT
                       96G  1.9G   90G   2% /
tmpfs                 127G     0  127G   0% /dev/shm
/dev/sda2             976M   32M  894M   4% /boot
/dev/sda1            1022M  276K 1022M   1% /boot/efi
/dev/mapper/VG-LV_HOME
                      976M  1.3M  924M   1% /home
/dev/mapper/VG-LV_VAR
                      998G  1.5G  946G   1% /var
Let's start partitioning the RAID volume on the server.
[root@my-server ~]# parted /dev/sdb
GNU Parted 2.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: DELL PERC H730 Mini (scsi)
Disk /dev/sdb: 13196GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes
(parted) unit GB
(parted) mkpart primary 1MB 13196GB
(parted) print
Model: DELL PERC H730 Mini (scsi)
Disk /dev/sdb: 13196GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End      Size     File system  Name     Flags
 1      0.00GB  13196GB  13196GB  ext4         primary

(parted) quit
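As an aside, the same result can usually be scripted non-interactively (a sketch, not from the original session; it destroys the existing label just like the interactive run above, and the 0%/100% bounds let parted pick aligned boundaries):
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary ext4 0% 100%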
The partition is created. Now we will format it as ext4.
[root@my-server ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
732422144 inodes, 2929687296 blocks
146484364 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
89407 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
        2560000000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 22 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@my-server ~]#
[root@my-server ~]#
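The mkfs output above mentions that the filesystem will be checked every 22 mounts or 180 days. If you prefer to disable these periodic checks (optional, purely a matter of policy, and not part of the original walkthrough), tune2fs can override them:
tune2fs -c 0 -i 0 /dev/sdb1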
Checking that the partition is ready. Now we need to mount it.
[root@my-server ~]# lsblk
NAME                  MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
sda                     8:0    0    1.1T  0 disk
├─sda1                  8:1    0      1G  0 part /boot/efi
├─sda2                  8:2    0      1G  0 part /boot
└─sda3                  8:3    0    1.1T  0 part
  ├─VG-LV_ROOT (dm-0) 253:0    0   97.7G  0 lvm  /
  ├─VG-LV_SWAP (dm-1) 253:1    0      3G  0 lvm  [SWAP]
  ├─VG-LV_VAR (dm-2)  253:2    0 1013.6G  0 lvm  /var
  └─VG-LV_HOME (dm-3) 253:3    0      1G  0 lvm  /home
sdb                     8:16   0     12T  0 disk
└─sdb1                  8:17   0     12T  0 part /data
Creating a mount point.
[root@my-server ~]# mkdir /data    
Updating /etc/fstab. Add the line below to /etc/fstab so the mount persists across reboots.
#------------------------------------------------------------------------
# drive      |     dir  |    fs-type     |    options     | dump  |  pass
#-----------------------------------------------------------------------
/dev/sdb1         /data       ext4           defaults        0         0
Here are more details about what each column means (a UUID-based alternative is sketched after this list).
  1. file system : The partition or storage device to be mounted.
  2. dir : The mount point where the device is mounted.
  3. fs-type : The file system type of the partition or storage device to be mounted. Many different file systems are supported: ext2, ext3, ext4, btrfs, reiserfs, xfs, jfs, smbfs, iso9660, vfat, ntfs, swap and auto. The auto type lets the mount command guess what type of file system is used. This is useful for optical media (CD/DVD).
  4. options : Mount options of the filesystem to be used. See the mount man page. Please note that some options are specific to filesystems; to discover them see below in the aforementioned mount man page.
  5. dump : Used by the dump utility to decide when to make a backup. Dump checks the entry and uses the number to decide if a file system should be backed up. Possible entries are 0 and 1. If 0, dump will ignore the file system; if 1, dump will make a backup. Most users will not have dump installed, so they should put 0 for the dump entry.
  6. pass : Used by fsck to decide which order filesystems are to be checked. Possible entries are 0, 1 and 2. The root file system should have the highest priority 1 (unless its type is btrfs, in which case this field should be 0) – all other file systems you want to have checked should have a 2. File systems with a value 0 will not be checked by the fsck utility.
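As a side note (not part of the original steps): device names like /dev/sdb1 can change when disks are added or reordered, so mounting by UUID is often safer. Get the UUID with blkid and use it in place of the device path; the UUID below is just a placeholder.
blkid /dev/sdb1
# then in /etc/fstab:
# UUID=<uuid-reported-by-blkid>   /data   ext4   defaults   0   0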
Here is what the contents look like.
[root@my-server ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue May 19 15:57:32 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VG-LV_ROOT  /                       ext4    defaults        1 1
UUID=4185b123-5123-45ca-b123-de6d1da123e2 /boot                   ext4    defaults        1 2
UUID=EB97-DBDC          /boot/efi               vfat    umask=0077,shortname=winnt 0 0
/dev/mapper/VG-LV_HOME  /home                   ext4    defaults        1 2
/dev/mapper/VG-LV_VAR   /var                    ext4    defaults        1 2
/dev/mapper/VG-LV_SWAP  swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
/dev/sdb1               /data                   ext4    defaults        0 0
[root@my-server ~]#
Execute mount -a to mount everything listed in fstab.
[root@my-server ~]# mount -a 
Display mount entries.
[root@my-server ~]# mount
/dev/mapper/VG-LV_ROOT on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda2 on /boot type ext4 (rw)
/dev/sda1 on /boot/efi type vfat (rw,umask=0077,shortname=winnt)
/dev/mapper/VG-LV_HOME on /home type ext4 (rw)
/dev/mapper/VG-LV_VAR on /var type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/sdb1 on /data type ext4 (rw)
Checking that the new filesystem is mounted.
[root@my-server ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VG-LV_ROOT
                       96G  1.9G   90G   2% /
tmpfs                 127G     0  127G   0% /dev/shm
/dev/sda2             976M   32M  894M   4% /boot
/dev/sda1            1022M  276K 1022M   1% /boot/efi
/dev/mapper/VG-LV_HOME
                      976M  1.3M  924M   1% /home
/dev/mapper/VG-LV_VAR
                      998G  1.5G  946G   1% /var
/dev/sdb1              13T   31M   13T   1% /data
[root@my-server ~]#
Now we are all good.
Important Links :
http://ift.tt/1KLr2lM
http://ift.tt/1q2qjPZ



Creating a RHEL cluster with Virtual IP using CMAN and Pacemaker.

September 29, 2015

Creating a two-node RHEL cluster with Virtual IP using CMAN and Pacemaker.

Important Links :
http://ift.tt/1O5DMaN
http://ift.tt/1O5DMaO

Configuring Repo on RHEL 6.6

[root@waepprrkhe001 ~]# cat /etc/yum.repos.d/centos.repo
[centos-6-base]
name=CentOS-$releasever - Base
mirrorlist=http://ift.tt/1faCRks
baseurl=http://ift.tt/1E1l8vi
enabled=1
gpgkey=http://ift.tt/1O5DO2n
[root@waepprrkhe001 ~]#

Installation and initial configuration

Install the required packages on both machines:
yum install pacemaker cman pcs ccs resource-agents
Set up and configure the cluster on the primary machine, changing vipcluster, primary.server.com and secondary.server.com as needed:
ccs -f /etc/cluster/cluster.conf --createcluster vipcluster
ccs -f /etc/cluster/cluster.conf --addnode primary.server.com
ccs -f /etc/cluster/cluster.conf --addnode secondary.server.com
ccs -f /etc/cluster/cluster.conf --addfencedev pcmk agent=fence_pcmk
ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect primary.server.com
ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect secondary.server.com
ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk primary.server.com pcmk-redirect port=primary.server.com
ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk secondary.server.com pcmk-redirect port=secondary.server.com
Copy /etc/cluster/cluster.conf from the primary server to the secondary server in the cluster.
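For example (assuming root SSH access between the nodes):
scp /etc/cluster/cluster.conf root@secondary.server.com:/etc/cluster/cluster.conf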
It’s necessary to turn off quorum checking, so do this on both machines:
echo "CMAN_QUORUM_TIMEOUT=0" >> /etc/sysconfig/cman

Start the services

Start up the services on both servers.
service cman start
service pacemaker start
Make sure both services start automatically at boot:
chkconfig cman on
chkconfig pacemaker on

Configure and create floating IP

Configure the cluster on the primary server.
pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore
Create the virtual IP on the primary server. This VIP floats between the two servers:
if the primary goes down, the IP is reassigned to the secondary server.
pcs resource create vipbalancerip ocf:heartbeat:IPaddr2 ip=192.168.0.100 cidr_netmask=32 op monitor interval=30s
pcs constraint location vipbalancerip prefers primary.server.com=INFINITY

Cluster administration

To monitor the status of the cluster:
pcs status
Here is the output from the primary.
[root@waepprrkhe001 ~]# pcs status
Cluster name: vipcluster
Last updated: Mon Sep 28 20:53:57 2015
Last change: Mon Sep 28 19:52:47 2015
Stack: cman
Current DC: primary.server.com - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured
1 Resources configured


Online: [ primary.server.com secondary.server.com ]

Full list of resources:

 livefrontendIP0        (ocf::heartbeat:IPaddr2):       Started primary.server.com

[root@waepprrkhe001 ~]#    
To show the full cluster configuration:
pcs config
Here is the output from the primary.
[root@waepprrkhe001 ~]# pcs config
Cluster Name: vipcluster
Corosync Nodes:
 primary.server.com secondary.server.com
Pacemaker Nodes:
 primary.server.com secondary.server.com

Resources:
 Resource: livefrontendIP0 (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=192.168.0.100 cidr_netmask=32
  Operations: start interval=0s timeout=20s (livefrontendIP0-start-interval-0s)
              stop interval=0s timeout=20s (livefrontendIP0-stop-interval-0s)
              monitor interval=30s (livefrontendIP0-monitor-interval-30s)

Stonith Devices:
Fencing Levels:

Location Constraints:
  Resource: livefrontendIP0
    Enabled on: primary.server.com (score:INFINITY) (id:location-livefrontendIP0-primary.server.com-INFINITY)
Ordering Constraints:
Colocation Constraints:

Resources Defaults:
 No defaults set
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: cman
 dc-version: 1.1.11-97629de
 no-quorum-policy: ignore
 stonith-enabled: false
[root@waepprrkhe001 ~]#

Failover testing.

Shut down the secondary server and check the status from the primary.
[root@waepprrkhe001 ~]# pcs status
Cluster name: vipcluster
Last updated: Mon Sep 28 20:08:00 2015
Last change: Mon Sep 28 19:52:47 2015
Stack: cman
Current DC: primary.server.com - partition WITHOUT quorum
Version: 1.1.11-97629de
2 Nodes configured
1 Resources configured


Online: [ primary.server.com ]
OFFLINE: [ secondary.server.com ]

Full list of resources:

 livefrontendIP0        (ocf::heartbeat:IPaddr2):       Started primary.server.com
Shut down the primary server and check the status from the secondary.
[root@waepprrkhe002 ~]# pcs status
Cluster name: vipcluster
Last updated: Mon Sep 28 20:05:30 2015
Last change: Mon Sep 28 19:52:47 2015
Stack: cman
Current DC: secondary.server.com - partition WITHOUT quorum
Version: 1.1.11-97629de
2 Nodes configured
1 Resources configured


Online: [ secondary.server.com ]
OFFLINE: [ primary.server.com ]

Full list of resources:

 livefrontendIP0        (ocf::heartbeat:IPaddr2):       Started secondary.server.com
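As an alternative to powering a node off, failover can usually be exercised by putting a node into standby and back (a sketch; subcommand names can vary slightly between pcs versions):
pcs cluster standby primary.server.com
pcs status
pcs cluster unstandby primary.server.com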



Setting up Pentaho Data Integration 5.4.1 with Hadoop

September 21, 2015

Setting up Pentaho Data Integration 5.4.1 with a Hadoop Cluster (Cloudera Manager)

Setting up the Pentaho server involves the steps below. We will refer to Pentaho Data Integration as PDI from now on.
  1. Install PDI.
  2. Make the ETL (PDI) server a gateway for HDFS, Hive, YARN and HBase from Cloudera Manager.
  3. Update the PDI configuration for the Hadoop setup.

Installing Pentaho Server 5.4.1 Community Edition.

Installation is very straightforward: we get a pdi.zip which needs to be extracted to the required location. Here we will extract it into the /opt directory on our RHEL ETL PDI server.
Extract and move the directory to /opt
[root@pentaho-server ~]# unzip pdi.zip
[root@pentaho-server ~]# mv data-integration /opt
Create soft links in the /opt directory; we have used kettle and pentaho as the soft links.
[root@pentaho-server ~]# cd /opt
[root@pentaho-server opt]# ln -s data-integration pentaho
[root@pentaho-server opt]# ln -s data-integration kettle
[root@pentaho-server opt]# ls -l
total 12
drwxr-xr-x  12 root         root         4096 Sep 14 11:43 data-integration
lrwxrwxrwx   1 root         root           16 Sep  7 14:37 kettle -> data-integration
lrwxrwxrwx   1 root         root           16 Sep 15 10:17 pentaho -> data-integration
drwxr-xr-x.  2 root         root         4096 May 17  2013 rh
[root@pentaho-server opt]#
That's it, our installation is complete. 🙂

Create ETL (PDI) Server Gateway for hdfs, hive, yarn, hbase from Cloudera Manager.

We are doing this for two reasons.
  1. The PDI Hadoop (Cloudera) configuration can be managed from Cloudera Manager.
  2. It lets the PDI server communicate with the Hadoop cluster directly.
Before we add the gateway, this server should be added as a host in Cloudera Manager so that all the required packages are installed on it.
Steps to add a host on Cloudera Manager.
NOTE : Before we start, make sure the /etc/hosts file is consistent on the Hadoop cluster and on the ETL PDI server, so that the ETL PDI server can communicate with the Hadoop cluster; an illustrative sketch follows.
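This sketch uses placeholder IPs and hostnames only (they are not from the actual cluster); your entries will differ:
192.168.100.10   cm-master.example.com      cm-master       # Cloudera Manager / NameNode host
192.168.100.11   datanode01.example.com     datanode01      # cluster worker
192.168.100.20   pentaho-server.example.com pentaho-server  # ETL PDI server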
  1. Click on Hosts.
  2. Add a new host to Cluster.
  3. Add a Host to cluster wizard.
  4. Enter the IP address or FQDN for PDI server.
  5. Install both the JDK and the JCE unlimited strength policy files, as we have a Kerberos-enabled cluster.
  6. Complete the wizard, and you will see the host added to Cloudera Manager. Just install all the components without assigning any roles to the host.
  7. You will see that the host has no roles assigned to it.
Now we are ready to make the PDI server a gateway.
Steps to create a gateway in Cloudera Manager. We will take hdfs as an example, but the steps are identical for the other services.
NOTE : we can add all the gateway roles to the host and deploy the client configuration all at once.
  1. Login as an Admin user.
  2. Go to the service we want to create a gateway for; we can create gateways for hdfs, yarn, hbase, hive, and impala.
  3. Click hdfs > Instances tab > Add Role to Instance button > Select gateway and add the server in the text box.
  4. Continue and Finish. This adds the configuration information in Cloudera Manager, but next we need to deploy the client configuration.
  5. And we are done.

Testing Access to Hadoop from ETL PDI Server.

Now that we have made the server a gateway, let's test that the configuration works.
Log in as the hadoop user.
[root@pentaho-server ~]# su hadoop-user
[hadoop-user@pentaho-server ~]$ cd ~
[hadoop-user@pentaho-server ~]$ pwd
/home/hadoop-user
Check if we are able to access the HDFS directories. All good.
[hadoop-user@pentaho-server ~]$ hadoop fs -ls /
Found 6 items
drwxr-xr-x   - hdfs  supergroup          0 2015-05-29 15:32 /benchmarks
drwxr-xr-x   - hbase hbase               0 2015-09-14 15:10 /hbase
drwxrwxr-x   - solr  solr                0 2015-05-29 11:49 /solr
drwxrwxrwx   - hdfs  supergroup          0 2015-09-14 17:12 /tmp
drwxr-xr-x   - hdfs  supergroup          0 2015-05-29 16:22 /use
drwxrwxr-x   - hdfs  supergroup          0 2015-09-15 16:30 /user
Let's run a test job; it should run successfully.
[hadoop-user@pentaho-server pentaho_test]$ hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 10 1000
...
verbose
...
MR JOB Completed successfully
Once we know that we are able to run hadoop commands and an MR job completes, we are done.

Updating the PDI configuration to communicate with the Hadoop setup.

Before we go into this, we need to get the configuration from Cloudera Manager.
  1. Go to the hdfs service.
  2. In the top right corner, click Actions > Download Client Configuration.
Do this for hdfs, hive and yarn, then move these files to the PDI server so that we can configure it.
Now let's continue updating the config on the PDI server.
[root@pentaho-server opt]# ls -l
total 12
drwxr-xr-x   4 cloudera-scm cloudera-scm 4096 Sep 14 13:12 cloudera
drwxr-xr-x  12 root         root         4096 Sep 14 11:43 data-integration
lrwxrwxrwx   1 root         root           16 Sep  7 14:37 kettle -> data-integration
lrwxrwxrwx   1 root         root           16 Sep 15 10:17 pentaho -> data-integration
drwxr-xr-x.  2 root         root         4096 May 17  2013 rh
Change directory to /opt/pentaho/plugins/pentaho-big-data-plugin/.
[root@pentaho-server opt]# cd /opt/pentaho/plugins/pentaho-big-data-plugin/
[root@pentaho-server pentaho-big-data-plugin]# ls -l
total 96640
drwxr-xr-x 7 root root     4096 Jun 14 14:35 hadoop-configurations
drwxr-xr-x 2 root root     4096 Jun 14 14:35 lib
-rw-r--r-- 1 root root   901809 Jun 14 12:55 pentaho-big-data-plugin-5.4.0.1-130.jar
-rw-r--r-- 1 root root 98034663 Jun 14 12:55 pentaho-mapreduce-libraries.zip
-rw-r--r-- 1 root root     2383 Sep 15 10:01 plugin.properties
drwxr-xr-x 2 root root     4096 Jun 14 12:55 plugins
Update plugin.properties with active.hadoop.configuration=cdh53 and leave the other parameters as they are (a shell one-liner for this edit is sketched after the file listing below).
We are using cdh53 because we run CDH 5.4 in our setup; the cdh53 shim works for both CDH 5.3 and CDH 5.4.
[root@pentaho-server pentaho-big-data-plugin]# cat plugin.properties
# The Hadoop Configuration to use when communicating with a Hadoop cluster. This is used for all Hadoop client tools
# including HDFS, Hive, HBase, and Sqoop.
# For more configuration options specific to the Hadoop configuration choosen
# here see the config.properties file in that configuration's directory.
active.hadoop.configuration=cdh53

# Path to the directory that contains the available Hadoop configurations
hadoop.configurations.path=hadoop-configurations

# Version of Kettle to use from the Kettle HDFS installation directory. This can be set globally here or overridden per job
# as a User Defined property. If not set we will use the version of Kettle that is used to submit the Pentaho MapReduce job.
pmr.kettle.installation.id=

# Installation path in HDFS for the Pentaho MapReduce Hadoop Distribution
# The directory structure should follow this structure where {version} can be configured through the Pentaho MapReduce
# User Defined properties as kettle.runtime.version
#
pmr.kettle.dfs.install.dir=/opt/pentaho/mapreduce

# Enables the use of Hadoop's Distributed Cache to store the Kettle environment required to execute Pentaho MapReduce
# If this is disabled you must configure all TaskTracker nodes with the Pentaho for Hadoop Distribution
# @deprecated This is deprecated and is provided as a migration path for existing installations.
pmr.use.distributed.cache=true

# Pentaho MapReduce runtime archive to be preloaded into http://ift.tt/1QU0Ves
pmr.libraries.archive.file=pentaho-mapreduce-libraries.zip

# Additional plugins to be copied when Pentaho MapReduce's Kettle Environment does not exist on DFS. This should be a comma-separated
# list of plugin folder names to copy.
# e.g. pmr.kettle.additional.plugins=my-test-plugin,steps/DummyPlugin
pmr.kettle.additional.plugins=
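Here is the one-liner mentioned above; it simply edits the active.hadoop.configuration key in place (equivalent to changing it by hand):
sed -i 's/^active.hadoop.configuration=.*/active.hadoop.configuration=cdh53/' plugin.properties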
Now let's go into hadoop-configurations > cdh53 to update the configuration from Cloudera Manager.
[root@pentaho-server pentaho-big-data-plugin]# ls
hadoop-configurations  lib  pentaho-big-data-plugin-5.4.0.1-130.jar  pentaho-mapreduce-libraries.zip  plugin.properties  plugins
[root@pentaho-server pentaho-big-data-plugin]# cd hadoop-configurations/
[root@pentaho-server hadoop-configurations]# ls -l
total 20
drwxr-xr-x 3 root root 4096 Sep 16 11:46 cdh53
drwxr-xr-x 3 root root 4096 Jun 14 14:35 emr34
drwxr-xr-x 3 root root 4096 Sep 15 10:01 hadoop-20
drwxr-xr-x 3 root root 4096 Jun 14 14:35 hdp22
drwxr-xr-x 3 root root 4096 Jun 14 14:35 mapr401
[root@pentaho-server hadoop-configurations]# cd cdh53/
In cdh53, update all the configuration files with the Cloudera Manager configurations.
[root@pentaho-server cdh53]# ls -l
total 212
-rw-r--r-- 1 root root    835 Jun 14 12:49 config.properties
-rwxr-xr-x 1 root root   3798 Sep 16 11:16 core-site.xml
-rwxr-xr-x 1 root root   2546 Sep 16 11:16 hadoop-env.sh
-rwxr-xr-x 1 root root   3568 Sep 16 11:16 hdfs-site.xml
-rwxr-xr-x 1 root root   1126 Sep 16 11:46 hive-env.sh
-rwxr-xr-x 1 root root   2785 Sep 16 11:46 hive-site.xml
drwxr-xr-x 4 root root   4096 Jun 14 14:35 lib
-rwxr-xr-x 1 root root    314 Sep 16 11:16 log4j.properties
-rwxr-xr-x 1 root root   4678 Sep 16 11:16 mapred-site.xml
-rw-r--r-- 1 root root 128746 Jun 14 12:49 pentaho-hadoop-shims-cdh53-54.2015.06.01.jar
-rw-r--r-- 1 root root   6871 Jun 14 12:49 pentaho-hadoop-shims-cdh53-hbase-comparators-54.2015.06.01.jar
-rwxr-xr-x 1 root root    315 Sep 16 11:16 ssl-client.xml
-rwxr-xr-x 1 root root   1141 Sep 16 11:16 topology.map
-rwxr-xr-x 1 root root   1510 Sep 16 11:16 topology.py
-rwxr-xr-x 1 root root   6208 Sep 16 11:16 yarn-site.xml
[root@pentaho-server cdh53]#
Add the following files as listed below. You can get these files from Cloudera Manager.
Example : In Cloudera Manager, Clusters -> Services -> hdfs -> Actions (top right corner) -> Download Client Configuration.
Do the same for yarn and the other services; a sketch of copying the files into place follows the list.
From hdfs configuration.
core-site.xml 
hadoop-env.sh 
hdfs-site.xml 
log4j.properties 
ssl-client.xml 
topology.map 
topology.py 
From hive configuration.
hive-env.sh 
hive-site.xml 
From yarn configuration.
mapred-site.xml
yarn-site.xml
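Here is a sketch of copying the downloaded client configurations into the shim directory. The archive and folder names below (hdfs-clientconfig.zip / hadoop-conf, and so on) are what Cloudera Manager typically produces, but they may differ in your version, so adjust as needed:
cd /root/client-configs                  # wherever the downloaded zips were copied
unzip hdfs-clientconfig.zip              # extracts hadoop-conf/
unzip hive-clientconfig.zip              # extracts hive-conf/
unzip yarn-clientconfig.zip              # extracts yarn-conf/
cp hadoop-conf/* hive-conf/* yarn-conf/* /opt/pentaho/plugins/pentaho-big-data-plugin/hadoop-configurations/cdh53/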
We are done with the configuration; now let's test it.

Testing PDI Hadoop.

We will do a simple test by copying a file from the local drive to the Hadoop cluster.
load_hdfs job information: it is a simple job to test connectivity between PDI and the Hadoop cluster.
The important part of the file is where we give the source and destination locations.
source_filefolder -- file:///home/hadoop-user/pentaho_test/update_raw.txt
destination_filefolder -- hdfs://nameservice-ha/user/hadoop-user/weblogs/raw
Here is the complete file load_hdfs.kjb.
http://ift.tt/1QU0Vey
Here is the Example we are trying out.
http://ift.tt/1qjsh1V
Let's create a location in HDFS for the file to be put into.
[hadoop-user@pentaho-server pentaho_test]$ hadoop fs -mkdir -p /user/hadoop-user/weblogs/raw
[hadoop-user@pentaho-server pentaho_test]$ ls -l
total 83124
-rw-r--r-- 1 hadoop-user hadoop-user     6160 Sep 16 11:26 load_hdfs.kjb
-rwxr-xr-x 1 hadoop-user hadoop-user 77908174 Jan 10  2012 weblogs_rebuild.txt
We have a Kettle job that will load the weblogs_rebuild.txt file to the Hadoop cluster, into the location created earlier, /user/hadoop-user/weblogs/raw.
[hadoop-user@pentaho-server pentaho_test]$ sh /opt/pentaho/kitchen.sh -file=/home/hadoop-user/pentaho_test/load_hdfs.kjb
2015/09/16 11:26:37 - Kitchen - Start of run.
2015/09/16 11:26:38 - load_hdfs - Start of job execution
2015/09/16 11:26:38 - load_hdfs - Starting entry [Copy Files]
2015/09/16 11:26:38 - Copy Files - Starting ...
2015/09/16 11:26:38 - Copy Files - Processing row source File/folder source : [file:///home/hadoop-user/pentaho_test/weblogs_rebuild.txt] ... destination file/folder : [hdfs://nameservice-ha/user/hadoop-user/weblogs/raw]... wildcard : [^.*\.txt]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:///opt/data-integration/plugins/pentaho-big-data-plugin/hadoop-configurations/cdh53/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:///opt/data-integration/plugins/pentaho-big-data-plugin/hadoop-configurations/cdh53/lib/client/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/data-integration/launcher/../lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/data-integration/plugins/pentaho-big-data-plugin/lib/slf4j-log4j12-1.7.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://ift.tt/1f12hSy for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015/09/16 11:26:40 - load_hdfs - Finished job entry [Copy Files] (result=[true])
2015/09/16 11:26:40 - load_hdfs - Job execution finished
2015/09/16 11:26:40 - Kitchen - Finished!
2015/09/16 11:26:40 - Kitchen - Start=2015/09/16 11:26:37.578, Stop=2015/09/16 11:26:40.932
2015/09/16 11:26:40 - Kitchen - Processing ended after 3 seconds.
The job completed successfully; now let's check the Hadoop location.
[hadoop-user@pentaho-server pentaho_test]$ hadoop fs -ls  /user/hadoop-user/weblogs/raw
Found 1 items
-rw-r--r--   3 hadoop-user hadoop-user   77908174 2015-09-16 11:26 /user/hadoop-user/weblogs/raw/weblogs_rebuild.txt
Looks like we have the file copied to HDFS, so we are good.
A few links which are helpful:
http://ift.tt/1tgZAqD
http://ift.tt/1LrR1k3



Simple Steps to Start with SSSD Configuration.

September 13, 2015

RHEL with AD using SSSD.

My previous post assumed many setup constraints which might not apply to your environment.
Here are some simple steps to start off with the SSSD configuration. We will still assume we have two domains to authenticate against: ABCDOMAIN and XYZDOMAIN.

Preparation for SSSD.

Prerequisite installations.
yum install sssd sssd-client krb5-workstation samba openldap-clients openssl authconfig

Update /etc/resolv.conf on slave nodes.

Then we update the /etc/resolv.conf file with the IPs of the DNS servers (172.14.14.174 for xyzserver and 172.14.14.141 for abcserver), so that the node can resolve and reach the AD servers.
; generated by /sbin/dhclient-script
nameserver 172.14.14.174 ; IP for the DNS server, this happens to be the xyzserver.
nameserver 172.14.14.141 ; IP for the DNS server, this happens to be the abcserver.
Testing if ping works.
[root@slave-server ~]# ping xyzserver.xyzdomain.com
PING xyzserver.xyzdomain.com (172.14.14.174) 56(84) bytes of data.
64 bytes from 172.14.14.174: icmp_seq=1 ttl=127 time=0.866 ms
64 bytes from 172.14.14.174: icmp_seq=2 ttl=127 time=1.09 ms
64 bytes from 172.14.14.174: icmp_seq=3 ttl=127 time=1.12 ms
64 bytes from 172.14.14.174: icmp_seq=4 ttl=127 time=0.933 ms
^C
--- xyzserver.xyzdomain.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 7042ms
rtt min/avg/max/mdev = 0.866/1.004/1.122/0.112 ms
[root@slave-server ~]#


[root@slave-server ~]# ping abcserver.abcdomain.com
PING abcserver.abcdomain.com (172.14.14.141) 56(84) bytes of data.
64 bytes from 172.14.14.141: icmp_seq=1 ttl=127 time=0.866 ms
64 bytes from 172.14.14.141: icmp_seq=2 ttl=127 time=1.09 ms
64 bytes from 172.14.14.141: icmp_seq=3 ttl=127 time=1.12 ms
64 bytes from 172.14.14.141: icmp_seq=4 ttl=127 time=0.933 ms
^C
--- abcserver.abcdomain.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 7042ms
rtt min/avg/max/mdev = 0.866/1.004/1.122/0.112 ms
[root@slave-server ~]#

Create a bind user on both domains ABCDOMAIN and XYZDOMAIN.

Open Active Directory and create a user called xyzdomainuser in XYZDOMAIN, and abcdomainuser in ABCDOMAIN.
We will use these users to bind to the domains via ldap_default_bind_dn; we will get to this later on.

Setting krb5 configuration.

Set up the Kerberos client configuration (typically /etc/krb5.conf) to communicate with AD using Kerberos.
[libdefaults]
default_realm = XYZDOMAIN.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
[realms]
XYZDOMAIN.COM = {
kdc = xyzserver.xyzdomain.com
admin_server = xyzserver.xyzdomain.com
}

ABCDOMAIN.COM = {
kdc = abcserver.abcdomain.com
admin_server = abcserver.abcdomain.com
}

[domain_realm]
abcdomain.com = ABCDOMAIN.COM
.abcdomain.com = .ABCDOMAIN.COM
xyzdomain.com = XYZDOMAIN.COM
.xyzdomain.com = .XYZDOMAIN.COM

[logging]
kdc = FILE:/var/krb5/log/krb5kdc.log
admin_server = FILE:/var/krb5/log/kadmin.log
default = FILE:/var/krb5/log/krb5lib.log

Testing krb5 setup.

Once we have the configuration, we will use kinit to test it (test both users).
[root@slave-server ~]# kinit xyzdomainuser@XYZDOMAIN.COM
Password for xyzdomainuser@XYZDOMAIN.COM:
[root@slave-server ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: xyzdomainuser@XYZDOMAIN.COM

Valid starting     Expires            Service principal
09/12/15 08:37:56  09/12/15 18:38:03  krbtgt/XYZDOMAIN.COM@XYZDOMAIN.COM
        renew until 09/19/15 08:37:56
[root@slave-server ~]# klist -e
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: xyzdomainuser@XYZDOMAIN.COM

Valid starting     Expires            Service principal
09/12/15 08:37:56  09/12/15 18:38:03  krbtgt/XYZDOMAIN.COM@XYZDOMAIN.COM
        renew until 09/19/15 08:37:56, Etype (skey, tkt): arcfour-hmac, aes256-cts-hmac-sha1-96
Now we are able to reach the domain controller and get a TGT as well, so we are ready for the next steps.

Testing ldapsearch from the Linux server.

This step is to make sure that our Active Directory is accessible and that we are able to search users and groups from the Linux nodes. Go to the Linux machine and execute the commands below.
 ldapsearch -v -x -H ldap://xyzserver.xyzdomain.com/ -D "cn=xyzdomainuser,cn=Users,dc=xyzdomain,dc=com" -W -b "cn=xyzuser2,ou=cmlab,dc=xyzdomain,dc=com"
ldapsearch -v -x -H ldap://abcserver.abcdomain.com/ -D "cn=abcdomainuser,cn=Users,dc=abcdomain,dc=com" -W -b "cn=xyzuser2,ou=cmlab,dc=abcdomain,dc=com"
Here are some more details about the options above.
More details here : http://ift.tt/1Q9oHCk
-v      Run in verbose mode, with many diagnostics written to standard output.
-x      Use simple authentication instead of SASL.
-H ldapuri
        Specify URI(s) referring to the ldap server(s).
-D binddn
        Use the Distinguished Name binddn to bind to the LDAP directory.
-W      Prompt for simple authentication.  This is used instead of specifying the password on the command line.
-b searchbase
        Use searchbase as the starting point for the search  instead  of the default.
Above we are trying to search for information about xyzuser2 using the xyzdomainuser bind user. When you execute the command above, you need to enter the password for xyzdomainuser. This assumes the xyzuser2 user is present in the domain.

Creating SSSD Configuration.

Finally we are ready to configure SSSD. Below is the SSSD configuration (sssd.conf) to connect to xyzdomain.com and abcdomain.com.
If we want to connect to multiple AD servers, we need to add a separate [domain/...] section, such as [domain/abcdomain.com], for each of them.
[sssd]
config_file_version = 2
debug_level = 0
domains = xyzdomain.com, abcdomain.com
services = nss, pam

[nss]
filter_groups = root
filter_users = root
reconnection_retries = 3
entry_cache_timeout = 3
entry_cache_nowait_percentage = 75
debug_level = 8
account_cache_expiration = 1

[pam]
reconnection_retries = 3

[domain/xyzdomain.com]
debug_level = 8
id_provider = ldap
auth_provider = ldap
chpass_provider = krb5
access_provider = simple
cache_credentials = false
min_id = 1000
ad_server = xyzserver.xyzdomain.com
ldap_uri = ldap://xyzserver.xyzdomain.com:389
ldap_schema = ad
krb5_realm = XYZDOMAIN.COM
ldap_id_mapping = true
cache_credentials = false
entry_cache_timeout = 3
ldap_referrals = false
ldap_default_bind_dn = CN=xyzdomainuser,CN=Users,DC=xyzdomain,DC=com
ldap_default_authtok_type = password
ldap_default_authtok = Welcome@123
fallback_homedir = /home/%u
ldap_user_home_directory = unixHomeDirectory

###################################################
# Update below with another AD server as required #
###################################################

[domain/abcdomain.com]
debug_level = 8
id_provider = ldap
auth_provider = ldap
chpass_provider = krb5
access_provider = simple
cache_credentials = false
min_id = 1000
ad_server = abcserver.abcdomain.com
ldap_uri = ldap://abcserver.abcdomain.com:389
ldap_schema = ad
krb5_realm = ABCDOMAIN.COM
ldap_id_mapping = true
cache_credentials = false
entry_cache_timeout = 3
ldap_referrals = false
ldap_default_bind_dn = CN=abcdomainuser,CN=Users,DC=abcdomain,DC=com
ldap_default_authtok_type = password
ldap_default_authtok = Welcome@123
fallback_homedir = /home/%u
ldap_user_home_directory = unixHomeDirectory
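Save this as /etc/sssd/sssd.conf. SSSD expects the file to be owned by root with restrictive permissions (it usually refuses to start otherwise), so:
chown root:root /etc/sssd/sssd.conf
chmod 600 /etc/sssd/sssd.conf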
Install oddjob-mkhomedir to auto-create the home directory whenever a user logs in.
yum install oddjob-mkhomedir    
Enable sssd, localauth and update the configuration.
authconfig --enablesssd --enablesssdauth --enablelocauthorize --update    
NOTE : Check sssd.conf again; sometimes authconfig will insert a default domain.
You can remove it and make the sssd.conf file similar to what we have above.
Start sssd services.
service sssd start
service oddjobd start
Testing our setup.
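A quick first check is that SSSD resolves the domain users through NSS (using the test users from above):
getent passwd xyzdomainuser
getent passwd abcdomainuser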
Checking our user, who is present in the Active Directory XYZDOMAIN.
[root@slave-server ~]# id xyzdomainuser
uid=62601149(xyzdomainuser) gid=62600513(Domain Users) groups=62600513(Domain Users),62601134(supergroup),62601133(hdfs)
[root@slave-server ~]# su xyzdomainuser
[xyzdomainuser@slave-server root]$ cd ~
[xyzdomainuser@slave-server ~]$ pwd
/home/xyzdomainuser
Next we try to login from remote.
[xyzdomainuser@slave-server ~]$ exit
exit
[root@slave-server ~]# ssh xyzdomainuser@slave-server
xyzdomainuser@slave-server's password:
Last login: Sat Sep 12 07:46:15 2015 from slave-server.xyzdomain.com
[xyzdomainuser@slave-server ~]$ pwd
/home/xyzdomainuser
[xyzdomainuser@slave-server ~]$ id
uid=62601149(xyzdomainuser) gid=62600513(Domain Users) groups=62600513(Domain Users),62601133(hdfs),62601134(supergroup)
[xyzdomainuser@slave-server ~]$
We are able to log in, and /home/xyzdomainuser is auto-created when the user logs in.
Now checking users for ABCDOMAIN.
[root@slave-server ~]# id abcdomainuser
uid=1916401111(abcdomainuser) gid=1916400513 groups=1916400513,1916401114(supergroup-test),1916401113(hadoop-test),1916401112,1916401112
[root@slave-server ~]# id abcdomainuser
uid=1916401111(abcdomainuser) gid=1916400513 groups=1916400513,1916401114(supergroup-test),1916401113(hadoop-test),1916401112,1916401112
[root@slave-server ~]# su abcdomainuser
sh-4.1$ pwd
/root
sh-4.1$ cd ~
sh-4.1$ pwd
/home/abcdomainuser
sh-4.1$ id
uid=1916401111(abcdomainuser) gid=1916400513 groups=1916400513,1916401112,1916401113(hadoop-test),1916401114(supergroup-test)
sh-4.1$ exit
exit
We are done.



Redhat Integration with Active Directory using SSSD.

September 13, 2015

Redhat Integration with Active Directory using SSSD.

Introduction

There are inherent structural differences between how Windows and Linux handle system users. The user schemas used in Active Directory and standard LDAPv3 directory services also differ significantly. When using an Active Directory identity provider with SSSD to manage system users, it is necessary to reconcile Active Directory-style users to the new SSSD users. There are two ways to achieve it:
  • ID mapping in SSSD can create a map between Active Directory security IDs (SIDs) and the generated UIDs on Linux. ID mapping is the simplest option for most environments because it requires no additional packages or configuration on Active Directory.
  • Unix services can manage POSIX attributes on Windows user and group entries. This requires more configuration and information within the Active Directory environment, but it provides more administrative control over the specific UID/GID values and other POSIX attributes.
Active Directory can replicate user entries and attributes from its local directory into a global catalog, which makes the information available to other domains within the forest. Performance-wise, the global catalog replication is the recommended way for SSSD to get information about users and groups, so that SSSD has access to all user data for all domains within the topology. As a result, SSSD can be used by applications which need to query the Active Directory global catalog for user or group information.
Before we start, here are few of the links which are helpful.
http://ift.tt/1NoOCGQ
http://ift.tt/1HygZPr
http://ift.tt/1Ie5q0a
http://ift.tt/1xmv63X

Background about the setup.

We have our setup as below.
  1. Two Active Directory servers: XYZDOMAIN and ABCDOMAIN.
  2. Two edge nodes running RHEL 6.6, which can communicate directly with both AD servers.
  3. Two slave nodes running behind a firewall, which can only communicate with the edge nodes.
We have to configure the slaves to send traffic to the edge nodes, which will forward it to the AD servers.

Preparation for the setup.

[Interface Forwarding] from eth1 to eth0 on EDGE node.

Add a route on all the slaves, which reside on a private network, so they can communicate with the external servers through an edge node using interface forwarding.
NOTE : The testing below was done on RHEL 6.6.
What we are trying to do:
  1. All the slave nodes will send their data to the edge node on a private interface.
  2. The edge node will take the data arriving on the private interface and forward it over the external interface.
NOTE: below, "slaves" refers to all the nodes which communicate through the edge node; in this case the edge node acts like a router.
Slave ifconfig
Slaves run only on the private network.
  1. 192.168.0.8 aka eth0, the private interface.
Here is the ifconfig.
[root@slave-node ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 
          inet addr:192.168.0.8  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::21d:d8ff:feb7:1efe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:131581 errors:0 dropped:0 overruns:0 frame:0
          TX packets:148636 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:11583580 (11.0 MiB)  TX bytes:35866144 (34.2 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:245626 errors:0 dropped:0 overruns:0 frame:0
          TX packets:245626 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:286415155 (273.1 MiB)  TX bytes:286415155 (273.1 MiB)
Edge node ifconfig
  1. 172.14.14.214 aka eth0, the external interface.
  2. 192.168.0.11 aka eth1, the private interface.
Here is the ifconfig.
[root@edge-node ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 
          inet addr:172.14.14.214  Bcast:172.14.14.255  Mask:255.255.255.0
          inet6 addr: fe80::21d:d8ff:feb7:1f7b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:908442 errors:0 dropped:0 overruns:0 frame:0
          TX packets:235173 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:77363514 (73.7 MiB)  TX bytes:33167098 (31.6 MiB)

eth1      Link encap:Ethernet  HWaddr 
          inet addr:192.168.0.11  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::21d:d8ff:feb7:1f7a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:210510 errors:0 dropped:0 overruns:0 frame:0
          TX packets:177170 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:61583138 (58.7 MiB)  TX bytes:16125613 (15.3 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:13799253 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13799253 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:27734863794 (25.8 GiB)  TX bytes:27734863794 (25.8 GiB)

[root@edge-node ~]#

Configuration.

  1. Create the forwarder on the edge node.
  2. Create the route on all the slaves.
  3. Update /etc/resolv.conf on slave nodes.
1. Create the forwarder on the edge node.
  1. If you haven’t already enabled forwarding in the kernel, do so.
  2. Open /etc/sysctl.conf and uncomment net.ipv4.ip_forward = 1
  3. Then execute $ sudo sysctl -p
  4. Add the following rules to iptables
Commands.
[root@edge-node ~]# iptables -t nat -A POSTROUTING --out-interface eth0 -j MASQUERADE  
[root@edge-node ~]# iptables -A FORWARD --in-interface eth1 -j ACCEPT
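These iptables rules are not persistent across reboots by default. On RHEL 6 they can typically be saved to /etc/sysconfig/iptables with:
[root@edge-node ~]# service iptables save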
2. Create the route on all the slaves.
Here is the command to add the route on the slaves.
[root@slave-node ~]# route add -net 172.0.0.0 netmask 255.0.0.0 gw 192.168.0.11 eth0
We are telling all traffic destined for 172.x.x.x to use 192.168.0.11 as the gateway, which is the private interface on the edge node.
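This route is also lost on reboot. On RHEL 6, one common way to persist it (a sketch; adjust the interface name if yours differs) is an interface route file, /etc/sysconfig/network-scripts/route-eth0, containing:
172.0.0.0/8 via 192.168.0.11 dev eth0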
3. Update /etc/resolv.conf on slave nodes.
Then we update the /etc/resolv.conf file with the direct IPs of the external DNS servers (172.14.14.174 and 172.14.14.141), as the slave node should now be able to communicate with the external servers.
; generated by /sbin/dhclient-script
nameserver 172.14.14.174 ; IP for the DNS server, this happens to be the xyzserver.
nameserver 172.14.14.141 ; IP for the DNS server, this happens to be the abcserver.
Testing if ping works.
[root@slave-server ~]# ping xyzserver.xyzdomain.com
PING xyzserver.xyzdomain.com (172.14.14.174) 56(84) bytes of data.
64 bytes from 172.14.14.174: icmp_seq=1 ttl=127 time=0.866 ms
64 bytes from 172.14.14.174: icmp_seq=2 ttl=127 time=1.09 ms
64 bytes from 172.14.14.174: icmp_seq=3 ttl=127 time=1.12 ms
64 bytes from 172.14.14.174: icmp_seq=4 ttl=127 time=0.933 ms
^C
--- xyzserver.xyzdomain.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 7042ms
rtt min/avg/max/mdev = 0.866/1.004/1.122/0.112 ms
[root@slave-server ~]#


[root@slave-server ~]# ping abcserver.abcdomain.com
PING abcserver.abcdomain.com (172.14.14.141) 56(84) bytes of data.
64 bytes from 172.14.14.141: icmp_seq=1 ttl=127 time=0.866 ms
64 bytes from 172.14.14.141: icmp_seq=2 ttl=127 time=1.09 ms
64 bytes from 172.14.14.141: icmp_seq=3 ttl=127 time=1.12 ms
64 bytes from 172.14.14.141: icmp_seq=4 ttl=127 time=0.933 ms
^C
--- abcserver.abcdomain.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 7042ms
rtt min/avg/max/mdev = 0.866/1.004/1.122/0.112 ms
[root@slave-server ~]#

Preparation for SSSD.

Prerequisite installations.
yum install sssd sssd-client krb5-workstation samba openldap-clients openssl authconfig

Create a bind user on both domains ABCDOMAIN and XYZDOMAIN.

Open Active Directory and create a user called xyzdomainuser in XYZDOMAIN, and abcdomainuser in ABCDOMAIN.
We will use these users to bind to the domains via ldap_default_bind_dn; we will get to this later on.

Setting krb5 configuration.

Set up the Kerberos client configuration (typically /etc/krb5.conf) to communicate with AD using Kerberos.
[libdefaults]
default_realm = XYZDOMAIN.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
[realms]
XYZDOMAIN.COM = {
kdc = xyzserver.xyzdomain.com
admin_server = xyzserver.xyzdomain.com
}

ABCDOMAIN.COM = {
kdc = abcserver.abcdomain.com
admin_server = abcserver.abcdomain.com
}

[domain_realm]
abcdomain.com = ABCDOMAIN.COM
.abcdomain.com = .ABCDOMAIN.COM
xyzdomain.com = XYZDOMAIN.COM
.xyzdomain.com = .XYZDOMAIN.COM

[logging]
kdc = FILE:/var/krb5/log/krb5kdc.log
admin_server = FILE:/var/krb5/log/kadmin.log
default = FILE:/var/krb5/log/krb5lib.log

Testing krb5 setup.

Once we have the configuration, we will use kinit to test it (test both users).
[root@slave-server ~]# kinit xyzdomainuser@XYZDOMAIN.COM
Password for xyzdomainuser@XYZDOMAIN.COM:
[root@slave-server ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: xyzdomainuser@XYZDOMAIN.COM

Valid starting     Expires            Service principal
09/12/15 08:37:56  09/12/15 18:38:03  krbtgt/XYZDOMAIN.COM@XYZDOMAIN.COM
        renew until 09/19/15 08:37:56
[root@slave-server ~]# klist -e
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: xyzdomainuser@XYZDOMAIN.COM

Valid starting     Expires            Service principal
09/12/15 08:37:56  09/12/15 18:38:03  krbtgt/XYZDOMAIN.COM@XYZDOMAIN.COM
        renew until 09/19/15 08:37:56, Etype (skey, tkt): arcfour-hmac, aes256-cts-hmac-sha1-96
Now we are able to reach the domain controller and get a TGT as well, so we are ready for the next steps.

Testing ldapsearch from the Linux server.

This step is to make sure that our Active Directory is accessible and that we are able to search users and groups from the Linux nodes. Go to the Linux machine and execute the commands below.
 ldapsearch -v -x -H ldap://xyzserver.xyzdomain.com/ -D "cn=xyzdomainuser,cn=Users,dc=xyzdomain,dc=com" -W -b "cn=xyzuser2,ou=cmlab,dc=xyzdomain,dc=com"
ldapsearch -v -x -H ldap://abcserver.abcdomain.com/ -D "cn=abcdomainuser,cn=Users,dc=abcdomain,dc=com" -W -b "cn=xyzuser2,ou=cmlab,dc=abcdomain,dc=com"
Here are some more details about the options above.
More details here : http://ift.tt/1Q9oHCk
-v      Run in verbose mode, with many diagnostics written to standard output.
-x      Use simple authentication instead of SASL.
-H ldapuri
        Specify URI(s) referring to the ldap server(s).
-D binddn
        Use the Distinguished Name binddn to bind to the LDAP directory.
-W      Prompt for simple authentication.  This is used instead of specifying the password on the command line.
-b searchbase
        Use searchbase as the starting point for the search  instead  of the default.
Above we are trying to search for information about xyzuser2 using the xyzdomainuser bind user. When you execute the command above, you need to enter the password for xyzdomainuser. This assumes the xyzuser2 user is present in the domain.

Creating SSSD Configuration.

Finally we are ready to configure SSSD. Below is the SSSD configuration (sssd.conf) to connect to xyzdomain.com and abcdomain.com.
If we want to connect to multiple AD servers, we need to add a separate [domain/...] section, such as [domain/abcdomain.com], for each of them.
[sssd]
config_file_version = 2
debug_level = 0
domains = xyzdomain.com, abcdomain.com
services = nss, pam

[nss]
filter_groups = root
filter_users = root
reconnection_retries = 3
entry_cache_timeout = 3
entry_cache_nowait_percentage = 75
debug_level = 8
account_cache_expiration = 1

[pam]
reconnection_retries = 3

[domain/xyzdomain.com]
debug_level = 8
id_provider = ldap
auth_provider = ldap
chpass_provider = krb5
access_provider = simple
cache_credentials = false
min_id = 1000
ad_server = xyzserver.xyzdomain.com
ldap_uri = ldap://xyzserver.xyzdomain.com:389
ldap_schema = ad
krb5_realm = XYZDOMAIN.COM
ldap_id_mapping = true
cache_credentials = false
entry_cache_timeout = 3
ldap_referrals = false
ldap_default_bind_dn = CN=xyzdomainuser,CN=Users,DC=xyzdomain,DC=com
ldap_default_authtok_type = password
ldap_default_authtok = Welcome@123
fallback_homedir = /home/%u
ldap_user_home_directory = unixHomeDirectory

###################################################
# Update below with another AD server as required #
###################################################

[domain/abcdomain.com]
debug_level = 8
id_provider = ldap
auth_provider = ldap
chpass_provider = krb5
access_provider = simple
cache_credentials = false
min_id = 1000
ad_server = abcserver.abcdomain.com
ldap_uri = ldap://abcserver.abcdomain.com:389
ldap_schema = ad
krb5_realm = ABCDOMAIN.COM
ldap_id_mapping = true
cache_credentials = false
entry_cache_timeout = 3
ldap_referrals = false
ldap_default_bind_dn = CN=abcdomainuser,CN=Users,DC=abcdomain,DC=com
ldap_default_authtok_type = password
ldap_default_authtok = Welcome@123
fallback_homedir = /home/%u
ldap_user_home_directory = unixHomeDirectory
Install oddjob-mkhomedir to auto-create the home directory whenever a user logs in.
yum install oddjob-mkhomedir    
Enable sssd, localauth and update the configuration.
authconfig --enablesssd --enablesssdauth --enablelocauthorize --update    
NOTE : Check sssd.conf again; sometimes authconfig will insert a default domain.
You can remove it and make the sssd.conf file similar to what we have above.
Start sssd services.
service sssd start
service oddjobd start
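If you change the domain sections later, SSSD's on-disk cache can hold stale entries. A common (if blunt) way to clear it, not needed on a first install, is:
service sssd stop
rm -f /var/lib/sss/db/*
service sssd start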
Testing our setup.
Checking our user, who is present in the Active Directory XYZDOMAIN.
[root@slave-server ~]# id xyzdomainuser
uid=62601149(xyzdomainuser) gid=62600513(Domain Users) groups=62600513(Domain Users),62601134(supergroup),62601133(hdfs)
[root@slave-server ~]# su xyzdomainuser
[xyzdomainuser@slave-server root]$ cd ~
[xyzdomainuser@slave-server ~]$ pwd
/home/xyzdomainuser
Next we try to login from remote.
[xyzdomainuser@slave-server ~]$ exit
exit
[root@slave-server ~]# ssh xyzdomainuser@192.168.0.9
xyzdomainuser@192.168.0.9's password:
Last login: Sat Sep 12 07:46:15 2015 from slave-server.xyzdomain.com
[xyzdomainuser@slave-server ~]$ pwd
/home/xyzdomainuser
[xyzdomainuser@slave-server ~]$ id
uid=62601149(xyzdomainuser) gid=62600513(Domain Users) groups=62600513(Domain Users),62601133(hdfs),62601134(supergroup)
[xyzdomainuser@slave-server ~]$
We are able to log in, and /home/xyzdomainuser is auto-created when the user logs in.
Now checking users for ABCDOMAIN.
[root@slave-server ~]# id abcdomainuser
uid=1916401111(abcdomainuser) gid=1916400513 groups=1916400513,1916401114(supergroup-test),1916401113(hadoop-test),1916401112,1916401112
[root@slave-server ~]# id abcdomainuser
uid=1916401111(abcdomainuser) gid=1916400513 groups=1916400513,1916401114(supergroup-test),1916401113(hadoop-test),1916401112,1916401112
[root@slave-server ~]# su abcdomainuser
sh-4.1$ pwd
/root
sh-4.1$ cd ~
sh-4.1$ pwd
/home/abcdomainuser
sh-4.1$ id
uid=1916401111(abcdomainuser) gid=1916400513 groups=1916400513,1916401112,1916401113(hadoop-test),1916401114(supergroup-test)
sh-4.1$ exit
exit
We are done.



No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]

September 12, 2015

Mechanism level: Failed to find any Kerberos tgt

Most of the information is on the Cloudera website.
You might want to check the site first if you see anything similar.
http://ift.tt/1VTvsff
http://ift.tt/1F3XGj7
http://ift.tt/1VTvsfh
http://ift.tt/1F3XGj9
Since none of them fit our issue, we had to slog it out.
We have two domain forests in our environment, ABC and XYZ.
We were not able to authenticate normal users from either of the domains.
We get an error when we try to execute hadoop fs -ls /, even after successfully getting a TGT from Active Directory.
  1. We have added the Trusted Kerberos Realms (ABC.MYDOMAIN.COM and XYZ.MYDOMAIN.COM) in Cloudera Manager and restarted the cluster.
  2. When we use the keytab (auto-generated by Cloudera Manager) we are able to execute hadoop fs -ls /.
Here is how it works with the hdfs keytab.
[root@my-edge-server ~]# su - hdfs 
[hdfs@my-edge-server ~]$ kinit -kt hdfs.keytab hdfs/my-edge-server.subdomain.in.mydomain.com@XYZ.MYDOMAIN.COM 
[hdfs@my-edge-server ~]$ klist -e 
Ticket cache: FILE:/tmp/krb5cc_496 
Default principal: hdfs/my-edge-server.subdomain.in.mydomain.com@XYZ.MYDOMAIN.COM 

Valid starting Expires Service principal 
09/11/15 10:44:31 09/11/15 20:44:31 krbtgt/XYZ.MYDOMAIN.COM@XYZ.MYDOMAIN.COM 
renew until 09/18/15 10:44:31, Etype (skey, tkt): arcfour-hmac, aes256-cts-hmac-sha1-96 
[hdfs@my-edge-server ~]$ hadoop fs -ls / 
Found 6 items 
drwxr-xr-x - hdfs supergroup 0 2015-05-29 15:32 /benchmarks 
drwxr-xr-x - hbase hbase 0 2015-09-11 09:11 /hbase 
drwxrwxr-x - solr solr 0 2015-05-29 11:49 /solr 
drwxrwxrwx - hdfs supergroup 0 2015-09-10 10:29 /tmp 
drwxr-xr-x - hdfs supergroup 0 2015-05-29 16:22 /use 
drwxrwxr-x - hdfs supergroup 0 2015-09-10 11:36 /user 
[hdfs@my-edge-server ~]$ 
Here is the complete error for a user in ABC.MYDOMAIN.COM; we get a similar error from the XYZ domain as well.
[root@my-edge-server ~]# kinit ahmed-user@ABC.MYDOMAIN.COM 
Password for ahmed-user@ABC.MYDOMAIN.COM: 
[root@my-edge-server ~]# klist -e 
Ticket cache: FILE:/tmp/krb5cc_0 
Default principal: ahmed-user@ABC.MYDOMAIN.COM 

Valid starting Expires Service principal 
09/11/15 10:31:16 09/11/15 20:31:22 krbtgt/ABC.MYDOMAIN.COM@ABC.MYDOMAIN.COM 
renew until 09/18/15 10:31:16, Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96 
Before you execute the command below, set HADOOP_OPTS to get more verbose debugging output.
[root@my-edge-server ~]# export HADOOP_OPTS="-Dsun.security.krb5.debug=true"
Then we execute the command.
[root@my-edge-server ~]# hadoop fs -ls / 
Java config name: null 
Native config name: /etc/krb5.conf 
Loaded from native config 
KinitOptions cache name is /tmp/krb5cc_0 
DEBUG CCacheInputStream client principal is ahmed-user@ABC.MYDOMAIN.COM 
DEBUG CCacheInputStream server principal is krbtgt/ABC.MYDOMAIN.COM@ABC.MYDOMAIN.COM 
DEBUG CCacheInputStream key type: 18 
DEBUG CCacheInputStream auth time: Fri Sep 11 10:31:22 BST 2015 
DEBUG CCacheInputStream start time: Fri Sep 11 10:31:16 BST 2015 
DEBUG CCacheInputStream end time: Fri Sep 11 20:31:22 BST 2015 
DEBUG CCacheInputStream renew_till time: Fri Sep 18 10:31:16 BST 2015 
 CCacheInputStream: readFlags() FORWARDABLE; RENEWABLE; INITIAL; PRE_AUTH; 
 unsupported key type found the default TGT: 18 
15/09/11 10:31:39 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 
15/09/11 10:31:39 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 
15/09/11 10:31:39 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 
15/09/11 10:31:39 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 
15/09/11 10:31:39 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 
15/09/11 10:31:39 WARN security.UserGroupInformation: PriviledgedActionException as:root (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 
15/09/11 10:31:39 INFO retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over http://ift.tt/1VTvtjg after 1 fail over attempts. Trying to fail over immediately. 
java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "http://ift.tt/1F3XGjb"; destination host is: "master-node.subdomain.in.mydomain.com":8020; 
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) 
at org.apache.hadoop.ipc.Client.call(Client.java:1472) 
at org.apache.hadoop.ipc.Client.call(Client.java:1399) 
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) 
at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source) 
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752) 
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:606) 
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source) 
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1982) 
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1128) 
at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1124) 
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1124) 
at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57) 
at org.apache.hadoop.fs.Globber.glob(Globber.java:265) 
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1625) 
at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326) 
at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:224) 
at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:207) 
at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190) 
at org.apache.hadoop.fs.shell.Command.run(Command.java:154) 
at org.apache.hadoop.fs.FsShell.run(FsShell.java:287) 
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) 
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) 
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340) 
Caused by: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
Solution:
[ahmed-user@my-edge-server ~]$ kinit ahmed-user@ABC.MYDOMAIN.COM
Password for ahmed-user@ABC.MYDOMAIN.COM:
[ahmed-user@my-edge-server ~]$ klist -e
Ticket cache: FILE:/tmp/krb5cc_1001
Default principal: ahmed-user@ABC.MYDOMAIN.COM

Valid starting     Expires            Service principal
09/11/15 11:38:46  09/11/15 21:38:54  krbtgt/ABC.MYDOMAIN.COM@ABC.MYDOMAIN.COM
        renew until 09/18/15 11:38:46, Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96
However, the cluster was expecting the session key (skey) type to be arcfour-hmac.
So, using ktutil, we created a keytab with an arcfour-hmac (RC4-HMAC) entry, and then it started working.
[ahmed-user@my-edge-server ~]$ ktutil
ktutil:  addent -password -p ahmed-user@ABC.MYDOMAIN.COM -k 1 -e RC4-HMAC
Password for ahmed-user@ABC.MYDOMAIN.COM:
ktutil:  wkt ahmed-user_new.keytab
ktutil:  quit
[ahmed-user@my-edge-server ~]$  
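Before using the new keytab, it is worth confirming that the RC4 entry was actually written; klist can read keytab entries directly (the exact output layout varies slightly between krb5 versions).
[ahmed-user@my-edge-server ~]$ klist -kte ahmed-user_new.keytab
The listed entry should show ahmed-user@ABC.MYDOMAIN.COM with an arcfour-hmac etype.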
[ahmed-user@my-edge-server ~]$ kinit -kt ahmed-user_new.keytab ahmed-user@ABC.MYDOMAIN.COM
[ahmed-user@my-edge-server ~]$ klist -e
Ticket cache: FILE:/tmp/krb5cc_1001
Default principal: ahmed-user@ABC.MYDOMAIN.COM

Valid starting     Expires            Service principal
09/11/15 11:45:29  09/11/15 21:45:30  krbtgt/ABC.MYDOMAIN.COM@ABC.MYDOMAIN.COM
        renew until 09/18/15 11:45:29, Etype (skey, tkt): arcfour-hmac, aes256-cts-hmac-sha1-96
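As an alternative to generating a keytab, the same effect can usually be had by asking kinit for an RC4 session key through /etc/krb5.conf. This is a hedged sketch using standard MIT krb5 options; the enctype order should follow your site's security policy, and note that the ticket itself stays AES-encrypted, only the session key type changes.
[libdefaults]
    # Request arcfour-hmac first so the TGT session key is usable by the Hadoop client JVM
    default_tkt_enctypes = arcfour-hmac aes256-cts-hmac-sha1-96
    default_tgs_enctypes = arcfour-hmac aes256-cts-hmac-sha1-96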
We had already created a home directory for ahmed-user using the hdfs superuser (a sketch of those commands follows the listing below).
[ahmed-user@my-edge-server ~]$ hadoop fs -ls /
Found 6 items
drwxr-xr-x   - hdfs  supergroup          0 2015-05-29 15:32 /benchmarks
drwxr-xr-x   - hbase hbase               0 2015-09-11 09:11 /hbase
drwxrwxr-x   - solr  solr                0 2015-05-29 11:49 /solr
drwxrwxrwx   - hdfs  supergroup          0 2015-09-10 10:29 /tmp
drwxr-xr-x   - hdfs  supergroup          0 2015-05-29 16:22 /use
drwxrwxr-x   - hdfs  supergroup          0 2015-09-10 11:36 /user
[ahmed-user@my-edge-server ~]$ hadoop fs -mkdir /user/ahmed-user/test_directory
[ahmed-user@my-edge-server ~]$ hadoop fs -ls /user/ahmed-user
Found 2 items
drwx------   - ahmed-user ahmed-user          0 2015-09-11 11:17 /user/ahmed-user/.staging
drwxr-xr-x   - ahmed-user ahmed-user          0 2015-09-11 11:45 /user/ahmed-user/test_directory
[ahmed-user@my-edge-server ~]$
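For completeness, here is a minimal sketch of how the hdfs superuser would have created that home directory beforehand; it assumes the hdfs principal already holds a valid Kerberos ticket on the edge node.
[hdfs@my-edge-server ~]$ hadoop fs -mkdir -p /user/ahmed-user
[hdfs@my-edge-server ~]$ hadoop fs -chown ahmed-user:ahmed-user /user/ahmed-user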


[SOLVED] `ansible` on RHEL 6.6 – dependency failure with `python-jinja2` error.

September 1, 2015 Leave a comment

Installing ansible on RHEL 6.6.

Download the EPEL release RPM and install it.
[root@server-cloudera-manager ~]# wget http://ift.tt/15BjGfv
[root@server-cloudera-manager ~]# rpm -ivh epel-release-6-8.noarch.rpm
warning: epel-release-6-8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ########################################### [100%]
   1:epel-release           /etc/yum.repos.d/epel.repo
########################################### [100%]
IMPORTANT: As of RHEL 6.6, python-jinja2 has moved from EPEL to the Red Hat optional repository, so we need to enable the optional repo in Red Hat.
Here is the error we get before enabling the optional RPMs repo.
[root@server ~]# yum install ansible
Loaded plugins: product-id, rhnplugin, security, subscription-manager
epel/metalink                                                                                                                                |  12 kB     00:00
epel                                                                                                                                         | 4.3 kB     00:00
epel/primary_db                                                                                                                              | 5.7 MB     00:43
rhel-6-server-rpms                                                                                                                           | 3.7 kB     00:00
rhel-6-server-rpms/primary_db                                                                                                                |  35 MB     01:29
rhel-server-dts-6-rpms                                                                                                                       | 2.9 kB     00:00
rhel-server-dts2-6-rpms                                                                                                                      | 2.9 kB     00:00
Resolving Dependencies
--> Running transaction check
---> Package ansible.noarch 0:1.9.2-1.el6 will be installed
--> Processing Dependency: python-simplejson for package: ansible-1.9.2-1.el6.noarch
--> Processing Dependency: python-keyczar for package: ansible-1.9.2-1.el6.noarch
--> Processing Dependency: python-jinja2 for package: ansible-1.9.2-1.el6.noarch
--> Processing Dependency: python-httplib2 for package: ansible-1.9.2-1.el6.noarch
--> Processing Dependency: python-crypto2.6 for package: ansible-1.9.2-1.el6.noarch
--> Processing Dependency: PyYAML for package: ansible-1.9.2-1.el6.noarch
--> Running transaction check
---> Package PyYAML.x86_64 0:3.10-3.1.el6 will be installed
--> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-3.1.el6.x86_64
---> Package ansible.noarch 0:1.9.2-1.el6 will be installed
--> Processing Dependency: python-jinja2 for package: ansible-1.9.2-1.el6.noarch
---> Package python-crypto2.6.x86_64 0:2.6.1-2.el6 will be installed
---> Package python-httplib2.noarch 0:0.7.7-1.el6 will be installed
---> Package python-keyczar.noarch 0:0.71c-1.el6 will be installed
--> Processing Dependency: python-pyasn1 for package: python-keyczar-0.71c-1.el6.noarch
---> Package python-simplejson.x86_64 0:2.0.9-3.1.el6 will be installed
--> Running transaction check
---> Package ansible.noarch 0:1.9.2-1.el6 will be installed
--> Processing Dependency: python-jinja2 for package: ansible-1.9.2-1.el6.noarch
---> Package libyaml.x86_64 0:0.1.3-4.el6_6 will be installed
---> Package python-pyasn1.noarch 0:0.0.12a-1.el6 will be installed
--> Finished Dependency Resolution
Error: Package: ansible-1.9.2-1.el6.noarch (epel)
           Requires: python-jinja2
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
[root@server ~]#
The output above is the more verbose log from the failed installation attempt; it shows that python-jinja2 cannot be resolved from the currently enabled repositories.
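Before enabling anything, you can confirm that the optional channel really does carry python-jinja2; a quick check (the repo ID depends on your subscription) is:
[root@server-cloudera-manager ~]# yum --enablerepo=rhel-6-server-optional-rpms list python-jinja2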
Enable optional repo.
[root@server-cloudera-manager yum.repos.d]# subscription-manager repos --enable=rhel-6-server-optional-rpms
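To double-check that the repo is now active before retrying the install, list the enabled repos:
[root@server-cloudera-manager ~]# yum repolist enabled | grep optional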
Install ansible.
[root@server-cloudera-manager ~]# yum install ansible

===========================================================================
 Package           Arch   Version        Repository                   Size
===========================================================================
Installing:
 ansible           noarch 1.9.2-1.el6    epel                        1.7 M
Installing for dependencies:
 PyYAML            x86_64 3.10-3.1.el6   rhel-6-server-rpms          157 k
 libyaml           x86_64 0.1.3-4.el6_6  rhel-6-server-rpms           52 k
 python-babel      noarch 0.9.4-5.1.el6  rhel-6-server-rpms          1.4 M
 python-crypto2.6  x86_64 2.6.1-2.el6    epel                        513 k
 python-httplib2   noarch 0.7.7-1.el6    epel                         70 k
 python-jinja2     x86_64 2.2.1-2.el6_5  rhel-6-server-optional-rpms 466 k
 python-keyczar    noarch 0.71c-1.el6    epel                        219 k
 python-pyasn1     noarch 0.0.12a-1.el6  rhel-6-server-rpms           70 k
 python-simplejson x86_64 2.0.9-3.1.el6  rhel-6-server-rpms          126 k

Transaction Summary
===========================================================================
Install      10 Package(s)


[root@server-cloudera-manager ~]# ansible --version
ansible 1.9.2
  configured module search path = None
[root@server-cloudera-manager ~]#
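As a quick smoke test, run the ping module against the local machine; depending on your inventory setup you may need to add localhost to /etc/ansible/hosts first.
[root@server-cloudera-manager ~]# ansible localhost -c local -m ping
A pong response confirms that ansible and its Python dependencies are working.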
