Archive

Archive for October, 2015

Access Filter in SSSD `ldap_access_filter` [SSSD Access denied / Permission denied]

October 24, 2015

Access Filter Setup with SSSD

ldap_access_filter (string)
If using access_provider = ldap, this option is mandatory. It specifies an LDAP search filter criteria that must be met for the user to be granted access on this host. If access_provider = ldap and this option is not set, all users will be denied access. Use access_provider = permit to change this default behaviour.
Example:
access_provider = ldap 
ldap_access_filter = memberOf=cn=allowed_user_groups,ou=Groups,dc=example,dc=com 

Prerequisites

yum install sssd

Single LDAP Group

Under domain/default in /etc/sssd/sssd.conf add:
access_provider = ldap
ldap_access_filter = memberOf=cn=Group Name,ou=Groups,dc=example,dc=com

Multiple LDAP Groups

Under domain/default in /etc/sssd/sssd.conf add:
access_provider = ldap
ldap_access_filter = (|(memberOf=cn=System Administrators,ou=Groups,dc=example,dc=com)(memberOf=cn=Database Users,ou=Groups,dc=example,dc=com))
ldap_access_filter accepts standard LDAP filter syntax. Restart SSSD for the change to take effect:
service sssd restart
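Since the filter is just a string, a quick way to avoid bracket mistakes is to generate the OR-of-groups form with a small shell snippet. This is only a sketch; the two group DNs are the example values from this post, so substitute your own:

```shell
#!/bin/sh
# Build a multi-group ldap_access_filter by OR-ing memberOf clauses.
# The group DNs below are examples; replace them with your own.
filter=""
for dn in \
    "cn=System Administrators,ou=Groups,dc=example,dc=com" \
    "cn=Database Users,ou=Groups,dc=example,dc=com"
do
  filter="${filter}(memberOf=${dn})"
done
# Echo the finished line, ready to paste into sssd.conf.
echo "ldap_access_filter = (|${filter})"
```

Paste the echoed line into the `[domain/...]` section of /etc/sssd/sssd.conf.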
Here is the complete SSSD configuration file.
[root@waepprrkb002 ~]# cat /etc/sssd/sssd.conf

###################################################
# SSSD Configuration
###################################################

[sssd]
config_file_version = 2
debug_level = 0
domains = abc.domain.com
services = nss, pam

[nss]
filter_groups = root
filter_users = root
reconnection_retries = 3
entry_cache_timeout = 3
entry_cache_nowait_percentage = 75
debug_level = 8
account_cache_expiration = 1

[pam]
reconnection_retries = 3

###################################################
# `abc.domain.com` Configuration
###################################################

[domain/abc.domain.com]
debug_level = 8
id_provider = ldap
auth_provider = ldap
chpass_provider = krb5

# Setting up filters.
access_provider = ldap
ldap_access_filter = (&(objectClass=user)(memberof:1.2.840.113556.1.4.1941:=cn=lab_server_access_group,ou=servergroups,ou=accessmgmnt,dc=abc,dc=domain,dc=com))

# Use below string for multiple groups.
#ldap_access_filter = (|(&(objectClass=user)(memberof:1.2.840.113556.1.4.1941:=cn=lab_server_access_group,ou=servergroups,ou=accessmgmnt,dc=abc,dc=domain,dc=com))(&(objectClass=user)(memberof:1.2.840.113556.1.4.1941:=cn=lab_server_access_group_another,ou=servergroups,ou=accessmgmnt,dc=abc,dc=domain,dc=com)))

cache_credentials = true
min_id = 1000
ad_server = ad-server-a.abc.domain.com,ad-server-b.abc.domain.com

# If we are using ldaps then we need to use the certificate to connect, or else SSSD will not work.
ldap_uri = ldaps://ldap.abc.domain.com
ldap_tls_cacert = /etc/openldap/cacerts/ssl-cacerts.cer

ldap_schema = ad
krb5_realm = ABC.DOMAIN.COM
krb5_server =  ad-server.abc.domain.com,ad-server-b.abc.domain.com
ldap_id_mapping = true
entry_cache_timeout = 3
ldap_referrals = false
ldap_default_bind_dn = cn=svc-lab-abcldapbind,ou=serviceaccounts,ou=accounts,ou=accessmgmnt,dc=abc,dc=domain,dc=com
ldap_default_authtok_type = password
ldap_default_authtok = 8168127634812638126381
fallback_homedir = /home/%u
ldap_user_home_directory = unixHomeDirectory    
ignore_group_members = true
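One pitfall worth noting about the file above: SSSD refuses to start if sssd.conf is not owned by root with mode 0600. A small sketch to enforce that; the path can be overridden via SSSD_CONF, and the snippet falls back to a scratch file when run outside a real system:

```shell
# SSSD will not start unless sssd.conf is root-owned with mode 0600.
conf="${SSSD_CONF:-/etc/sssd/sssd.conf}"
[ -f "$conf" ] || conf=$(mktemp)              # scratch file if run outside a real system
chown root:root "$conf" 2>/dev/null || true   # chown needs root on a real host
chmod 0600 "$conf"
stat -c '%a' "$conf"                          # prints 600
```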



from Blogger http://ift.tt/1k0SUcB
via IFTTT

Categories: Others

Update Cloudera Manager

October 22, 2015

Update Cloudera Manager to a specific version [5.4.5]

Take database backup.

If we are running a dedicated database, which is recommended in a production setup, then we need to take a backup of the DB as a precaution.
The steps below assume we are using a dedicated DB.
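As a sketch of that precaution, assuming a MySQL-backed Cloudera Manager with the conventional scm database name (the name and credentials here are assumptions; adjust for your setup), the backup command would look like the following. The snippet only prints the command rather than running it:

```shell
# Hypothetical example: back up the Cloudera Manager database before upgrading.
# "scm" is an assumed database name; run the printed command on the DB host.
backup="scm_backup_$(date +%Y%m%d).sql"
echo "mysqldump -u root -p --databases scm > $backup"
```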

Stop Cloudera Manager Server, Database, and Agent

Shut down the Cloudera Manager server.
sudo service cloudera-scm-server stop
If Cloudera Manager is also running an Agent service, stop it too.
sudo service cloudera-scm-agent stop
NOTE: If we are using a standalone/embedded database then we need to stop that as well.
sudo service cloudera-scm-server-db stop

Update the repository to get the latest RPM.

Create a file cloudera-manager.repo with the contents below.
[cloudera-manager]
# Packages for Cloudera Manager, Version 5.4.5, on RedHat or CentOS 6 x86_64
name=Cloudera Manager
baseurl=http://ift.tt/1MVnMac
gpgkey=http://ift.tt/1W6Ut4M 
gpgcheck=1
Copy cloudera-manager.repo to /etc/yum.repos.d/.
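The two steps above (create the file, copy it into place) can be done in one shot with a heredoc. A sketch: repo_dir defaults to the real yum directory but falls back to a scratch directory when not run as root, so it can be tried safely:

```shell
# Write cloudera-manager.repo directly into the yum repo directory.
repo_dir="${REPO_DIR:-/etc/yum.repos.d}"
[ -w "$repo_dir" ] || repo_dir=$(mktemp -d)   # scratch dir when not run as root
cat > "$repo_dir/cloudera-manager.repo" <<'EOF'
[cloudera-manager]
# Packages for Cloudera Manager, Version 5.4.5, on RedHat or CentOS 6 x86_64
name=Cloudera Manager
baseurl=http://ift.tt/1MVnMac
gpgkey=http://ift.tt/1W6Ut4M
gpgcheck=1
EOF
grep '^baseurl=' "$repo_dir/cloudera-manager.repo"   # confirm the file landed
```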
$ sudo yum clean all
$ sudo yum upgrade cloudera-manager-server cloudera-manager-daemons cloudera-manager-agent
Check that the installation went well.
$ rpm -qa 'cloudera-manager-*'
cloudera-manager-repository-5.0-1.noarch
cloudera-manager-server-5.4.7-0.cm544.p0.932.el6.x86_64
cloudera-manager-server-db-2-5.4.7-0.cm544.p0.932.el6.x86_64
cloudera-manager-agent-5.4.7-0.cm544.p0.932.el6.x86_64
cloudera-manager-daemons-5.4.7-0.cm544.p0.932.el6.x86_64

Start the Cloudera Manager Server (Packages)

sudo service cloudera-scm-server start
sudo service cloudera-scm-agent start

Upgrade CDH version from 5.4.2 to 5.4.5

Manually adding a parcel to the repository

  • Download and copy the parcel to /opt/cloudera/parcel-repo on the Cloudera Manager host.
wget http://ift.tt/1MVnOyO
  • Download the sha file and copy to /opt/cloudera/parcel-repo directory.
wget http://ift.tt/1W6Ut4Q
  • Change the ownership of both files above to cloudera-scm.
  • Rename the checksum file from CDH-5.4.5-1.cdh5.4.5.p0.7-el6.parcel.sha1 to CDH-5.4.5-1.cdh5.4.5.p0.7-el6.parcel.sha.
  • Now check the Cloudera Manager portal for the new parcel.
  • Here is the link to the parcel page: http://cloudera-manager-server:7180/cmf/parcel/status
  • It will take 5-10 minutes for the list of parcels to update, depending on the refresh frequency.
  • Once it is done we will see a Distribute button.
  • Click Distribute and then Activate.
  • Next, go to Home, click on the cluster, select Upgrade Cluster, and follow the instructions.
  • Restart the cluster (do a rolling restart).
  • We are done.
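The rename step above matters because Cloudera Manager looks for a .sha file, not .sha1. A scratch-directory sketch of just that rename (on a real host this happens inside /opt/cloudera/parcel-repo):

```shell
# Demo of the checksum rename in a scratch directory.
cd "$(mktemp -d)"
f="CDH-5.4.5-1.cdh5.4.5.p0.7-el6.parcel.sha1"
touch "$f"                    # stand-in for the downloaded checksum file
mv "$f" "${f%.sha1}.sha"      # drop the trailing "1"
ls CDH-5.4.5-1.cdh5.4.5.p0.7-el6.parcel.sha
```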


from Blogger http://ift.tt/1MVnMae
via IFTTT

Categories: Others

Getting started with Hive with Kerberos. [FAILED: SemanticException No valid privileges]

October 21, 2015

Getting started with Hive with Kerberos.

Grant Permissions to user groups to access hive.

Log in to the server and create a role. If these roles are not created, we get privilege errors like the one below.
Error: Error while compiling statement: FAILED: SemanticException No valid privileges
 Required privileges for this query: Server=server1->action=*; (state=42000,code=40000)
Here is how to grant permissions to the hive group so that it can access the database.
[sas@waepprrkb004 root]$  beeline -u "jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM"

0: jdbc:http://ift.tt/1MSE2Zv; create role admin;
1 row affected
0: jdbc:http://ift.tt/1MSE2Zv; show roles;
+--------+--+
|  role  |
+--------+--+
| admin  |
+--------+--+

0: jdbc:http://ift.tt/1MSE2Zv; GRANT ROLE admin TO GROUP hive;
0: jdbc:http://ift.tt/1MSE2Zv; GRANT ALL ON DATABASE default TO ROLE admin;
Here is the complete output after the permissions are granted.
[sas@waepprrkb004 root]$  beeline -u "jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM"
scan complete in 1ms
Connecting to jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM
Connected to: Apache Hive (version 1.1.0-cdh5.4.5)
Driver: Hive JDBC (version 1.1.0-cdh5.4.5)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.1.0-cdh5.4.5 by Apache Hive
0: jdbc:http://ift.tt/1MSE2Zv; show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
1 row selected (0.134 seconds)
0: jdbc:http://ift.tt/1MSE2Zv; show roles;
+--------+--+
|  role  |
+--------+--+
| admin  |
+--------+--+
1 row selected (0.063 seconds)
0: jdbc:http://ift.tt/1MSE2Zv; use default;
No rows affected (0.05 seconds)
0: jdbc:http://ift.tt/1MSE2Zv; show tables;
+------------+--+
|  tab_name  |
+------------+--+
| sample_07  |
| sample_08  |
+------------+--+
2 rows selected (0.08 seconds)
0: jdbc:http://ift.tt/1MSE2Zv;

Adding New Roles and Groups in Hive.

Before we start accessing the data, we need to give users permission.
  1. Create a role.
  2. Assign the role some permissions (SELECT [read-only], INSERT [read-write], ALL [all]).
  3. Add a group to the newly created role.

Creating a new role.

First we create the roles, to which we will later assign permissions.
0: jdbc:http://ift.tt/1MSE2Zv; CREATE ROLE admin;
0: jdbc:http://ift.tt/1MSE2Zv; CREATE ROLE readonly;

Assign role permissions.

We are assigning the readonly role SELECT permission on a database (default).
0: jdbc:http://ift.tt/1MSE2Zv; GRANT SELECT ON DATABASE default TO ROLE readonly;

Adding a new Active Directory group to the role.

Now we assign the readonly role to the group server-user-access-group. Here server-user-access-group is an Active Directory group synced to Linux using SSSD.
0: jdbc:http://ift.tt/1MSE2Zv; GRANT ROLE readonly TO GROUP server-user-access-group;
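The create/grant/grant sequence above can also be batched into a SQL file and fed to beeline with -f instead of typing each statement at the prompt. A sketch, using the role, group, and JDBC URL from this post's examples:

```shell
# Collect the role-setup statements in one file and run them through beeline.
# Role and group names are the examples used in this post.
sql=$(mktemp)
cat > "$sql" <<'EOF'
CREATE ROLE readonly;
GRANT SELECT ON DATABASE default TO ROLE readonly;
GRANT ROLE readonly TO GROUP server-user-access-group;
EOF
# On a real cluster:
# beeline -u "jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM" -f "$sql"
cat "$sql"
```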

Adding new external tables

Grant permission on the HDFS URI to access the Avro data.
grant all on uri 'hdfs://nameservice1/data/location/hdfs/some_data/ahmed/' to role admin;     
Creating external table.
create external table ahmed-data partitioned by (partition_val1 String,partition_val2 String) stored as avro location '/data/location/hdfs/some_data/ahmed/' TBLPROPERTIES ('avro.schema.url'='data/location/hdfs/some_data_schema/v1/ahmed_schema.avsc');
Alter the table to add a partition.
alter table ahmed-data add partition (partition_val1="2015", partition_val2="07");

Testing Setup.

Logging in as the hive user to give permission to a specific group.
[root@edge-gw-server keytabs]# beeline -u "jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM"
scan complete in 2ms
Connecting to jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM
Connected to: Apache Hive (version 1.1.0-cdh5.4.5)
Driver: Hive JDBC (version 1.1.0-cdh5.4.5)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.1.0-cdh5.4.5 by Apache Hive
0: jdbc:http://ift.tt/1MSE2Zv;
0: jdbc:http://ift.tt/1MSE2Zv; create role admin;
0: jdbc:http://ift.tt/1MSE2Zv; create role readonly;
Checking for roles in hive.
0: jdbc:http://ift.tt/1MSE2Zv; show roles;
+-----------+--+
|   role    |
+-----------+--+
| admin     |
| readonly  |
+-----------+--+
2 rows selected (0.349 seconds)
Grant the readonly role to the server-user-access-group group. (We have not yet given the readonly role any permissions; we do that in the next step.)
0: jdbc:http://ift.tt/1MSE2Zv; grant role readonly to group server-user-access-group;
No rows affected (0.04 seconds)
Assigning the readonly role SELECT permission on the default database.
0: jdbc:http://ift.tt/1MSE2Zv; grant select on database default to role readonly;
No rows affected (0.049 seconds)
0: jdbc:http://ift.tt/1MSE2Zv; show role grant group server-user-access-group;
+-----------+---------------+-------------+----------+--+
|   role    | grant_option  | grant_time  | grantor  |
+-----------+---------------+-------------+----------+--+
| readonly  | false         | NULL        | --       |
+-----------+---------------+-------------+----------+--+
1 row selected (0.062 seconds)
0: jdbc:http://ift.tt/1MSE2Zv; !quit
Closing: 0: jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM
Now checking permissions for the user ahmed-user; since the user is not part of server-user-access-group, he will still not be able to access the data.
[root@edge-gw-server keytabs]# su ahmed-user
[ahmed-user@edge-gw-server keytabs]$ cd ~
[ahmed-user@edge-gw-server ~]$ kinit -kt ahmed-user_new.keytab ahmed-user@ABC.DOMAIN.COM
[ahmed-user@edge-gw-server ~]$ beeline -u "jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM"
scan complete in 2ms
Connecting to jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM
Connected to: Apache Hive (version 1.1.0-cdh5.4.5)
Driver: Hive JDBC (version 1.1.0-cdh5.4.5)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.1.0-cdh5.4.5 by Apache Hive
0: jdbc:http://ift.tt/1MSE2Zv; show tables;
+-----------+--+
| tab_name  |
+-----------+--+
+-----------+--+
No rows selected (1.382 seconds)
0: jdbc:http://ift.tt/1MSE2Zv; !quit
Closing: 0: jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM
[ahmed-user@edge-gw-server ~]$ exit
exit
Now logging in again as the hive superuser, to grant permission to a group that ahmed-user is part of.
[root@edge-gw-server keytabs]# kinit -kt hive.keytab hive/hive-server.server.com@XYZ.DOMAIN.COM                                             
[root@edge-gw-server keytabs]# beeline -u "jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM"
scan complete in 2ms
Connecting to jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM
Connected to: Apache Hive (version 1.1.0-cdh5.4.5)
Driver: Hive JDBC (version 1.1.0-cdh5.4.5)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.1.0-cdh5.4.5 by Apache Hive
0: jdbc:http://ift.tt/1MSE2Zv; grant role readonly to group ahmed-user-access-group;
No rows affected (0.332 seconds)
0: jdbc:http://ift.tt/1MSE2Zv; !quit
Closing: 0: jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM
Log in as the ahmed-user user again. Now we can see the tables.
[root@edge-gw-server keytabs]# su ahmed-user
[ahmed-user@edge-gw-server keytabs]$ cd ~
[ahmed-user@edge-gw-server ~]$ kinit -kt ahmed-user_new.keytab ahmed-user@ABC.DOMAIN.COM                                                                                 
[ahmed-user@edge-gw-server ~]$ beeline -u "jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM"
scan complete in 2ms
Connecting to jdbc:hive2://hive-server.server.com:10000/default;principal=hive/hive-server.server.com@XYZ.DOMAIN.COM
Connected to: Apache Hive (version 1.1.0-cdh5.4.5)
Driver: Hive JDBC (version 1.1.0-cdh5.4.5)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.1.0-cdh5.4.5 by Apache Hive
0: jdbc:http://ift.tt/1MSE2Zv; show tables;
+------------+--+
|  tab_name  |
+------------+--+
| ahmed-data |
| sample_07  |
| sample_08  |
+------------+--+
8 rows selected (0.183 seconds)
0: jdbc:http://ift.tt/1MSE2Zv; 
Test Complete.

from Blogger http://ift.tt/1jS3nHe
via IFTTT

Categories: Others

NFS mount on Centos/RHEL 6.6

October 5, 2015

Setup and Configure NFS Mounts on Linux Server

For this setup we will need two servers: a master and a slave.
nfsmaster.server.com    192.168.33.135    # Hosts the NFS shared drive.
nfsslave.server.com     192.168.33.132    # Client that uses the master's shared drive.
NOTE: You can add the hostnames to the /etc/hosts file and use the hostnames in the configuration rather than IP addresses.
Steps to set up NFS:
  1. Install the NFS packages (nfs-utils, nfs-utils-lib) and rpcbind on both master and slave servers.
  2. Configure NFS on the master server.
  3. Configure mount points on the slave server.
  4. Mount NFS on the slave server.

Installing NFS Server and NFS Slave

We need to install NFS packages using yum.
[root@nfsmaster ~]# yum install nfs-utils nfs-utils-lib
[root@nfsmaster ~]# yum install rpcbind 
Make sure to install rpcbind and start it first, then start the services on both machines.
[root@nfsmaster ~]# /etc/init.d/rpcbind start
NOTE: Start rpcbind first else you will get the below error.
[root@nfsmaster /]# service nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: Cannot register service: RPC: Unable to receive; errno = Connection refused
rpc.rquotad: unable to register (RQUOTAPROG, RQUOTAVERS, udp).
[FAILED]
Starting NFS daemon: [FAILED]
After starting rpcbind, check rpcinfo; it should look similar to the output below.
[root@nfsmaster /]# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  33472  status
    100024    1   tcp  40795  status
Now we start the NFS service.
[root@nfsmaster ~]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
Setting services to start on reboot.
[root@nfsmaster ~]# chkconfig rpcbind on
[root@nfsmaster ~]# chkconfig nfs on
NOTE: Start the services on both machines.

Setting Up the NFS Server

First we will be configuring the NFS server.

Configure Export directory

To share a directory over NFS, we need to make an entry in the /etc/exports configuration file. Let's create a directory to be shared across the network; we are creating a common shared directory for all login users.
[root@nfsmaster ~]# mkdir /export/home
Now we need to make an entry in /etc/exports and restart the services to make our directory shared in the network.
[root@nfsmaster ~]# vi /etc/exports
# directory  Slave-IP  (permissions on the directory)
/export/home 192.168.33.132(rw,sync,no_root_squash)
/export/home 192.168.33.135(rw,sync,no_root_squash)
/export/home 192.168.33.131(rw,sync,no_root_squash)
In the above example, the directory /export/home is shared with the slave at 192.168.33.132 with read and write (rw) privileges; you can also use the slave's hostname in place of the IP.
In the above /etc/exports there are 3 clients which are allowed to access the shared mount.
1. 192.168.33.132 - nfsslave     (slave)
2. 192.168.33.135 - nfsmaster     (master)
3. 192.168.33.131 - nfsslave2     (another slave)
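Since each client gets an identical entry, the /etc/exports lines above can be generated in a loop. A sketch using the same share and client IPs as in this example:

```shell
# Emit one /etc/exports line per allowed client.
share=/export/home
for client in 192.168.33.132 192.168.33.135 192.168.33.131; do
  echo "$share ${client}(rw,sync,no_root_squash)"
done
```

Redirect the output into /etc/exports (or paste it in), then restart NFS as shown below.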
Now restart NFS service.
[root@nfsmaster home]# service nfs restart
Shutting down NFS daemon:                                  [  OK  ]
Shutting down NFS mountd:                                  [  OK  ]
Shutting down NFS quotas:                                  [  OK  ]
Shutting down NFS services:                                [  OK  ]
Shutting down RPC idmapd:                                  [  OK  ]
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]

NFS Options

Some other options we can use in the /etc/exports file are as follows.
  1. ro: Provides read-only access to the shared files, i.e. the slave will only be able to read.
  2. rw: Allows the slave server both read and write access within the shared directory.
  3. sync: The server replies to requests only after the changes have been committed to stable storage.
  4. no_subtree_check: Disables subtree checking. When a shared directory is a subdirectory of a larger file system, NFS scans every directory above it in order to verify its permissions and details. Disabling the subtree check may increase the reliability of NFS, but reduces security.
  5. no_root_squash: Allows root on the client to access the shared directory as root.
For more options for /etc/exports, read the man page (man exports).

Configuring and Setting Up the NFS Slave.

After the server is configured we need to mount the shared drive on the slave.
  1. Create a directory on the slave to mount the shared drive.
  2. Mount the drive at the newly created mount point.
  3. Update /etc/fstab to make the changes permanent.
First let's check the mount points on the server.
[root@nfsslave ~]# showmount -e 192.168.33.135

Export list for 192.168.33.135:
/export/home 192.168.33.132,192.168.33.135,192.168.33.131
The command above shows that the directory /export/home is available at 192.168.33.135 and ready to be shared with the .132, .131 and .135 servers.

Create Mount Point and Mount Shared NFS Directory.

Creating a new mount point.
[root@nfsslave ~]# mkdir -p /nfs_client_mount
To mount the shared NFS directory we can use the following mount command.
[root@nfsslave ~]# mount -t nfs 192.168.33.135:/export/home /nfs_client_mount
The above command will mount that shared directory in /nfs_client_mount on the slave server.
Update /etc/fstab file to automount on reboot.
[root@nfsslave ~]# vi /etc/fstab
Add the following new line as shown below.
192.168.33.135:/export/home /nfs_client_mount  nfs defaults 0 0
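If the fstab line is added from a script, the append can be made idempotent so repeated runs do not create duplicate entries. The sketch below works on a scratch copy; point fstab at /etc/fstab on the real client:

```shell
# Append the NFS mount line only if it is not already present.
fstab=$(mktemp)    # use fstab=/etc/fstab on the real slave
line='192.168.33.135:/export/home /nfs_client_mount nfs defaults 0 0'
grep -qxF "$line" "$fstab" || echo "$line" >> "$fstab"
grep -qxF "$line" "$fstab" || echo "$line" >> "$fstab"   # second run is a no-op
grep -cxF "$line" "$fstab"                               # prints 1
```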
Checking mount points.
[root@nfsslave home]# mount -a
[root@nfsslave home]# mount
/dev/sda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
vmware-vmblock on /var/run/vmblock-fuse type fuse.vmware-vmblock (rw,nosuid,nodev,default_permissions,allow_other)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
192.168.33.135:/export/home on /nfs_client_mount type nfs (rw,vers=4,addr=192.168.33.135,clientaddr=192.168.33.132)
Testing: create a TEST directory on the slave; it should appear in /export/home on the master, which hosts the shared NFS drive.
[root@nfsslave ~]# df -h -F nfs
Filesystem            Size  Used Avail Use% Mounted on 
192.168.33.135:/export/home
                       18G  5.1G   12G  31% /nfs_client_mount
[root@nfsslave ~]#
[root@nfsslave ~]# cd /nfs_client_mount/
[root@nfsslave home]# mkdir TEST
Check directory creation on Master.
[root@nfsmaster ~]# cd /export/home/
[root@nfsmaster home]# ls
TEST
We can see the newly created directory. We are good. :-)

Removing the NFS Mount

If we need to unmount the NFS directory on the slave:
[root@nfsslave ~]# umount /nfs_client_mount
Check if the NFS is unmounted.
[root@nfsslave ~]# df -h -F nfs
df: no file systems processed
No NFS available.

Important commands for NFS

Some more important commands for NFS.
showmount -e : Shows the available shares on your local machine
showmount -e <server> : Lists the available shares on the remote server
showmount -d : Lists the directories currently mounted by clients
exportfs -v : Displays a list of shared files and export options on the server
exportfs -a : Exports all shares listed in /etc/exports, or a given name
exportfs -u : Unexports all shares listed in /etc/exports, or a given name
exportfs -r : Refreshes the server's list after modifying /etc/exports

from Blogger http://ift.tt/1VAFEHB
via IFTTT

Categories: Others