Setting Up Zookeeper Services Using Cloudera API [Part 2]

March 23, 2017
This is the second post in the series. In the earlier post, Setting Up Cloudera Manager Services Using Cloudera API [Part 1], we installed the Cloudera Management Services. Now we will install the Zookeeper service on the cluster.
But first we need to do a couple of things before we install Zookeeper.
  1. Create a cluster.
  2. Download, Distribute, Activate CDH Parcels.
  3. Install the Zookeeper service on our cluster.

Creating a Cluster

First we create a cluster if it does not exist. Config details are below.
def init_cluster(cm_api_handle):
    try:
        cluster = cm_api_handle.get_cluster(config['cluster']['name'])
        return cluster
    except ApiException:
        cluster = cm_api_handle.create_cluster(config['cluster']['name'],
                                                config['cluster']['version'],
                                                config['cluster']['fullVersion'])
If it is the first time, then we need to add all the hosts to the cluster (including the admin node); this information comes from the configuration yaml file.
# Basic cluster information
cluster:
  name: AutomatedHadoopCluster
  version: CDH5
  fullVersion: 5.8.3
  hosts:
     - mycmhost.ahmed.com
In the above yaml snippet, hosts will have all the hosts in the cluster; since we are testing on a VM with just one host, we have only that host added there.

    cluster_hosts = []
    #
    # Pick up all the hosts already registered with the cluster.
    #
    for host_in_cluster in cluster.list_hosts():
        cluster_hosts.append(host_in_cluster)

    hosts = []

    #
    # Create a host list from the yaml configuration, making sure we don't add duplicates.
    #
    for host in config['cluster']['hosts']:
        if host not in cluster_hosts:
            hosts.append(host)

    #
    # Adding all hosts to the cluster.
    #
    cluster.add_hosts(hosts)
    return cluster
Once we have the cluster ready, we can install the parcels on it.

Download, Distribute, Activate CDH Parcels

For parcels there are three major methods which initiate the parcel download, distribution, and activation.
  1. parcel.start_download()
  2. parcel.start_distribution()
  3. parcel.activate()
These are the three methods which do all the work. When the start_download command is running we need to keep track of the progress; this is done using the get_parcel method.
get_parcel documentation: http://ift.tt/2nD3tbG
#
# When we execute a parcel download/distribute/activate command
# we can track the progress using the `get_parcel` method.
# This returns a JSON response described here: http://ift.tt/2nD3tbG
# We can check progress by checking `stage`
#
#   AVAILABLE_REMOTELY: Stable stage - the parcel can be downloaded to the server.
#   DOWNLOADING: Transient stage - the parcel is in the process of being downloaded to the server.
#   DOWNLOADED: Stable stage - the parcel is downloaded and ready to be distributed or removed from the server.
#   DISTRIBUTING: Transient stage - the parcel is being sent to all the hosts in the cluster.
#   DISTRIBUTED: Stable stage - the parcel is on all the hosts in the cluster. The parcel can now be activated, or removed from all the hosts.
#   UNDISTRIBUTING: Transient stage - the parcel is being removed from all the hosts in the cluster.
#   ACTIVATING: Transient stage - the parcel is being activated on the hosts in the cluster. New in API v7
#   ACTIVATED: Steady stage - the parcel is set to active on every host in the cluster. If desired, a parcel can be deactivated from this stage.
#
We track the progress of each stage using the snippet below.
def check_current_state(cluster, product, version, states):
    logging.info("Checking Status for Parcel.")
    while True:
        parcel = cluster.get_parcel(product, version)
        logging.info("Parcel Current Stage: " + str(parcel.stage))
        if parcel.stage in states:
            break
        if parcel.state.errors:
            raise Exception(str(parcel.state.errors))

        logging.info("%s progress: %s / %s" % (states[0], parcel.state.progress,
                                               parcel.state.totalProgress))
        time.sleep(15)
The rest of the parcel execution is straightforward; a minimal sketch of the full flow is below.
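The following sketch is not the exact code from the code file linked below: the helper name activate_parcel and the target-state lists are illustrative, and product/version would be 'CDH' and the full parcel version taken from the yaml config. It simply chains the three parcel calls with check_current_state().
def activate_parcel(cluster, product, version):
    #
    # Download the parcel to the Cloudera Manager server and wait for DOWNLOADED.
    #
    cluster.get_parcel(product, version).start_download()
    check_current_state(cluster, product, version, ['DOWNLOADED'])

    #
    # Distribute the parcel to all hosts in the cluster and wait for DISTRIBUTED.
    #
    cluster.get_parcel(product, version).start_distribution()
    check_current_state(cluster, product, version, ['DISTRIBUTED'])

    #
    # Activate the parcel on all hosts and wait for ACTIVATED.
    #
    cluster.get_parcel(product, version).activate()
    check_current_state(cluster, product, version, ['ACTIVATED'])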

Install Zookeeper Service.

Zookeeper service is installed in stages.
  1. Create a service (if it does not exist).
  2. Update the configuration for our newly created Zookeeper service.
  3. Create the Zookeeper role (SERVER) on the cluster.
  4. Initialize Zookeeper using the init_zookeeper() command.
  5. Start the Zookeeper service.

Create a service.

This is simple: we create the service if it does not exist.
def zk_create_service(cluster):
    try:
        zk_service = cluster.get_service('ZOOKEEPER')
        logging.debug("Service ZOOKEEPER already present on the cluster")
    except ApiException:
        #
        # Create the service if this is the first time.
        #
        zk_service = cluster.create_service('ZOOKEEPER', 'ZOOKEEPER')
        logging.info("Created New Service: ZOOKEEPER")

    return zk_service

Update configuration for Zookeeper.

This information is picked up from the configuration yaml file.
  ZOOKEEPER:
    config:
      zookeeper_datadir_autocreate: true
Code snippet.
def zk_update_configuration(zk_service):
    """
        Update service configurations
    :return:
    """
    zk_service.update_config(config['services']['ZOOKEEPER']['config'])
    logging.info("Service Configuration Updated.")

Create Zookeeper role (SERVER) on the Cluster.

This is the important part: here we create the Zookeeper SERVER roles (each instance of Zookeeper on each host is referred to as a SERVER role).
Role names should be unique, so we combine the service name, the role type, and the zookeeper_id to form a unique identifier. Example: here it would be ZOOKEEPER-SERVER-1.
Here is the code snippet; the role-creation call itself is sketched right after it.
zookeeper_host_id = 0

    #
    # Configure all the host.
    #
    for zookeeper_host in config['services']['ZOOKEEPER']['roles'][0]['hosts']:
        zookeeper_host_id += 1
        zookeeper_role_config = config['services']['ZOOKEEPER']['roles'][0]['config']
        role_name = "{0}-{1}-{2}".format('ZOOKEEPER', 'SERVER', zookeeper_host_id)
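The snippet above stops just before the actual role creation. A minimal sketch of that call, assuming the cm_api create_role(role_name, role_type, host) signature with zookeeper_host as the target host (the exact call is in the code file linked below):
        #
        # Create the SERVER role for this host so we can update its configuration next.
        #
        role = zk_service.create_role(role_name, 'SERVER', zookeeper_host)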
Next, once we have created the role, we update its configuration, which we get from the yaml file.
Yaml file.
roles:
  - group: SERVER
    hosts:
      - mycmhost.ahmed.com
    config:
      quorumPort: 2888
      electionPort: 3888
      dataLogDir: /var/lib/zookeeper
      dataDir: /var/lib/zookeeper
Code snippet.
#
# Configuring Zookeeper server ID
#
zookeeper_role_config['serverId'] = zookeeper_host_id

#
# Update configuration
#
role.update_config(zookeeper_role_config)
Now we are almost set to start Zookeeper; we just need to initialize the service first.

Initialize Zookeeper using the init_zookeeper() command.

Before we start Zookeeper we need to initialize the service, which creates an ID for each Zookeeper instance running on each server (it creates the myid file for each instance under /var/lib/zookeeper). This is how each Zookeeper server identifies itself uniquely.
If there were 3 Zookeeper servers (which is the recommended minimum) we would have something like below.
  1. Role ZOOKEEPER-SERVER-1, Server 1, Zookeeper ID myid 1
  2. Role ZOOKEEPER-SERVER-2, Server 2, Zookeeper ID myid 2
  3. Role ZOOKEEPER-SERVER-3, Server 3, Zookeeper ID myid 3
And so on.
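A minimal sketch of the initialization call, assuming init_zookeeper() returns an ApiCommand we can wait() on like the other service commands:
#
# Initialize Zookeeper (this creates the myid files) and wait for the command to finish.
#
cmd = zk_service.init_zookeeper()
if not cmd.wait(timeout=300).success:
    raise Exception("ZOOKEEPER INITIALIZATION FAILED.")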
Once we have the service initialized we are ready to start the service.

Start Zookeeper service

We do this using the zk_service.start() method. This method returns an ApiCommand, on which we can track progress and wait for the service to start using cmd.wait().success.
More details about the command API are in the ApiCommand documentation.
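A minimal sketch of the start call with that wait pattern (the exact error handling in the code file linked below may differ):
#
# Start the Zookeeper service and wait for the command to succeed.
#
cmd = zk_service.start()
if not cmd.wait().success:
    raise Exception("ZOOKEEPER SERVICE FAILED TO START.")
logging.info("ZOOKEEPER Service Started.")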
Our service should be up and running.

Yaml File

Code File.

Executing Code


Setting Up Cloudera Manager Services Using Cloudera API [Part 1]

March 22, 2017
The Cloudera API is a very convenient way to set up a cluster and do more.

Here are some of the cool things you can do with [Cloudera Manager via the API](http://ift.tt/2mmdiLd):

– Deploy an entire Hadoop cluster programmatically. Cloudera Manager supports HDFS, MapReduce, YARN, ZooKeeper, HBase, Hive, Oozie, Hue, Flume, Impala, Solr, Sqoop, Spark and Accumulo.
– Configure various Hadoop services and get config validation.
– Take admin actions on services and roles, such as start, stop, restart, failover, etc. Also available are the more advanced workflows, such as setting up high availability and decommissioning.
– Monitor your services and hosts, with intelligent service health checks and metrics.
– Monitor user jobs and other cluster activities.
– Retrieve timeseries metric data.
– Search for events in the Hadoop system.
– Administer Cloudera Manager itself.
– Download the entire deployment description of your Hadoop cluster in a json file.

Additionally, with the appropriate licenses, the API lets you:

– Perform rolling restart and rolling upgrade.
– Audit user activities and accesses in Hadoop.
– Perform backup and cross data-center replication for HDFS and Hive.
– Retrieve per-user HDFS usage report and per-user MapReduce resource usage report.

### Prerequisites [IMPORTANT]

**Assuming initial setup is complete. [Cloudera Manager Setup Using Chef](http://ift.tt/2njFAVW)**

1. MySQL is installed and all databases are created.
2. Cloudera Manager server is installed and configured.
3. Cloudera Manager repo is set up (optional if you are using an internal repo; otherwise update the yaml).

### Before we start.

**NOTE: We will be using MySQL for the database and will assume that the database is already installed and all the necessary databases are created with users.**

1. We need to understand the services and setup configuration for each service.
2. Setup database configuration.

### Operations we will be performing in this post.

– Install Hosts.
– Setup `Cloudera Manager Services` using API.
    – Activity Monitor
    – Alert Publisher
    – Event Server
    – Host Monitor
    – Reports Manager
    – Service Monitor

### Steps for the current task.

1. Set License (or set trial for the setup).
2. Install Hosts.
3. Configure and set up all the services above.
4. Start services on Cloudera Manager.

### Activity Monitor

Cloudera Manager’s activity monitoring capability monitors the MapReduce, Pig, Hive, Oozie, and streaming jobs, Impala queries, and YARN applications running or that have run on your cluster. When the individual jobs are part of larger workflows (using Oozie, Hive, or Pig), these jobs are aggregated into MapReduce jobs that can be monitored as a whole, as well as by the component jobs. [Courtesy Cloudera Website](http://ift.tt/2o1mXmQ)

The following sections describe how to view and monitor activities that run on your cluster.

– [Monitoring MapReduce Jobs](http://ift.tt/2njHHJz)
– [Monitoring Impala Queries](http://ift.tt/2o1fuUY)
– [Monitoring YARN Applications](http://ift.tt/2njAUzB)
– [Monitoring Spark Applications](http://ift.tt/2o1wVod)

#### Configuration Setup Information for Activity Monitor.

Complete details about the configuration. [Cloudera Activity Monitor](http://ift.tt/2njAPvH)

``` json
{
  'config': {
    'firehose_database_host': 'server-admin-node.ahmedinc.com:3306',
    'firehose_database_password': 'amon_password',
    'firehose_database_user': 'amon',
    'firehose_database_type': 'mysql',
    'firehose_database_name': 'amon'
  },
  'hosts': [
    'server-admin-node.ahmedinc.com'
  ],
  'group': 'ACTIVITYMONITOR'
}
```

For the Activity Monitor we need to set the database configuration and can leave the other configuration at its defaults, which Cloudera Manager will take care of. Activity Monitor was earlier called `firehose`, so the configuration keys still have `firehose` in them.

– Group Name `ACTIVITYMONITOR` (this is the information which Cloudera Manager uses to know which service it needs to create)
– DB Host `admin-node`, i.e. the node which is hosting the `mysql` database.

### Reports Manager

The Reports page lets you create reports about the usage of HDFS in your cluster—data size and file count by user, group, or directory. It also lets you report on the MapReduce activity in your cluster, by user.

``` json
'config': {
    'headlamp_database_name': 'rman',
    'headlamp_database_user': 'rman',
    'headlamp_database_type': 'mysql',
    'headlamp_database_password': 'rman_password',
    'headlamp_database_host': 'server-admin-node.ahmedinc.com:3306'
},
'hosts': [
    'server-admin-node.ahmedinc.com'
],
'group': 'REPORTSMANAGER'
```

– Group Name `REPORTSMANAGER` (this is the information which Cloudera Manager uses to know which service it needs to create)

### Rest of the services: Alert Publisher, Event Server, Host Monitor, Service Monitor

– **Alert Publisher** can be used to send alert notifications by email or by SNMP trap to a trap receiver.
– **Event Server** aggregates relevant events and makes them available for alerting and for searching. This way, you have a view into the history of all relevant events that occur cluster-wide.
– **Host Monitoring** features let you manage and monitor the status of the hosts in your clusters.
– **Service Monitoring** feature monitors dozens of service health and performance metrics about the services and role instances running on your cluster:
    – Presents health and performance data in a variety of formats including interactive charts
    – Monitors metrics against configurable thresholds
    – Generates events related to system and service health and critical log entries and makes them available for searching and alerting
    – Maintains a complete record of service-related actions and configuration changes

Group Names: `ALERTPUBLISHER`, `EVENTSERVER`, `HOSTMONITOR`, `SERVICEMONITOR`

## Creating Script For Deploying Cloudera Management Services. 

Tasks we need to work on (a rough driver sketch follows this list).

1. Create a configuration JSON file. (We will be using the same JSON above)
2. Setup license for the cluster (we will be setting trial version)
3. Install Hosts.
4. Initialize Cluster for the First time.
5. Deploy Management Roles (Update service configuration and create roles for each service).
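
A rough sketch of how these tasks chain together in a driver script, using the function names from the snippets later in this post (the exact main() in the code file linked at the end may differ):

``` python
from cm_api.api_client import ApiResource

# Build the API handle from the 'cm' section of the configuration.
cm_api_handle = ApiResource(config['cm']['host'], config['cm']['port'],
                            config['cm']['username'], config['cm']['password'],
                            config['cm']['tls'], version=config['cm']['api-version'])
cloudera_manager = cm_api_handle.get_cloudera_manager()

enable_license_for_cm(cloudera_manager)       # Set the license (or begin a trial).
host_installation(cloudera_manager, config)   # Install the hosts.
# ... then deploy the management roles as shown in the snippets below.
```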

### Creating JSON file.

Host Installation

``` json
'cm_host_installation': {
    'host_cm_repo_gpg_key_custom_url': 'http://ift.tt/2o1gfxp',
    'host_java_install_strategy': 'AUTO',
    'host_cm_repo_url': 'http://ift.tt/2njHnug',
    'host_unlimited_jce_policy': True,
    'host_password': 'Bigdata@123',
    'host_username': 'cmadmin',
    'ssh_port': 22
}
```

Cloudera Manager Credentials and Remote Repo setup.

``` json
'cm': {
    'username': 'admin',
    'tls': False,
    'host': 'server-admin-node.ahmedinc.com',
    'api-version': 13,
    'remote_parcel_repo_urls': 'http://ift.tt/2o1Emvw',
    'password': 'admin',
    'port': 7180
}
```

Cluster Information.

``` json
'cluster': {
    'version': 'CDH5',
    'hosts': [
      'server-admin-node.ahmedinc.com',
      'server-edge-node.ahmedinc.com',
      'server-worker-node.ahmedinc.com'
    ],
    'name': 'AutomatedHadoopCluster',
    'fullVersion': '5.8.3'
}
```

Management Service.

``` json
'MGMT': {
      'roles': [
        {
          'config': {
            'firehose_database_host': 'server-admin-node.ahmedinc.com:3306',
            'firehose_database_password': 'amon_password',
            'firehose_database_user': 'amon',
            'firehose_database_type': 'mysql',
            'firehose_database_name': 'amon'
          },
          'hosts': [
            'server-admin-node.ahmedinc.com'
          ],
          'group': 'ACTIVITYMONITOR'
        },
        {
          'group': 'ALERTPUBLISHER',
          'hosts': [
            'server-admin-node.ahmedinc.com'
          ]
        },
        {
          'group': 'EVENTSERVER',
          'hosts': [
            'server-admin-node.ahmedinc.com'
          ]
        },
        {
          'group': 'HOSTMONITOR',
          'hosts': [
            'server-admin-node.ahmedinc.com'
          ]
        },
        {
          'group': 'SERVICEMONITOR',
          'hosts': [
            'server-admin-node.ahmedinc.com'
          ]
        },
        {
          'config': {
            'headlamp_database_name': 'rman',
            'headlamp_database_user': 'rman',
            'headlamp_database_type': 'mysql',
            'headlamp_database_password': 'rman_password',
            'headlamp_database_host': 'server-admin-node.ahmedinc.com:3306'
          },
          'hosts': [
            'server-admin-node.ahmedinc.com'
          ],
          'group': 'REPORTSMANAGER'
        }
      ]
    }
```

### License Update.

First we get the license for the cluster; if we receive an exception, then we set the trial version.

``` python
def enable_license_for_cm(cloudera_manager):
    try:
        # Check for the current license.
        cloudera_license = cloudera_manager.get_license()
    except ApiException:
        # If we receive an exception, then this is the first time we are setting the license.
        cloudera_manager.begin_trial()
```

### Installing Hosts.

Once we have the license set, we need to set up the hosts so that all the services can run. The `host_install` command will install the required components like the `agent`, `daemon`, `java` and `jce_policy` files. This will also configure the hosts to contact the admin node for heartbeats.

Below is a code snippet to install hosts.

``` python
def host_installation(cloudera_manager, config):
    """
        Host installation.
        http://ift.tt/2njKnXE
    """
    logging.info("Installing HOSTs.")
    cmd = cloudera_manager.host_install(config['cm_host_installation']['host_username'],
                                   config['cluster']['hosts'],
                                   ssh_port=config['cm_host_installation']['ssh_port'],
                                   password=config['cm_host_installation']['host_password'],
                                   parallel_install_count=10,
                                   cm_repo_url=config['cm_host_installation']['host_cm_repo_url'],
                                   gpg_key_custom_url=config['cm_host_installation']['host_cm_repo_gpg_key_custom_url'],
                                   java_install_strategy=config['cm_host_installation']['host_java_install_strategy'],
                                   unlimited_jce=config['cm_host_installation']['host_unlimited_jce_policy'])

    #
    # Wait for the command to complete.
    #
    if not cmd.wait().success:
        logging.info("Command `host_install` Failed. {0}".format(cmd.resultMessage))
        if (cmd.resultMessage is not None and
                    'There is already a pending command on this entity' in cmd.resultMessage):
            raise Exception("HOST INSTALLATION FAILED.")
```

### Deploy Management Services.

First we need to create the management service if it does not exist.

``` python
mgmt_service = cloudera_manager.create_mgmt_service(ApiServiceSetupInfo())
```
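
create_mgmt_service will raise an ApiException if the management service already exists; a minimal sketch of the usual guard, assuming the get_service() call on the Cloudera Manager handle, looks like this:

``` python
try:
    # Reuse the management service if it is already there.
    mgmt_service = cloudera_manager.get_service()
except ApiException:
    # First run: create the management service.
    mgmt_service = cloudera_manager.create_mgmt_service(ApiServiceSetupInfo())
```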

Adding all the management roles based on the role groups.

``` python
for role in config['services']['MGMT']['roles']:
    if not len(mgmt_service.get_roles_by_type(role['group'])) > 0:
        logging.info("Creating role for {0}".format(role['group']))
        mgmt_service.create_role('{0}-1'.format(role['group']), role['group'], role['hosts'][0])
```

Update configuration

``` python
for role in config['services']['MGMT']['roles']:
    role_group = mgmt_service.get_role_config_group('mgmt-{0}-BASE'.format(role['group']))
    logging.info(role_group)
    #
    # Update the group's configuration.
    # [http://ift.tt/2o1ijoO]
    #
    role_group.update_config(role.get('config', {}))
```

Now we start the service.

``` python
#
# Start mgmt services.
#
mgmt_service.start().wait()
```

### Yaml File

http://ift.tt/2njFBcs

### Code File

http://ift.tt/2o1nyVt

### Executing Code.

[Video – Setting Up Cloudera Manager Services Using Cloudera API](https://youtu.be/kH1DhfQuD5M)

### Useful Links.

– [Ansible Hadoop Playbook](http://ift.tt/2njI1rR)
– [Cloudera API Example](http://ift.tt/1ug59pN)
– [Cloudera API](http://ift.tt/2njLJBE)
– [Cloudera Epy Document](http://ift.tt/2n1tZsE)
– [Cloudera API Getting Started](http://ift.tt/2njEd9E)
– [Cloudera API Properties](http://ift.tt/2o1tpdF)
– [Cloudera Manager Server Properties](http://ift.tt/2njSLWV)
– [Cloudera Manager Service Properties](http://ift.tt/2njAPvH)


Getting Started with Cloudera API

March 21, 2017
These are the basic steps to get connected to Cloudera Manager.
Here are some of the cool things you can do with Cloudera Manager via the API:
  • Deploy an entire Hadoop cluster programmatically. Cloudera Manager supports HDFS, MapReduce, YARN, ZooKeeper, HBase, Hive, Oozie, Hue, Flume, Impala, Solr, Sqoop, Spark and Accumulo.
  • Configure various Hadoop services and get config validation.
  • Take admin actions on services and roles, such as start, stop, restart, failover, etc. Also available are the more advanced workflows, such as setting up high availability and decommissioning.
  • Monitor your services and hosts, with intelligent service health checks and metrics.
  • Monitor user jobs and other cluster activities.
  • Retrieve timeseries metric data.
  • Search for events in the Hadoop system.
  • Administer Cloudera Manager itself.
  • Download the entire deployment description of your Hadoop cluster in a json file.
    Additionally, with the appropriate licenses, the API lets you:
  • Perform rolling restart and rolling upgrade.
  • Audit user activities and accesses in Hadoop.
  • Perform backup and cross data-center replication for HDFS and Hive.
  • Retrieve per-user HDFS usage report and per-user MapReduce resource usage report.

API Installations.

Getting Connected To Cloudera Manager.

First we get an API handle to use to connect to the Cloudera Manager services and cluster services. config comes from a yaml file.
 @property
 def cm_api_handle(self):

     """
         This method is to create a handle to CM.
     :return: cm_api_handle
     """
     if self._cm_api_handle is None:
         self._cm_api_handle = ApiResource(self.config['cm']['host'],
                                           self.config['cm']['port'],
                                           self.config['cm']['username'],
                                           self.config['cm']['password'],
                                           self.config['cm']['tls'],
                                           version=self.config['cm']['api-version'])
     return self._cm_api_handle
A simple way to write it would be as below. (I am using version=13 here)
 cm_api_handle = ApiResource(cm_host, username="admin", password="admin", version=13)
Now we can use this handle to connect to CM or to the cluster. Let's look at what we can do once we have the cloudera_manager object.
We can make all of the method calls documented in the ClouderaManager (CM) class on this object.
 cloudera_manager = cm_api_handle.get_cloudera_manager()
 cloudera_manager.get_license()

 cm_api_response = cloudera_manager.get_service()
Here the API response cm_api_response would be an ApiService.
Similarly we get the services for the cluster, but as a list, since there are many services on the cluster.
 cloudera_cluster = cm_api_handle.get_cluster("CLUSTER_NAME")
 cluster_api_response = cloudera_cluster.get_all_services()
Again, each element of the response would be an ApiService.

Example Code to get started.

First lets create a yaml file.
 # Cloudera Manager config
 cm:
   host: 127.0.0.1
   port: 7180
   username: admin
   password: admin
   tls: false
   version: 13

 # Basic cluster information
 cluster:
   name: AutomatedHadoopCluster
   version: CDH5
   fullVersion: 5.8.3
   hosts:
      - 127.0.0.1
Next we create a script to process that data.
 import yaml
 import sys
 from cm_api.api_client import ApiResource, ApiException


 def fail(msg):
     print (msg)
     sys.exit(1)

 if __name__ == '__main__':

     try:
         with open('cloudera.yaml', 'r') as cluster_yaml:
             config = yaml.load(cluster_yaml)

         api_handle = ApiResource(config['cm']['host'],
                                  config['cm']['port'],
                                  config['cm']['username'],
                                  config['cm']['password'],
                                  config['cm']['tls'],
                                  version=config['cm']['version'])

         # Checking CM services
         cloudera_manager = api_handle.get_cloudera_manager()
         cm_api_response = cloudera_manager.get_service()

         print "\nCLOUDERA MANAGER SERVICES\n----------------------------"
         print "Complete ApiService: " + str(cm_api_response)
         print "Check URL for details : http://ift.tt/2n1dYCL"
         print "name: " + str(cm_api_response.name)
         print "type: " + str(cm_api_response.type)
         print "serviceUrl: " + str(cm_api_response.serviceUrl)
         print "roleInstancesUrl: " + str(cm_api_response.roleInstancesUrl)
         print "displayName: " + str(cm_api_response.displayName)

         # Checking Cluster services
         cm_cluster = api_handle.get_cluster(config['cluster']['name'])
         cluster_api_response = cm_cluster.get_all_services()
         print "\n\nCLUSTER SERVICES\n----------------------------"
         for api_service_list in cluster_api_response:
             print "Complete ApiService: " + str(api_service_list)
             print "Check URL for details : http://ift.tt/2n1dYCL"
             print "name: " + str(api_service_list.name)
             print "type: " + str(api_service_list.type)
             print "serviceUrl: " + str(api_service_list.serviceUrl)
             print "roleInstancesUrl: " + str(api_service_list.roleInstancesUrl)
             print "displayName: " + str(api_service_list.displayName)

     except IOError as e:
         fail("Error creating cluster {}".format(e))
Output
 CLOUDERA MANAGER SERVICES
 ----------------------------
 Complete ApiService: : mgmt (cluster: None)
 Check URL for details : http://ift.tt/2n1dYCL
 name: mgmt
 type: MGMT
 serviceUrl: http://ift.tt/2n1dfll:7180/cmf/serviceRedirect/mgmt
 roleInstancesUrl: http://ift.tt/2n1dfll:7180/cmf/serviceRedirect/mgmt/instances
 displayName: Cloudera Management Service


 CLUSTER SERVICES
 ----------------------------
 Complete ApiService: : ZOOKEEPER (cluster: AutomatedHadoopCluster)
 Check URL for details : http://ift.tt/2n1dYCL
 name: ZOOKEEPER
 type: ZOOKEEPER
 serviceUrl: http://ift.tt/2n1dfll:7180/cmf/serviceRedirect/ZOOKEEPER
 roleInstancesUrl: http://ift.tt/2n1dfll:7180/cmf/serviceRedirect/ZOOKEEPER/instances
 displayName: ZOOKEEPER
These are the basics; we will build on top of this in coming blog posts.


Basic Testing On Hadoop Environment [Cloudera]

March 20, 2017
These are a set of tests which we can run on a Hadoop environment; basic checks to make sure the environment is set up correctly.
NOTE: On a kerberized cluster we need to use a keytab to execute these commands.
Creating keytab.
 $ ktutil
 ktutil:  addent -password -p @ADDOMAIN.AHMEDINC.COM -k 1 -e RC4-HMAC
 Password for @ADDOMAIN.AHMEDINC.COM: ********
 ktutil:  wkt .keytab
 ktutil:  quit
 $ ls
 .keytab

HDFS Testing

Running pi
 hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 100 100000
Running TestDFSIO
 hadoop jar  /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient*tests*.jar
Command output.
 $ hadoop jar  /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient*tests*.jar
 Unknown program '/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar' chosen.
 Valid program names are:
   DFSCIOTest: Distributed i/o benchmark of libhdfs.
   DistributedFSCheck: Distributed checkup of the file system consistency.
   JHLogAnalyzer: Job History Log analyzer.
   MRReliabilityTest: A program that tests the reliability of the MR framework by injecting faults/failures
   SliveTest: HDFS Stress Test and Live Data Verification.
   TestDFSIO: Distributed i/o benchmark.
   fail: a job that always fails
   filebench: Benchmark SequenceFile(Input|Output)Format (block,record compressed and uncompressed), Text(Input|Output)Format (compressed and uncompressed)
   largesorter: Large-Sort tester
   loadgen: Generic map/reduce load generator
   mapredtest: A map/reduce test check.
   minicluster: Single process HDFS and MR cluster.
   mrbench: A map/reduce benchmark that can create many small jobs
   nnbench: A benchmark that stresses the namenode.
   sleep: A job that sleeps at each map and reduce task.
   testbigmapoutput: A map/reduce program that works on a very big non-splittable file and does identity map/reduce
   testfilesystem: A test for FileSystem read/write.
   testmapredsort: A map/reduce program that validates the map-reduce framework's sort.
   testsequencefile: A test for flat files of binary key value pairs.
   testsequencefileinputformat: A test for sequence file input format.
   testtextinputformat: A test for text input format.
   threadedmapbench: A map/reduce benchmark that compares the performance of maps with multiple spills over maps with 1 spill
Example execution.
 hadoop jar  /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient*tests*.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
 hadoop jar  /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient*tests*.jar TestDFSIO -read -nrFiles 10 -fileSize 1000
Running Terasort
First create the data using teragen.
 hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 1000000 /user/zahmed/terasort-input
Then execute terasort (mapreduce job) on the generated teragen data set.
 hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar terasort /user/zahmed/terasort-input /user/zahmed/terasort-output

YARN Testing

When the above jobs are running we can go to Cloudera Manager -> YARN -> Applications to check on the running applications.

Testing Hive from Hue

If using a kerberos environment, authenticate with your keytab (as created at the top of this post) before creating a table.
Creating a Database.
 create database TEST;
Creating a Table.
 use TEST;
 CREATE TABLE IF NOT EXISTS employee ( eid int, name String, salary String, destination String);
Insert into table.
 insert into table employee values (1,'zubair','13123123','eng')
 select * from employee where eid=1;
This should return inserted value.

Testing Impala from Hue

Invalidate metastore and check for hive database.
 invalidate metadata;
You should see the test database created earlier. Execute select query to verify.
 select * from employee where eid=1;

Testing Spark

Running a Pi Job. Logon to one of the Gateway nodes.
 spark-submit --class org.apache.spark.examples.SparkPi --deploy-mode cluster --master yarn /opt/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/spark/lib/spark-examples.jar 10

Testing and Grant Permission on Hbase

First pick the hbase keytab above and execute below command.
NOTE: If you are using a kerberos environment and want to give access to other users, you need to use the hbase keytab.
 $ hbase shell
 17/02/20 08:44:29 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
 HBase Shell; enter 'help' for list of supported commands.
 Type "exit" to leave the HBase Shell
 Version 1.2.0-cdh5.8.3, rUnknown, Wed Oct 12 20:32:08 PDT 2016
Creating emp table.
 hbase(main):001:0> create 'emp', 'personal data', 'professional data'
 0 row(s) in 2.5390 seconds

 => Hbase::Table - emp
 hbase(main):002:0> list
 TABLE
 emp
 1 row(s) in 0.0120 seconds

 => ["emp"]
 hbase(main):003:0> user_permission emp
 NameError: undefined local variable or method `emp' for #
Checking user permissions on the table; currently the hbase user is the owner.
 hbase(main):004:0> user_permission "emp"
 User                                        Namespace,Table,Family,Qualifier:Permission
  hbase                                      default,emp,,: [Permission: actions=READ,WRITE,EXEC,CREATE,ADMIN]
 1 row(s) in 0.3380 seconds
Adding permission to new user.
 hbase(main):005:0> grant "zahmed", "RWC", "emp"
 0 row(s) in 0.2320 seconds
Checking Permission.
 hbase(main):006:0> user_permission "emp"
 User                                        Namespace,Table,Family,Qualifier:Permission
  zahmed                                      default,emp,,: [Permission: actions=READ,WRITE,CREATE]
  hbase                                      default,emp,,: [Permission: actions=READ,WRITE,EXEC,CREATE,ADMIN]
 2 row(s) in 0.0510 seconds

 hbase(main):007:0>
Now log on to Hue to check that the new HBase table appears there.

Testing SQOOP

Create a mysql database and add table with data.
Creating database.
 mysql> create database employee;
 Query OK, 1 row affected (0.01 sec)
Creating Table.
 mysql> CREATE TABLE IF NOT EXISTS employees ( eid varchar(20), name varchar(25), salary varchar(20), destination varchar(15));
 Query OK, 0 rows affected (0.00 sec)

 mysql> show tables;
 +--------------------+
 | Tables_in_employee |
 +--------------------+
 | employees          |
 +--------------------+
 1 row in set (0.00 sec)


 mysql> describe employees;
 +-------------+-------------+------+-----+---------+-------+
 | Field       | Type        | Null | Key | Default | Extra |
 +-------------+-------------+------+-----+---------+-------+
 | eid         | varchar(20) | YES  |     | NULL    |       |
 | name        | varchar(25) | YES  |     | NULL    |       |
 | salary      | varchar(20) | YES  |     | NULL    |       |
 | destination | varchar(15) | YES  |     | NULL    |       |
 +-------------+-------------+------+-----+---------+-------+
 4 rows in set (0.00 sec)
Inserting data into the table.
 mysql> insert into employees values ("123EFD", "ZUBAIR AHMED", "1000", "ENGINEER");
 Query OK, 1 row affected (0.00 sec)
Checking table.
 mysql> select * from employees;
 +--------+--------------+--------+-------------+
 | eid    | name         | salary | destination |
 +--------+--------------+--------+-------------+
 | 123EFD | ZUBAIR AHMED | 1000   | ENGINEER    |
 +--------+--------------+--------+-------------+
 1 row in set (0.01 sec)

 mysql> insert into employees values ("123EFD123", "Z AHMED", "11000", "ENGINEER");
 Query OK, 1 row affected (0.00 sec)

 mysql> insert into employees values ("123123EFD123", "Z AHMD", "11000", "ENGINEER");
 Query OK, 1 row affected (0.00 sec)

 mysql> select * from employees;
 +--------------+--------------+--------+-------------+
 | eid          | name         | salary | destination |
 +--------------+--------------+--------+-------------+
 | 123EFD       | ZUBAIR AHMED | 1000   | ENGINEER    |
 | 123EFD123    | Z AHMED      | 11000  | ENGINEER    |
 | 123123EFD123 | Z AHMD       | 11000  | ENGINEER    |
 +--------------+--------------+--------+-------------+
 3 rows in set (0.00 sec)
Grant permission to a user which can access the database.
 mysql> grant all privileges on employee.* to emp@'%' identified by 'emp@123';
 Query OK, 0 rows affected (0.00 sec)
Once we have the database created, execute the command below.
 sqoop import --connect jdbc:mysql://atlbdl1drlha001.gpsbd.lab1.ahmedinc.com/employee --username emp --password emp@123 --query 'SELECT * from employees where $CONDITIONS' --split-by eid --target-dir /user/zahmed/sqoop_test
Command output.
 $ sqoop import --connect jdbc:mysql://atlbdl1drlha001.gpsbd.lab1.ahmedinc.com/employee --username emp --password emp@123 --query 'SELECT * from employees where $CONDITIONS' --split-by eid --target-dir /user/zahmed/sqoop_test
 Warning: /opt/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
 Please set $ACCUMULO_HOME to the root of your Accumulo installation.
 17/02/21 08:54:15 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.8.3
 17/02/21 08:54:15 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
 17/02/21 08:54:16 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
 17/02/21 08:54:16 INFO tool.CodeGenTool: Beginning code generation
 17/02/21 08:54:16 INFO manager.SqlManager: Executing SQL statement: SELECT * from employees where  (1 = 0)
 17/02/21 08:54:16 INFO manager.SqlManager: Executing SQL statement: SELECT * from employees where  (1 = 0)
 17/02/21 08:54:16 INFO manager.SqlManager: Executing SQL statement: SELECT * from employees where  (1 = 0)
 17/02/21 08:54:16 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
 Note: /tmp/sqoop-cmadmin/compile/32f74db698040b57c22af35843d5af89/QueryResult.java uses or overrides a deprecated API.
 Note: Recompile with -Xlint:deprecation for details.
 17/02/21 08:54:17 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cmadmin/compile/32f74db698040b57c22af35843d5af89/QueryResult.jar
 17/02/21 08:54:17 INFO mapreduce.ImportJobBase: Beginning query import.
 17/02/21 08:54:17 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
 17/02/21 08:54:18 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
 17/02/21 08:54:18 INFO hdfs.DFSClient: Created token for zahmed: HDFS_DELEGATION_TOKEN owner=zahmed@ADDOMAIN.AHMEDINC.COM, renewer=yarn, realUser=, issueDate=1487667258619, maxDate=1488272058619, sequenceNumber=19, masterKeyId=10 on ha-hdfs:hdfsHA
 17/02/21 08:54:18 INFO security.TokenCache: Got dt for hdfs://hdfsHA; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hdfsHA, Ident: (token for zahmed: HDFS_DELEGATION_TOKEN owner=zahmed@ADDOMAIN.AHMEDINC.COM, renewer=yarn, realUser=, issueDate=1487667258619, maxDate=1488272058619, sequenceNumber=19, masterKeyId=10)
 17/02/21 08:54:20 INFO db.DBInputFormat: Using read commited transaction isolation
 17/02/21 08:54:20 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(eid), MAX(eid) FROM (SELECT * from employees where  (1 = 1) ) AS t1
 17/02/21 08:54:20 WARN db.TextSplitter: Generating splits for a textual index column.
 17/02/21 08:54:20 WARN db.TextSplitter: If your database sorts in a case-insensitive order, this may result in a partial import or duplicate records.
 17/02/21 08:54:20 WARN db.TextSplitter: You are strongly encouraged to choose an integral split column.
 17/02/21 08:54:20 INFO mapreduce.JobSubmitter: number of splits:5
 17/02/21 08:54:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1487410266772_0001
 17/02/21 08:54:20 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hdfsHA, Ident: (token for zahmed: HDFS_DELEGATION_TOKEN owner=zahmed@ADDOMAIN.AHMEDINC.COM, renewer=yarn, realUser=, issueDate=1487667258619, maxDate=1488272058619, sequenceNumber=19, masterKeyId=10)
 17/02/21 08:54:22 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1487410266772_0001 is still in NEW
 17/02/21 08:54:23 INFO impl.YarnClientImpl: Submitted application application_1487410266772_0001
 17/02/21 08:54:23 INFO mapreduce.Job: The url to track the job: http://ift.tt/2mJSdXd:8088/proxy/application_1487410266772_0001/
 17/02/21 08:54:23 INFO mapreduce.Job: Running job: job_1487410266772_0001
 17/02/21 08:54:34 INFO mapreduce.Job: Job job_1487410266772_0001 running in uber mode : false
 17/02/21 08:54:34 INFO mapreduce.Job:  map 0% reduce 0%
 17/02/21 08:54:40 INFO mapreduce.Job:  map 20% reduce 0%
 17/02/21 08:54:43 INFO mapreduce.Job:  map 60% reduce 0%
 17/02/21 08:54:46 INFO mapreduce.Job:  map 100% reduce 0%
 17/02/21 08:54:46 INFO mapreduce.Job: Job job_1487410266772_0001 completed successfully
 17/02/21 08:54:46 INFO mapreduce.Job: Counters: 30
         File System Counters
                 FILE: Number of bytes read=0
                 FILE: Number of bytes written=768050
                 FILE: Number of read operations=0
                 FILE: Number of large read operations=0
                 FILE: Number of write operations=0
                 HDFS: Number of bytes read=636
                 HDFS: Number of bytes written=102
                 HDFS: Number of read operations=20
                 HDFS: Number of large read operations=0
                 HDFS: Number of write operations=10
         Job Counters
                 Launched map tasks=5
                 Other local map tasks=5
                 Total time spent by all maps in occupied slots (ms)=37208
                 Total time spent by all reduces in occupied slots (ms)=0
                 Total time spent by all map tasks (ms)=37208
                 Total vcore-seconds taken by all map tasks=37208
                 Total megabyte-seconds taken by all map tasks=38100992
         Map-Reduce Framework
                 Map input records=3
                 Map output records=3
                 Input split bytes=636
                 Spilled Records=0
                 Failed Shuffles=0
                 Merged Map outputs=0
                 GC time elapsed (ms)=94
                 CPU time spent (ms)=3680
                 Physical memory (bytes) snapshot=1625182208
                 Virtual memory (bytes) snapshot=8428191744
                 Total committed heap usage (bytes)=4120903680
         File Input Format Counters
                 Bytes Read=0
         File Output Format Counters
                 Bytes Written=102
 17/02/21 08:54:46 INFO mapreduce.ImportJobBase: Transferred 102 bytes in 27.8888 seconds (3.6574 bytes/sec)
 17/02/21 08:54:46 INFO mapreduce.ImportJobBase: Retrieved 3 records.
Checking for data in HDFS.
 $ hdfs dfs -ls /user/zahmed/
 Found 2 items
 drwx------   - zahmed supergroup          0 2017-02-21 08:54 /user/zahmed/.staging
 drwxr-xr-x   - zahmed supergroup          0 2017-02-21 08:54 /user/zahmed/sqoop_test
Here is the data which was picked up by the (SQOOP) MR job.
 $ hdfs dfs -ls /user/zahmed/sqoop_test
 Found 6 items
 -rw-r--r--   3 zahmed supergroup          0 2017-02-21 08:54 /user/zahmed/sqoop_test/_SUCCESS
 -rw-r--r--   3 zahmed supergroup          0 2017-02-21 08:54 /user/zahmed/sqoop_test/part-m-00000
 -rw-r--r--   3 zahmed supergroup         35 2017-02-21 08:54 /user/zahmed/sqoop_test/part-m-00001
 -rw-r--r--   3 zahmed supergroup          0 2017-02-21 08:54 /user/zahmed/sqoop_test/part-m-00002
 -rw-r--r--   3 zahmed supergroup          0 2017-02-21 08:54 /user/zahmed/sqoop_test/part-m-00003
 -rw-r--r--   3 zahmed supergroup         67 2017-02-21 08:54 /user/zahmed/sqoop_test/part-m-00004
 $ hdfs dfs -cat /user/zahmed/sqoop_test/part-m-00000
 $ hdfs dfs -cat /user/zahmed/sqoop_test/part-m-00001
 123123EFD123,Z AHMD,11000,ENGINEER
 $ hdfs dfs -cat /user/zahmed/sqoop_test/part-m-00003
 $ hdfs dfs -cat /user/zahmed/sqoop_test/part-m-00002
 $ hdfs dfs -cat /user/zahmed/sqoop_test/part-m-00004
 123EFD,ZUBAIR AHMED,1000,ENGINEER
 123EFD123,Z AHMED,11000,ENGINEER
 [Note: A few of the map tasks did not receive any data as there were only 3 rows in the table.]

Key Trustee Testing

NOTE: To enable key trustee the server should be kerberos enabled.

Create a key and directory.

 kinit 
 hadoop key create mykey1
 hadoop fs -mkdir /tmp/zone1
 kinit hdfs
 hdfs crypto -createZone -keyName mykey1 -path /tmp/zone1

Create a file, put it in your zone and ensure the file can be decrypted.

 kinit 
 echo "Hello World" > /tmp/helloWorld.txt
 hadoop fs -put /tmp/helloWorld.txt /tmp/zone1
 hadoop fs -cat /tmp/zone1/helloWorld.txt
 rm /tmp/helloWorld.txt

Ensure the file is stored as encrypted.

 kinit hdfs
 hadoop fs -cat /.reserved/raw/tmp/zone1/helloWorld.txt
 hadoop fs -rm -R /tmp/zone1

Command Output

Getting user credentials.
 $ kinit zahmed@ADDOMAIN.AHMEDINC.COM
 Password for zahmed@ADDOMAIN.AHMEDINC.COM:
 $ hdfs dfs -ls /
 Found 3 items
 drwx------   - hbase hbase               0 2017-02-23 14:43 /hbase
 drwxrwxrwx   - hdfs  supergroup          0 2017-02-21 13:37 /tmp
 drwxr-xr-x   - hdfs  supergroup          0 2017-02-17 17:47 /user
 $ hdfs dfs -ls /user
 Found 10 items
 drwxr-xr-x   - hdfs   supergroup          0 2017-02-17 09:18 /user/hdfs
 drwxrwxrwx   - mapred hadoop              0 2017-02-16 15:13 /user/history
 drwxr-xr-x   - hdfs   supergroup          0 2017-02-17 19:15 /user/hive
 drwxrwxr-x   - hue    hue                 0 2017-02-16 15:16 /user/hue
 drwxrwxr-x   - impala impala              0 2017-02-16 15:16 /user/impala
 drwxrwxr-x   - oozie  oozie               0 2017-02-16 15:17 /user/oozie
 drwxr-x--x   - spark  spark               0 2017-02-16 15:14 /user/spark
 drwxrwxr-x   - sqoop2 sqoop               0 2017-02-16 15:18 /user/sqoop2
 drwxr-xr-x   - yxc27  supergroup          0 2017-02-17 18:09 /user/yxc27
 drwxr-xr-x   - zahmed  supergroup          0 2017-02-20 08:20 /user/zahmed
Creating a key
 $ hadoop key create mykey1
 mykey1 has been successfully created with options Options{cipher='AES/CTR/NoPadding', bitLength=128, description='null', attributes=null}.
 org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@62e10dd0 has been updated.
Creating a zone
 $ hadoop fs -mkdir /tmp/zone1
Log in as hdfs
 $ cd /var/run/cloudera-scm-agent/process/
 $ sudo su
 # ls -lt | grep hdfs
 drwxr-x--x. 3 hdfs      hdfs      500 Feb 23 14:50 1071-namenodes-failover
 drwxr-x--x. 3 hdfs      hdfs      500 Feb 23 14:48 1070-hdfs-NAMENODE-safemode-wait
 drwxr-x--x. 3 hdfs      hdfs      380 Feb 23 14:47 1069-hdfs-FAILOVERCONTROLLER
 drwxr-x--x. 3 hdfs      hdfs      400 Feb 23 14:47 598-hdfs-FAILOVERCONTROLLER
 drwxr-x--x. 3 hdfs      hdfs      500 Feb 23 14:47 1068-hdfs-NAMENODE-nnRpcWait
 drwxr-x--x. 3 hdfs      hdfs      500 Feb 23 14:47 1067-hdfs-NAMENODE
 drwxr-x--x. 3 hdfs      hdfs      520 Feb 23 14:47 1063-hdfs-NAMENODE-rollEdits
 drwxr-x--x. 3 hdfs      hdfs      500 Feb 23 14:47 1065-hdfs-NAMENODE-jnSyncWait
 # cd 1071-namenodes-failover
 # hostname
 server.tigris.ahmedinc.com
 # kinit -kt hdfs.keytab hdfs/server.tigris.ahmedinc.com@DEVDOMAIN.AHMEDINC.COM
Creating Zone.
 # hdfs crypto -createZone -keyName mykey1 -path /tmp/zone1
 Added encryption zone /tmp/zone1
 # exit
 exit
Log in as the admin user.
 $ klist
 Ticket cache: FILE:/tmp/krb5cc_9002
 Default principal: zahmed@ADDOMAIN.AHMEDINC.COM

 Valid starting     Expires            Service principal
 02/23/17 15:54:57  02/24/17 01:55:01  krbtgt/ADDOMAIN.AHMEDINC.COM@ADDOMAIN.AHMEDINC.COM
         renew until 03/02/17 15:54:57
 $ echo "Hello World" > /tmp/helloWorld.txt
 $ hadoop fs -put /tmp/helloWorld.txt /tmp/zone1
 $ hadoop fs -cat /tmp/zone1/helloWorld.txt
 Hello World
 $ rm /tmp/helloWorld.txt
 $ sudo su
 # klist
 Ticket cache: FILE:/tmp/krb5cc_0
 Default principal: hdfs/server.tigris.ahmedinc.com@DEVDOMAIN.AHMEDINC.COM

 Valid starting     Expires            Service principal
 02/23/17 15:57:15  02/24/17 01:57:14  krbtgt/DEVDOMAIN.AHMEDINC.COM@DEVDOMAIN.AHMEDINC.COM
         renew until 03/02/17 15:57:15
 # hadoop fs -cat /.reserved/raw/tmp/zone1/helloWorld.txt
 ▒▒▒i▒
 # hadoop fs -rm -R /tmp/zone1
 17/02/23 15:58:59 INFO fs.TrashPolicyDefault: Moved: 'hdfs://hdfsHA/tmp/zone1' to trash at: hdfs://hdfsHA/user/hdfs/.Trash/Current/tmp/zone1
 #


Setting Hue to Listen on `0.0.0.0`

March 19, 2017
We were working on setting up a cluster, but the Hue URL was set to the private IP of the server, as we had set up all the nodes to access each other over private IPs. We wanted Hue to bind to the public interface so that it could be accessed from within the network.
Bind Hue to the wildcard address:
  1. Go to Hue -> Configuration -> search for Bind Hue.
  2. Check Bind Hue to Wildcard Address.
  3. Restart the Hue Server.
We are done.


Nagios – Service Group Summary ERROR

October 13, 2016
We were working on Nagios and found that, after our migration, the service group summary was not working.
You might get one of the errors below on the screen; the solution is the same for both issues.

Problem 1.

The error is: Could not open CGI config file '/usr/local/nagios/etc/cgi.cfg' for reading

Problem 2.

Nagios: It appears as though you do not have permission to view information for any of the hosts you requested…

Solution.

Update /usr/local/nagios/etc/cgi.cfg to the configuration below, and restart the nagios service.
# MODIFIED
default_statusmap_layout=6

# UNMODIFIED
action_url_target=_blank
authorized_for_all_host_commands=nagiosadmin
authorized_for_all_hosts=nagiosadmin
authorized_for_all_service_commands=nagiosadmin
authorized_for_all_services=nagiosadmin
authorized_for_configuration_information=nagiosadmin
authorized_for_system_commands=nagiosadmin
authorized_for_system_information=nagiosadmin
default_statuswrl_layout=4
escape_html_tags=1
lock_author_names=1
main_config_file=/usr/local/nagios/etc/nagios.cfg
notes_url_target=_blank
physical_html_path=/usr/local/nagios/share
ping_syntax=/bin/ping -n -U -c 5 $HOSTADDRESS$
refresh_rate=90
show_context_help=0
url_html_path=/nagios
use_authentication=1
use_pending_states=1
use_ssl_authentication=0

Steps to make the above change.

  1. Extract the installation archive.
  2. Find the cgi.cfg configuration file.
  3. Take a backup of the original file.
  4. Copy the cgi.cfg file to /usr/local/nagios/etc/cgi.cfg location.
  5. Restart nagios services.

1. Extract the installation archive.

[root@nagiosserver nagios_download]# tar xvf xi-5.2.9.tar.gz 
[root@nagiosserver nagios_download]# cd nagiosxi/
[root@nagiosserver nagiosxi]# ls
0-repos            5-sudoers        cpan                   get-os-info            install-sourceguardian-extension.sh      rpmupgrade                vmsetup
10-phplimits       6-firewall       dashlets.txt           get-version            install-sudoers                          sourceguardian            wizards.txt
11-sourceguardian  7-sendmail       D-chkconfigalldaemons  init-auditlog          install-templates                        subcomponents             xi-sys.cfg
12-mrtg            8-selinux        debianmods             init-mysql             licenses                                 susemods                  xivar
13-cacti           9-dbbackups      E-importnagiosql       init.sh                nagiosxi                                 tools                     Z-webroot
14-timezone        A-subcomponents  fedoramods             init-xidb              nagiosxi-deps-5.2.9-1.noarch.rpm         ubuntumods
1-prereqs          B-installxi      fix-nagiosadmin        install-2012-prereqs   nagiosxi-deps-el7-5.2.9-1.noarch.rpm     uninstall-crontab-nagios
2-usersgroups      C-cronjobs       F-startdaemons         install-html           nagiosxi-deps-suse11-5.2.9-1.noarch.rpm  uninstall-crontab-root
3-dbservers        CHANGELOG.txt    fullinstall            install-nagiosxi-init  packages                                 upgrade
4-services         components.txt   functions.sh           install-pnptemplates   rpminstall                               verify-prereqs.php

2. Find the cgi.cfg configuration file.

[root@nagiosserver nagiosxi]# find . -name "cgi.cfg" -print
./subcomponents/nagioscore/mods/cfg/cgi.cfg
[root@nagiosserver nagiosxi]# cat ./subcomponents/nagioscore/mods/cfg/cgi.cfg
# MODIFIED
default_statusmap_layout=6

# UNMODIFIED
action_url_target=_blank
authorized_for_all_host_commands=nagiosadmin
authorized_for_all_hosts=nagiosadmin
authorized_for_all_service_commands=nagiosadmin
authorized_for_all_services=nagiosadmin
authorized_for_configuration_information=nagiosadmin
authorized_for_system_commands=nagiosadmin
authorized_for_system_information=nagiosadmin
default_statuswrl_layout=4
escape_html_tags=1
lock_author_names=1
main_config_file=/usr/local/nagios/etc/nagios.cfg
notes_url_target=_blank
physical_html_path=/usr/local/nagios/share
ping_syntax=/bin/ping -n -U -c 5 $HOSTADDRESS$
refresh_rate=90
show_context_help=0
url_html_path=/nagios
use_authentication=1
use_pending_states=1
use_ssl_authentication=0

3. Take a backup of the original file.

[root@nagiosserver nagiosxi]# cp /usr/local/nagios/etc/cgi.cfg /usr/local/nagios/etc/cgi.cfg.org

4. Copy the cgi.cfg file to /usr/local/nagios/etc/cgi.cfg location.

[root@nagiosserver nagiosxi]# cp ./subcomponents/nagioscore/mods/cfg/cgi.cfg /usr/local/nagios/etc/cgi.cfg

5. Restart nagios services.

[root@nagiosserver nagiosxi]# service httpd restart; service nagios restart
Stopping httpd:                                            [  OK  ]
Starting httpd:                                            [  OK  ]
Running configuration check...
Stopping nagios:. done.
Starting nagios: done.
The issue is now resolved.

Configuration file with explanation.

Location : cgi.cfg.in
#################################################################
#
# CGI.CFG - Sample CGI Configuration File for Nagios @VERSION@
#
#
#################################################################

# MAIN CONFIGURATION FILE
# This tells the CGIs where to find your main configuration file.
# The CGIs will read the main and host config files for any other
# data they might need.

main_config_file=@sysconfdir@/nagios.cfg

# PHYSICAL HTML PATH
# This is the path where the HTML files for Nagios reside.  This
# value is used to locate the logo images needed by the statusmap
# and statuswrl CGIs.

physical_html_path=@datadir@

# URL HTML PATH
# This is the path portion of the URL that corresponds to the
# physical location of the Nagios HTML files (as defined above).
# This value is used by the CGIs to locate the online documentation
# and graphics.  If you access the Nagios pages with an URL like
# http://ift.tt/2dna5Vv, this value should be '/nagios'
# (without the quotes).

url_html_path=@htmurl@

# CONTEXT-SENSITIVE HELP
# This option determines whether or not a context-sensitive
# help icon will be displayed for most of the CGIs.
# Values: 0 = disables context-sensitive help
#         1 = enables context-sensitive help

show_context_help=0

# PENDING STATES OPTION
# This option determines what states should be displayed in the web
# interface for hosts/services that have not yet been checked.
# Values: 0 = leave hosts/services that have not been check yet in their original state
#         1 = mark hosts/services that have not been checked yet as PENDING

use_pending_states=1

# AUTHENTICATION USAGE
# This option controls whether or not the CGIs will use any 
# authentication when displaying host and service information, as
# well as committing commands to Nagios for processing.  
#
# Read the HTML documentation to learn how the authorization works!
#
# NOTE: It is a really *bad* idea to disable authorization, unless
# you plan on removing the command CGI (cmd.cgi)!  Failure to do
# so will leave you wide open to kiddies messing with Nagios and
# possibly hitting you with a denial of service attack by filling up
# your drive by continuously writing to your command file!
#
# Setting this value to 0 will cause the CGIs to *not* use
# authentication (bad idea), while any other value will make them
# use the authentication functions (the default).

use_authentication=1

# x509 CERT AUTHENTICATION
# When enabled, this option allows you to use x509 cert (SSL)
# authentication in the CGIs.  This is an advanced option and should
# not be enabled unless you know what you're doing.

use_ssl_authentication=0

# DEFAULT USER
# Setting this variable will define a default user name that can
# access pages without authentication.  This allows people within a
# secure domain (i.e., behind a firewall) to see the current status
# without authenticating.  You may want to use this to avoid basic
# authentication if you are not using a secure server since basic
# authentication transmits passwords in the clear.
#
# Important:  Do not define a default username unless you are
# running a secure web server and are sure that everyone who has
# access to the CGIs has been authenticated in some manner!  If you
# define this variable, anyone who has not authenticated to the web
# server will inherit all rights you assign to this user!

#default_user_name=guest

# SYSTEM/PROCESS INFORMATION ACCESS
# This option is a comma-delimited list of all usernames that
# have access to viewing the Nagios process information as
# provided by the Extended Information CGI (extinfo.cgi).  By
# default, *no one* has access to this unless you choose to
# not use authorization.  You may use an asterisk (*) to
# authorize any user who has authenticated to the web server.

authorized_for_system_information=nagiosadmin

# CONFIGURATION INFORMATION ACCESS
# This option is a comma-delimited list of all usernames that
# can view ALL configuration information (hosts, commands, etc).
# By default, users can only view configuration information
# for the hosts and services they are contacts for. You may use
# an asterisk (*) to authorize any user who has authenticated
# to the web server.

authorized_for_configuration_information=nagiosadmin

# SYSTEM/PROCESS COMMAND ACCESS
# This option is a comma-delimited list of all usernames that
# can issue shutdown and restart commands to Nagios via the
# command CGI (cmd.cgi).  Users in this list can also change
# the program mode to active or standby. By default, *no one*
# has access to this unless you choose to not use authorization.
# You may use an asterisk (*) to authorize any user who has
# authenticated to the web server.

authorized_for_system_commands=nagiosadmin

# GLOBAL HOST/SERVICE VIEW ACCESS
# These two options are comma-delimited lists of all usernames that
# can view information for all hosts and services that are being
# monitored.  By default, users can only view information
# for hosts or services that they are contacts for (unless you
# choose to not use authorization). You may use an asterisk (*)
# to authorize any user who has authenticated to the web server.

authorized_for_all_services=nagiosadmin
authorized_for_all_hosts=nagiosadmin

# GLOBAL HOST/SERVICE COMMAND ACCESS
# These two options are comma-delimited lists of all usernames that
# can issue host or service related commands via the command
# CGI (cmd.cgi) for all hosts and services that are being monitored. 
# By default, users can only issue commands for hosts or services 
# that they are contacts for (unless you choose to not use 
# authorization).  You may use an asterisk (*) to authorize any
# user who has authenticated to the web server.

authorized_for_all_service_commands=nagiosadmin
authorized_for_all_host_commands=nagiosadmin

# READ-ONLY USERS
# A comma-delimited list of usernames that have read-only rights in
# the CGIs.  This will block any service or host commands normally shown
# on the extinfo CGI pages.  It will also block comments from being shown
# to read-only users.

#authorized_for_read_only=user1,user2

# STATUSMAP BACKGROUND IMAGE
# This option allows you to specify an image to be used as a 
# background in the statusmap CGI.  It is assumed that the image
# resides in the HTML images path (i.e. /usr/local/nagios/share/images).
# This path is automatically determined by appending "/images"
# to the path specified by the 'physical_html_path' directive.
# Note:  The image file may be in GIF, PNG, JPEG, or GD2 format.
# However, I recommend that you convert your image to GD2 format
# (uncompressed), as this will cause less CPU load when the CGI
# generates the image.

#statusmap_background_image=smbackground.gd2

# STATUSMAP TRANSPARENCY INDEX COLOR
# These options set the r,g,b values of the background color used by the statusmap CGI,
# so normal browsers that can't show real png transparency set the desired color as
# a background color instead (to make it look pretty).  
# Defaults to white: (R,G,B) = (255,255,255).

#color_transparency_index_r=255
#color_transparency_index_g=255
#color_transparency_index_b=255

# DEFAULT STATUSMAP LAYOUT METHOD
# This option allows you to specify the default layout method
# the statusmap CGI should use for drawing hosts.  If you do
# not use this option, the default is to use user-defined
# coordinates.  Valid options are as follows:
#    0 = User-defined coordinates
#    1 = Depth layers
#    2 = Collapsed tree
#    3 = Balanced tree
#    4 = Circular
#    5 = Circular (Marked Up)

default_statusmap_layout=5

# DEFAULT STATUSWRL LAYOUT METHOD
# This option allows you to specify the default layout method
# the statuswrl (VRML) CGI should use for drawing hosts.  If you
# do not use this option, the default is to use user-defined
# coordinates.  Valid options are as follows:
#    0 = User-defined coordinates
#    2 = Collapsed tree
#    3 = Balanced tree
#    4 = Circular

default_statuswrl_layout=4

# STATUSWRL INCLUDE
# This option allows you to include your own objects in the 
# generated VRML world.  It is assumed that the file
# resides in the HTML path (i.e. /usr/local/nagios/share).

#statuswrl_include=myworld.wrl

# PING SYNTAX
# This option determines what syntax should be used when
# attempting to ping a host from the WAP interface (using
# the statuswml CGI).  You must include the full path to
# the ping binary, along with all required options.  The
# $HOSTADDRESS$ macro is substituted with the address of
# the host before the command is executed.
# Please note that the syntax for the ping binary is
# notorious for being different on virtually every *NIX
# OS and distribution, so you may have to tweak this to
# work on your system.

ping_syntax=/bin/ping -n -U -c 5 $HOSTADDRESS$

# REFRESH RATE
# This option allows you to specify the refresh rate in seconds
# of various CGIs (status, statusmap, extinfo, and outages).  

refresh_rate=90

# DEFAULT PAGE LIMIT
# This option allows you to specify the default number of results 
# displayed on the status.cgi.  This number can be adjusted from
# within the UI after the initial page load. Setting this to 0
# will show all results.  

result_limit=100

# ESCAPE HTML TAGS
# This option determines whether HTML tags in host and service
# status output are escaped in the web interface.  If enabled,
# your plugin output will not be able to contain clickable links.

escape_html_tags=1

# SOUND OPTIONS
# These options allow you to specify an optional audio file
# that should be played in your browser window when there are
# problems on the network.  The audio files are used only in
# the status CGI.  Only the sound for the most critical problem
# will be played.  Order of importance (higher to lower) is as
# follows: unreachable hosts, down hosts, critical services,
# warning services, and unknown services. If there are no
# visible problems, the sound file optionally specified by
# 'normal_sound' variable will be played.
#
#
# <varname>=<sound_file>
#
# Note: All audio files must be placed in the /media subdirectory
# under the HTML path (i.e. /usr/local/nagios/share/media/).

#host_unreachable_sound=hostdown.wav
#host_down_sound=hostdown.wav
#service_critical_sound=critical.wav
#service_warning_sound=warning.wav
#service_unknown_sound=warning.wav
#normal_sound=noproblem.wav

# URL TARGET FRAMES
# These options determine the target frames in which notes and 
# action URLs will open.

action_url_target=_blank
notes_url_target=_blank

# LOCK AUTHOR NAMES OPTION
# This option determines whether users can change the author name 
# when submitting comments, scheduling downtime.  If disabled, the 
# author names will be locked into their contact name, as defined in Nagios.
# Values: 0 = allow editing author names
#         1 = lock author names (disallow editing)

lock_author_names=1

# SPLUNK INTEGRATION OPTIONS
# These options allow you to enable integration with Splunk
# in the web interface.  If enabled, you'll be presented with
# "Splunk It" links in various places in the CGIs (log file,
# alert history, host/service detail, etc).  Useful if you're
# trying to research why a particular problem occurred.
# For more information on Splunk, visit http://www.splunk.com/

# This option determines whether the Splunk integration is enabled
# Values: 0 = disable Splunk integration
#         1 = enable Splunk integration

#enable_splunk_integration=1

# This option should be the URL used to access your instance of Splunk

#splunk_url=http://127.0.0.1:8000/

# NAVIGATION BAR SEARCH OPTIONS
# The following options allow you to configure the navbar search. The default
# is to search for hostnames. With navbar_search_for_addresses enabled,
# the navbar search queries IP addresses as well. It's also possible
# to enable searching for aliases by setting navbar_search_for_aliases=1.

navbar_search_for_addresses=1
navbar_search_for_aliases=1
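
The authentication options above (use_authentication and the authorized_for_* lists) assume that the web server has already authenticated the user, typically via Apache basic auth. As a rough sketch, assuming the usual Nagios install paths (your paths may differ), the nagiosadmin account referenced throughout this file could be created with:
htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
The -c flag creates the password file; omit it when adding more users to an existing file.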


Zabbix History Table Clean Up

October 12, 2016 Leave a comment
The Zabbix history table gets really big, and if you are in a situation where you want to clean it up, you can do so using the steps below.
  1. Stop the Zabbix server.
  2. Take a table backup – just in case.
  3. Create a temporary table.
  4. Populate the temporary table with the data to keep, up to a specific date (using an epoch timestamp).
  5. Rename the old table.
  6. Rename the updated (new temporary) table to the original table name that needed the clean-up.
  7. Drop the old table. (Optional)
  8. Restart the Zabbix server.
This is not an official procedure, but it has worked for me, so use it at your own risk.
Here is another post which will help in reducing the size of the history tables – http://ift.tt/2dMfqJ5
Zabbix Version : Zabbix v2.4
Make sure MySQL 5.1 is running InnoDB with innodb_file_per_table=ON
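As a quick sanity check (a sketch, using the same zabbix credentials as the rest of this post), you can verify the setting from the shell; if it shows OFF, set innodb_file_per_table = 1 in the [mysqld] section of my.cnf and restart MySQL.
echo "SHOW VARIABLES LIKE 'innodb_file_per_table';" | mysql -uzabbix -pzabbix zabbix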

Step 1 Stop the Zabbix server

sudo service zabbix-server stop
Script.
echo "------------------------------------------"
echo "    1. Stopping Zabbix Server            "
echo "------------------------------------------"
sudo service zabbix-server stop;

Step 2 Take Table Backup.

mysqldump -uzabbix -pzabbix zabbix history_uint > /tmp/history_uint.sql
Script.
echo "------------------------------------------"
echo "    2. Backing up ${ZABBIX_TABLE_NAME} Table.    "
echo "    Location : ${BACKUP_FILE_PATH}        "
echo "------------------------------------------"
mkdir -p ${BACKUP_DIR_PATH}
mysqldump -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE ${ZABBIX_TABLE_NAME} > ${BACKUP_FILE_PATH}
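If you ever need to roll back, the dump taken above can be restored with the standard mysql client (a sketch re-using the same variables as the script):
mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE < ${BACKUP_FILE_PATH}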

Step 3 Open your favourite MySQL client and create a new table

CREATE TABLE history_uint_new_20161007 LIKE history_uint;
Script.
echo "------------------------------------------------------------------"
echo "    3. Create Temp (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) Table"
echo "------------------------------------------------------------------"
echo "CREATE TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} LIKE ${ZABBIX_TABLE_NAME}; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 4 Insert the latest records from the history_uint table into the new history_uint_new_20161007 table

Getting epoch time in bash is simple.
Date 3 Months Ago.
date --date "20160707" +%s
Current Date.
date --date "20161007" +%s
Here is the output.
[ahmed@localhost ~]$ date --date "20160707" +%s
1467829800
[ahmed@localhost ~]$ date --date "20161007" +%s
1475778600
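If you want to double-check which date an epoch value corresponds to before using it below, GNU date can convert it back:
date -d @1467829800
date -d @1475778600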
Now insert the last 3 months of data.
INSERT INTO history_uint_new_20161007 SELECT * FROM history_uint WHERE clock > '1467829800';
Script.
echo "------------------------------------------------------------------"
echo "    4. Inserting from ${ZABBIX_TABLE_NAME} Table to Temp (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) Table"
echo "------------------------------------------------------------------"
echo "INSERT INTO ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} SELECT * FROM ${ZABBIX_TABLE_NAME} WHERE clock > '${EPOCH_3MONTHS_BACK}'; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 5 – Move history_uint to history_uint_old table

ALTER TABLE history_uint RENAME history_uint_old;
Script.
echo "------------------------------------------------------------------"
echo "    5. Rename Table ${ZABBIX_TABLE_NAME} to ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old"
echo "------------------------------------------------------------------"
echo "ALTER TABLE ${ZABBIX_TABLE_NAME} RENAME ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old;" | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 6. Rename the newly created history_uint_new_20161007 table to history_uint

ALTER TABLE history_uint_new_20161007 RENAME history_uint;
Script.
echo "------------------------------------------"
echo "    6. Rename Temp Table (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) to Original Table (${ZABBIX_TABLE_NAME})"
echo "------------------------------------------"
echo "ALTER TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} RENAME ${ZABBIX_TABLE_NAME}; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 7. [OPTIONAL] Remove Old Table.

As we have already backed up the table, we no longer need it, so we can drop the old table.
DROP TABLE history_uint_old;
Script.
echo "------------------------------------------"
echo "    7. Dropping Old Table (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old), As we have already Backed it up. "
echo "------------------------------------------"
echo "DROP TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 8 – Start the Zabbix server

sudo service zabbix-server start
Script.
echo "------------------------------------------"
echo "    8. Starting Zabbix Server        "
echo "------------------------------------------"
sudo service zabbix-server start;

Step 9. [OPTIONAL] Reduce the history retention.

Additionally, you can update the items table and reduce the number of days of history kept for each item.
UPDATE items SET history = '15' WHERE history > '30';
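Before lowering the retention, it can be useful to see how many items currently keep more than the intended number of days (a sketch against the standard Zabbix 2.4 items table, with the same credentials as above):
echo "SELECT history, COUNT(*) FROM items GROUP BY history ORDER BY history;" | mysql -uzabbix -pzabbix zabbix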

Complete Script.

Location in GitHub

#!/bin/bash

THREE_MONTH_BACK_DATE=`date -d "now -3months" +%Y-%m-%d`
CURRENT_DATE=`date -d "now" +%Y-%m-%d`

EPOCH_3MONTHS_BACK=`date -d "$THREE_MONTH_BACK_DATE" +%s`
EPOCH_NOW=`date -d "$CURRENT_DATE" +%s`

ZABBIX_DATABASE="zabbix"
ZABBIX_USER="zabbix"
ZABBIX_PASSWD="zabbix"

ZABBIX_TABLE_NAME="history_uint"

BACKUP_DIR_PATH=/tmp/zabbix/zabbix_table_backup_${ZABBIX_TABLE_NAME}
BACKUP_FILE_PATH=${BACKUP_DIR_PATH}/${ZABBIX_TABLE_NAME}_${CURRENT_DATE}_${EPOCH_NOW}.sql

echo "------------------------------------------"
echo "Date to Keep Backup : $THREE_MONTH_BACK_DATE"
echo "Epoch to keep Backup : $EPOCH_3MONTHS_BACK"
echo "Today's Date : $CURRENT_DATE"
echo "Epoch For Today's Date : $EPOCH_NOW"
echo "------------------------------------------"

echo "##########################################"

echo "------------------------------------------"
echo "    1. Stopping Zabbix Server            "
echo "------------------------------------------"
sudo service zabbix-server stop; 
sleep 1

echo "------------------------------------------"
echo "    Display Tables                "
echo "------------------------------------------"
echo "show tables;" | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------"
echo "    2. Backing up ${ZABBIX_TABLE_NAME} Table.    "
echo "    Location : ${BACKUP_FILE_PATH}        "
echo "------------------------------------------"
mkdir -p ${BACKUP_DIR_PATH}
mysqldump -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE ${ZABBIX_TABLE_NAME} > ${BACKUP_FILE_PATH}
sleep 1

echo "------------------------------------------------------------------"
echo "    3. Create Temp (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) Table"
echo "------------------------------------------------------------------"
echo "CREATE TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} LIKE ${ZABBIX_TABLE_NAME}; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------------------------------"
echo "    4. Inserting from ${ZABBIX_TABLE_NAME} Table to Temp (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) Table"
echo "------------------------------------------------------------------"
echo "INSERT INTO ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} SELECT * FROM ${ZABBIX_TABLE_NAME} WHERE clock > '${EPOCH_3MONTHS_BACK}'; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------------------------------"
echo "    5. Rename Table ${ZABBIX_TABLE_NAME} to ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old"
echo "------------------------------------------------------------------"
echo "ALTER TABLE ${ZABBIX_TABLE_NAME} RENAME ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old;" | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------"
echo "    6. Rename Temp Table (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) to Original Table (${ZABBIX_TABLE_NAME})"
echo "------------------------------------------"
echo "ALTER TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} RENAME ${ZABBIX_TABLE_NAME}; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------"
echo "    7. Dropping Old Table (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old), As we have already Backed it up. "
echo "------------------------------------------"
echo "DROP TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------"
echo "    8. Starting Zabbix Server        "
echo "------------------------------------------"
sudo service zabbix-server start;

echo "##########################################"
