Basic Testing On Hadoop Environment [Cloudera]

March 20, 2017
These are a set of basic tests we can run on a Hadoop environment to make sure it is set up correctly.
NOTE: On a kerberized cluster we need to use a keytab to execute these commands.
Creating a keytab.
 $ ktutil
 ktutil:  addent -password -p <username>@ADDOMAIN.AHMEDINC.COM -k 1 -e RC4-HMAC
 Password for <username>@ADDOMAIN.AHMEDINC.COM: ********
 ktutil:  wkt <username>.keytab
 ktutil:  quit
 $ ls
 <username>.keytab
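To authenticate with the keytab later (a minimal sketch; substitute your own principal):
 $ kinit -kt <username>.keytab <username>@ADDOMAIN.AHMEDINC.COM
 $ klist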

HDFS Testing

Running pi
 hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 100 100000
Running TestDFSIO. Running the jar without a program name lists the available tests.
 hadoop jar  /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient*tests*.jar
Command output.
 $ hadoop jar  /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient*tests*.jar
 Unknown program '/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-tests.jar' chosen.
 Valid program names are:
   DFSCIOTest: Distributed i/o benchmark of libhdfs.
   DistributedFSCheck: Distributed checkup of the file system consistency.
   JHLogAnalyzer: Job History Log analyzer.
   MRReliabilityTest: A program that tests the reliability of the MR framework by injecting faults/failures
   SliveTest: HDFS Stress Test and Live Data Verification.
   TestDFSIO: Distributed i/o benchmark.
   fail: a job that always fails
   filebench: Benchmark SequenceFile(Input|Output)Format (block,record compressed and uncompressed), Text(Input|Output)Format (compressed and uncompressed)
   largesorter: Large-Sort tester
   loadgen: Generic map/reduce load generator
   mapredtest: A map/reduce test check.
   minicluster: Single process HDFS and MR cluster.
   mrbench: A map/reduce benchmark that can create many small jobs
   nnbench: A benchmark that stresses the namenode.
   sleep: A job that sleeps at each map and reduce task.
   testbigmapoutput: A map/reduce program that works on a very big non-splittable file and does identity map/reduce
   testfilesystem: A test for FileSystem read/write.
   testmapredsort: A map/reduce program that validates the map-reduce framework's sort.
   testsequencefile: A test for flat files of binary key value pairs.
   testsequencefileinputformat: A test for sequence file input format.
   testtextinputformat: A test for text input format.
   threadedmapbench: A map/reduce benchmark that compares the performance of maps with multiple spills over maps with 1 spill
Example execution.
 hadoop jar  /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient*tests*.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
 hadoop jar  /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient*tests*.jar TestDFSIO -read -nrFiles 10 -fileSize 1000
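Once done, the benchmark data can be cleaned up with TestDFSIO's standard -clean option:
 hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient*tests*.jar TestDFSIO -clean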
Running Terasort
First create the data using teragen.
 hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teragen 1000000 /user/zahmed/terasort-input
Then execute terasort (mapreduce job) on the generated teragen data set.
 hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar terasort /user/zahmed/terasort-input /user/zahmed/terasort-output
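Optionally, verify the output is fully sorted with teravalidate (the report path is an assumption):
 hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar teravalidate /user/zahmed/terasort-output /user/zahmed/terasort-validate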

YARN Testing

While the above jobs are running, we can go to Cloudera Manager -> YARN -> Applications to check on the running applications.

Testing Hive from Hue

If using a Kerberos environment, authenticate with kinit (using the keytab above) before creating a table.
Creating a Database.
 create database TEST;
Creating a Table.
 use TEST;
 CREATE TABLE IF NOT EXISTS employee ( eid int, name String, salary String, destination String);
Insert into table.
 insert into table employee values (1,'zubair','13123123','eng');
 select * from employee where eid=1;
This should return the inserted row.
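The same check can also be run from a Gateway node with beeline; a sketch assuming a hypothetical HiveServer2 host and the cluster's Hive service principal:
 beeline -u "jdbc:hive2://<hiveserver2-host>:10000/test;principal=hive/<hiveserver2-host>@<REALM>" -e "select * from employee where eid=1;"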

Testing Impala from Hue

Invalidate the metadata and check for the Hive database.
 invalidate metadata;
You should see the TEST database created earlier. Execute a select query to verify.
 select * from employee where eid=1;
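The same can be verified from a shell with impala-shell; -k enables Kerberos authentication, and the impalad host is a placeholder:
 impala-shell -k -i <impalad-host> -q "select * from test.employee where eid=1;"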

Testing Spark

Running a Pi job. Log on to one of the Gateway nodes.
 spark-submit --class org.apache.spark.examples.SparkPi --deploy-mode cluster --master yarn /opt/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/spark/lib/spark-examples.jar 10
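In cluster deploy mode the Pi result goes to the driver container's log rather than the console; it can be retrieved with the application ID printed by spark-submit:
 yarn logs -applicationId <application_id> | grep "Pi is roughly"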

Testing and Granting Permissions on HBase

NOTE: If you are using a Kerberos environment and want to grant access to other users, you need the hbase keytab. Pick up the hbase keytab first, then execute the commands below.
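A minimal sketch of obtaining the hbase credentials from a Cloudera-managed process directory (the numbered directory name is a placeholder and varies per process instance, as shown for hdfs later in this post):
 $ sudo su
 # cd /var/run/cloudera-scm-agent/process/<nnnn>-hbase-REGIONSERVER
 # kinit -kt hbase.keytab hbase/$(hostname -f)@<REALM>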
 $ hbase shell
 17/02/20 08:44:29 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
 HBase Shell; enter 'help' for list of supported commands.
 Type "exit" to leave the HBase Shell
 Version 1.2.0-cdh5.8.3, rUnknown, Wed Oct 12 20:32:08 PDT 2016
Creating emp table.
 hbase(main):001:0> create 'emp', 'personal data', 'professional data'
 0 row(s) in 2.5390 seconds

 => Hbase::Table - emp
 hbase(main):002:0> list
 TABLE
 emp
 1 row(s) in 0.0120 seconds

 => ["emp"]
 hbase(main):003:0> user_permission emp
 NameError: undefined local variable or method `emp' for #
Checking user permissions on the table (note that the table name must be quoted); currently the hbase user is the owner.
 hbase(main):004:0> user_permission "emp"
 User                                        Namespace,Table,Family,Qualifier:Permission
  hbase                                      default,emp,,: [Permission: actions=READ,WRITE,EXEC,CREATE,ADMIN]
 1 row(s) in 0.3380 seconds
Adding permission to new user.
 hbase(main):005:0> grant "zahmed", "RWC", "emp"
 0 row(s) in 0.2320 seconds
Checking Permission.
 hbase(main):006:0> user_permission "emp"
 User                                        Namespace,Table,Family,Qualifier:Permission
  zahmed                                      default,emp,,: [Permission: actions=READ,WRITE,CREATE]
  hbase                                      default,emp,,: [Permission: actions=READ,WRITE,EXEC,CREATE,ADMIN]
 2 row(s) in 0.0510 seconds

 hbase(main):007:0>
Now log on to Hue to check that the new HBase table appears there.

Testing SQOOP

Create a MySQL database and add a table with data.
Creating database.
 mysql> create database employee;
 Query OK, 1 row affected (0.01 sec)
Creating Table.
 mysql> CREATE TABLE IF NOT EXISTS employees ( eid varchar(20), name varchar(25), salary varchar(20), destination varchar(15));
 Query OK, 0 rows affected (0.00 sec)

 mysql> show tables;
 +--------------------+
 | Tables_in_employee |
 +--------------------+
 | employees          |
 +--------------------+
 1 row in set (0.00 sec)


 mysql> describe employees;
 +-------------+-------------+------+-----+---------+-------+
 | Field       | Type        | Null | Key | Default | Extra |
 +-------------+-------------+------+-----+---------+-------+
 | eid         | varchar(20) | YES  |     | NULL    |       |
 | name        | varchar(25) | YES  |     | NULL    |       |
 | salary      | varchar(20) | YES  |     | NULL    |       |
 | destination | varchar(15) | YES  |     | NULL    |       |
 +-------------+-------------+------+-----+---------+-------+
 4 rows in set (0.00 sec)
Inserting data into the table.
 mysql> insert into employees values ("123EFD", "ZUBAIR AHMED", "1000", "ENGINEER");
 Query OK, 1 row affected (0.00 sec)
Checking table.
 mysql> select * from employees;
 +--------+--------------+--------+-------------+
 | eid    | name         | salary | destination |
 +--------+--------------+--------+-------------+
 | 123EFD | ZUBAIR AHMED | 1000   | ENGINEER    |
 +--------+--------------+--------+-------------+
 1 row in set (0.01 sec)

 mysql> insert into employees values ("123EFD123", "Z AHMED", "11000", "ENGINEER");
 Query OK, 1 row affected (0.00 sec)

 mysql> insert into employees values ("123123EFD123", "Z AHMD", "11000", "ENGINEER");
 Query OK, 1 row affected (0.00 sec)

 mysql> select * from employees;
 +--------------+--------------+--------+-------------+
 | eid          | name         | salary | destination |
 +--------------+--------------+--------+-------------+
 | 123EFD       | ZUBAIR AHMED | 1000   | ENGINEER    |
 | 123EFD123    | Z AHMED      | 11000  | ENGINEER    |
 | 123123EFD123 | Z AHMD       | 11000  | ENGINEER    |
 +--------------+--------------+--------+-------------+
 3 rows in set (0.00 sec)
Grant privileges to a user that can access the database.
 mysql> grant all privileges on employee.* to emp@'%' identified by 'emp@123';
 Query OK, 0 rows affected (0.00 sec)
Once the database is created, execute the command below.
 sqoop import --connect jdbc:mysql://atlbdl1drlha001.gpsbd.lab1.ahmedinc.com/employee --username emp --password emp@123 --query 'SELECT * from employees where $CONDITIONS' --split-by eid --target-dir /user/zahmed/sqoop_test
Command output.
 $ sqoop import --connect jdbc:mysql://atlbdl1drlha001.gpsbd.lab1.ahmedinc.com/employee --username emp --password emp@123 --query 'SELECT * from employees where $CONDITIONS' --split-by eid --target-dir /user/zahmed/sqoop_test
 Warning: /opt/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
 Please set $ACCUMULO_HOME to the root of your Accumulo installation.
 17/02/21 08:54:15 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.8.3
 17/02/21 08:54:15 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
 17/02/21 08:54:16 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
 17/02/21 08:54:16 INFO tool.CodeGenTool: Beginning code generation
 17/02/21 08:54:16 INFO manager.SqlManager: Executing SQL statement: SELECT * from employees where  (1 = 0)
 17/02/21 08:54:16 INFO manager.SqlManager: Executing SQL statement: SELECT * from employees where  (1 = 0)
 17/02/21 08:54:16 INFO manager.SqlManager: Executing SQL statement: SELECT * from employees where  (1 = 0)
 17/02/21 08:54:16 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
 Note: /tmp/sqoop-cmadmin/compile/32f74db698040b57c22af35843d5af89/QueryResult.java uses or overrides a deprecated API.
 Note: Recompile with -Xlint:deprecation for details.
 17/02/21 08:54:17 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cmadmin/compile/32f74db698040b57c22af35843d5af89/QueryResult.jar
 17/02/21 08:54:17 INFO mapreduce.ImportJobBase: Beginning query import.
 17/02/21 08:54:17 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
 17/02/21 08:54:18 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
 17/02/21 08:54:18 INFO hdfs.DFSClient: Created token for zahmed: HDFS_DELEGATION_TOKEN owner=zahmed@ADDOMAIN.AHMEDINC.COM, renewer=yarn, realUser=, issueDate=1487667258619, maxDate=1488272058619, sequenceNumber=19, masterKeyId=10 on ha-hdfs:hdfsHA
 17/02/21 08:54:18 INFO security.TokenCache: Got dt for hdfs://hdfsHA; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hdfsHA, Ident: (token for zahmed: HDFS_DELEGATION_TOKEN owner=zahmed@ADDOMAIN.AHMEDINC.COM, renewer=yarn, realUser=, issueDate=1487667258619, maxDate=1488272058619, sequenceNumber=19, masterKeyId=10)
 17/02/21 08:54:20 INFO db.DBInputFormat: Using read commited transaction isolation
 17/02/21 08:54:20 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(eid), MAX(eid) FROM (SELECT * from employees where  (1 = 1) ) AS t1
 17/02/21 08:54:20 WARN db.TextSplitter: Generating splits for a textual index column.
 17/02/21 08:54:20 WARN db.TextSplitter: If your database sorts in a case-insensitive order, this may result in a partial import or duplicate records.
 17/02/21 08:54:20 WARN db.TextSplitter: You are strongly encouraged to choose an integral split column.
 17/02/21 08:54:20 INFO mapreduce.JobSubmitter: number of splits:5
 17/02/21 08:54:20 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1487410266772_0001
 17/02/21 08:54:20 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:hdfsHA, Ident: (token for zahmed: HDFS_DELEGATION_TOKEN owner=zahmed@ADDOMAIN.AHMEDINC.COM, renewer=yarn, realUser=, issueDate=1487667258619, maxDate=1488272058619, sequenceNumber=19, masterKeyId=10)
 17/02/21 08:54:22 INFO impl.YarnClientImpl: Application submission is not finished, submitted application application_1487410266772_0001 is still in NEW
 17/02/21 08:54:23 INFO impl.YarnClientImpl: Submitted application application_1487410266772_0001
 17/02/21 08:54:23 INFO mapreduce.Job: The url to track the job: http://ift.tt/2mJSdXd:8088/proxy/application_1487410266772_0001/
 17/02/21 08:54:23 INFO mapreduce.Job: Running job: job_1487410266772_0001
 17/02/21 08:54:34 INFO mapreduce.Job: Job job_1487410266772_0001 running in uber mode : false
 17/02/21 08:54:34 INFO mapreduce.Job:  map 0% reduce 0%
 17/02/21 08:54:40 INFO mapreduce.Job:  map 20% reduce 0%
 17/02/21 08:54:43 INFO mapreduce.Job:  map 60% reduce 0%
 17/02/21 08:54:46 INFO mapreduce.Job:  map 100% reduce 0%
 17/02/21 08:54:46 INFO mapreduce.Job: Job job_1487410266772_0001 completed successfully
 17/02/21 08:54:46 INFO mapreduce.Job: Counters: 30
         File System Counters
                 FILE: Number of bytes read=0
                 FILE: Number of bytes written=768050
                 FILE: Number of read operations=0
                 FILE: Number of large read operations=0
                 FILE: Number of write operations=0
                 HDFS: Number of bytes read=636
                 HDFS: Number of bytes written=102
                 HDFS: Number of read operations=20
                 HDFS: Number of large read operations=0
                 HDFS: Number of write operations=10
         Job Counters
                 Launched map tasks=5
                 Other local map tasks=5
                 Total time spent by all maps in occupied slots (ms)=37208
                 Total time spent by all reduces in occupied slots (ms)=0
                 Total time spent by all map tasks (ms)=37208
                 Total vcore-seconds taken by all map tasks=37208
                 Total megabyte-seconds taken by all map tasks=38100992
         Map-Reduce Framework
                 Map input records=3
                 Map output records=3
                 Input split bytes=636
                 Spilled Records=0
                 Failed Shuffles=0
                 Merged Map outputs=0
                 GC time elapsed (ms)=94
                 CPU time spent (ms)=3680
                 Physical memory (bytes) snapshot=1625182208
                 Virtual memory (bytes) snapshot=8428191744
                 Total committed heap usage (bytes)=4120903680
         File Input Format Counters
                 Bytes Read=0
         File Output Format Counters
                 Bytes Written=102
 17/02/21 08:54:46 INFO mapreduce.ImportJobBase: Transferred 102 bytes in 27.8888 seconds (3.6574 bytes/sec)
 17/02/21 08:54:46 INFO mapreduce.ImportJobBase: Retrieved 3 records.
Checking for data in HDFS.
 $ hdfs dfs -ls /user/zahmed/
 Found 2 items
 drwx------   - zahmed supergroup          0 2017-02-21 08:54 /user/zahmed/.staging
 drwxr-xr-x   - zahmed supergroup          0 2017-02-21 08:54 /user/zahmed/sqoop_test
Here is the data that was picked up by the Sqoop MR job.
 $ hdfs dfs -ls /user/zahmed/sqoop_test
 Found 6 items
 -rw-r--r--   3 zahmed supergroup          0 2017-02-21 08:54 /user/zahmed/sqoop_test/_SUCCESS
 -rw-r--r--   3 zahmed supergroup          0 2017-02-21 08:54 /user/zahmed/sqoop_test/part-m-00000
 -rw-r--r--   3 zahmed supergroup         35 2017-02-21 08:54 /user/zahmed/sqoop_test/part-m-00001
 -rw-r--r--   3 zahmed supergroup          0 2017-02-21 08:54 /user/zahmed/sqoop_test/part-m-00002
 -rw-r--r--   3 zahmed supergroup          0 2017-02-21 08:54 /user/zahmed/sqoop_test/part-m-00003
 -rw-r--r--   3 zahmed supergroup         67 2017-02-21 08:54 /user/zahmed/sqoop_test/part-m-00004
 $ hdfs dfs -cat /user/zahmed/sqoop_test/part-m-00000
 $ hdfs dfs -cat /user/zahmed/sqoop_test/part-m-00001
 123123EFD123,Z AHMD,11000,ENGINEER
 $ hdfs dfs -cat /user/zahmed/sqoop_test/part-m-00003
 $ hdfs dfs -cat /user/zahmed/sqoop_test/part-m-00002
 $ hdfs dfs -cat /user/zahmed/sqoop_test/part-m-00004
 123EFD,ZUBAIR AHMED,1000,ENGINEER
 123EFD123,Z AHMED,11000,ENGINEER
[Note: A few of the mappers did not receive any data, as there were only 3 rows in the table.]
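To re-run the import, the target directory must not already exist, so remove it first; the Sqoop log above also suggests -P to prompt for the password instead of passing it on the command line. A hedged variant of the command (MySQL host is a placeholder):
 hdfs dfs -rm -r /user/zahmed/sqoop_test
 sqoop import --connect jdbc:mysql://<mysql-host>/employee --username emp -P --query 'SELECT * from employees where $CONDITIONS' --split-by eid --target-dir /user/zahmed/sqoop_test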

Key Trustee Testing

NOTE: To enable Key Trustee, the cluster must be Kerberos-enabled.

Create a key and directory.

 kinit <username>
 hadoop key create mykey1
 hadoop fs -mkdir /tmp/zone1
 kinit hdfs
 hdfs crypto -createZone -keyName mykey1 -path /tmp/zone1
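 # The new encryption zone can then be verified (as the hdfs superuser):
 hdfs crypto -listZones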

Create a file, put it in your zone and ensure the file can be decrypted.

 kinit <username>
 echo "Hello World" > /tmp/helloWorld.txt
 hadoop fs -put /tmp/helloWorld.txt /tmp/zone1
 hadoop fs -cat /tmp/zone1/helloWorld.txt
 rm /tmp/helloWorld.txt

Ensure the file is stored as encrypted.

 kinit hdfs
 hadoop fs -cat /.reserved/raw/tmp/zone1/helloWorld.txt
 hadoop fs -rm -R /tmp/zone1

Command Output

Getting user credentials.
 $ kinit zahmed@ADDOMAIN.AHMEDINC.COM
 Password for zahmed@ADDOMAIN.AHMEDINC.COM:
 $ hdfs dfs -ls /
 Found 3 items
 drwx------   - hbase hbase               0 2017-02-23 14:43 /hbase
 drwxrwxrwx   - hdfs  supergroup          0 2017-02-21 13:37 /tmp
 drwxr-xr-x   - hdfs  supergroup          0 2017-02-17 17:47 /user
 $ hdfs dfs -ls /user
 Found 10 items
 drwxr-xr-x   - hdfs   supergroup          0 2017-02-17 09:18 /user/hdfs
 drwxrwxrwx   - mapred hadoop              0 2017-02-16 15:13 /user/history
 drwxr-xr-x   - hdfs   supergroup          0 2017-02-17 19:15 /user/hive
 drwxrwxr-x   - hue    hue                 0 2017-02-16 15:16 /user/hue
 drwxrwxr-x   - impala impala              0 2017-02-16 15:16 /user/impala
 drwxrwxr-x   - oozie  oozie               0 2017-02-16 15:17 /user/oozie
 drwxr-x--x   - spark  spark               0 2017-02-16 15:14 /user/spark
 drwxrwxr-x   - sqoop2 sqoop               0 2017-02-16 15:18 /user/sqoop2
 drwxr-xr-x   - yxc27  supergroup          0 2017-02-17 18:09 /user/yxc27
 drwxr-xr-x   - zahmed  supergroup          0 2017-02-20 08:20 /user/zahmed
Creating a key
 $ hadoop key create mykey1
 mykey1 has been successfully created with options Options{cipher='AES/CTR/NoPadding', bitLength=128, description='null', attributes=null}.
 org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@62e10dd0 has been updated.
Creating a zone
 $ hadoop fs -mkdir /tmp/zone1
Logging in as hdfs.
 $ cd /var/run/cloudera-scm-agent/process/
 $ sudo su
 # ls -lt | grep hdfs
 drwxr-x--x. 3 hdfs      hdfs      500 Feb 23 14:50 1071-namenodes-failover
 drwxr-x--x. 3 hdfs      hdfs      500 Feb 23 14:48 1070-hdfs-NAMENODE-safemode-wait
 drwxr-x--x. 3 hdfs      hdfs      380 Feb 23 14:47 1069-hdfs-FAILOVERCONTROLLER
 drwxr-x--x. 3 hdfs      hdfs      400 Feb 23 14:47 598-hdfs-FAILOVERCONTROLLER
 drwxr-x--x. 3 hdfs      hdfs      500 Feb 23 14:47 1068-hdfs-NAMENODE-nnRpcWait
 drwxr-x--x. 3 hdfs      hdfs      500 Feb 23 14:47 1067-hdfs-NAMENODE
 drwxr-x--x. 3 hdfs      hdfs      520 Feb 23 14:47 1063-hdfs-NAMENODE-rollEdits
 drwxr-x--x. 3 hdfs      hdfs      500 Feb 23 14:47 1065-hdfs-NAMENODE-jnSyncWait
 # cd 1071-namenodes-failover
 # hostname
 server.tigris.ahmedinc.com
 # kinit -kt hdfs.keytab hdfs/server.tigris.ahmedinc.com@DEVDOMAIN.AHMEDINC.COM
Creating Zone.
 # hdfs crypto -createZone -keyName mykey1 -path /tmp/zone1
 Added encryption zone /tmp/zone1
 # exit
 exit
Logging back in as the admin user.
 $ klist
 Ticket cache: FILE:/tmp/krb5cc_9002
 Default principal: zahmed@ADDOMAIN.AHMEDINC.COM

 Valid starting     Expires            Service principal
 02/23/17 15:54:57  02/24/17 01:55:01  krbtgt/ADDOMAIN.AHMEDINC.COM@ADDOMAIN.AHMEDINC.COM
         renew until 03/02/17 15:54:57
 $ echo "Hello World" > /tmp/helloWorld.txt
 $ hadoop fs -put /tmp/helloWorld.txt /tmp/zone1
 $ hadoop fs -cat /tmp/zone1/helloWorld.txt
 Hello World
 $ rm /tmp/helloWorld.txt
 $ sudo su
 # klist
 Ticket cache: FILE:/tmp/krb5cc_0
 Default principal: hdfs/server.tigris.ahmedinc.com@DEVDOMAIN.AHMEDINC.COM

 Valid starting     Expires            Service principal
 02/23/17 15:57:15  02/24/17 01:57:14  krbtgt/DEVDOMAIN.AHMEDINC.COM@DEVDOMAIN.AHMEDINC.COM
         renew until 03/02/17 15:57:15
 # hadoop fs -cat /.reserved/raw/tmp/zone1/helloWorld.txt
 ▒▒▒i▒
 # hadoop fs -rm -R /tmp/zone1
 17/02/23 15:58:59 INFO fs.TrashPolicyDefault: Moved: 'hdfs://hdfsHA/tmp/zone1' to trash at: hdfs://hdfsHA/user/hdfs/.Trash/Current/tmp/zone1
 #


Setting Hue to Listen on `0.0.0.0`

March 19, 2017
We were setting up a cluster where all the nodes accessed each other over private IPs, so the Hue URL was bound to the server's private IP. We wanted Hue to bind to the public interface so that it could be accessed from within the network.
Bind Hue to wild card address.
  1. Go to Hue -> Configuration -> search for Bind Hue.
  2. Check Bind Hue to Wildcard Address.
  3. Restart Hue Server.
    We are done.
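Equivalently, outside Cloudera Manager the same setting lives in the [desktop] section of hue.ini (a sketch using Hue's standard options):
 [desktop]
 http_host=0.0.0.0
 http_port=8888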


Nagios – Service Group Summary ERROR

October 13, 2016
We were working on Nagios and found that, after our migration, the Service Group Summary was not working.
You might get one of the errors below on the screen; the solution is similar for both issues.

Problem 1.

Error: Could not open CGI config file '/usr/local/nagios/etc/cgi.cfg' for reading

Problem 2.

Nagios: It appears as though you do not have permission to view information for any of the hosts you requested…

Solution.

Update /usr/local/nagios/etc/cgi.cfg to the configuration below, and restart the nagios service.
# MODIFIED
default_statusmap_layout=6

# UNMODIFIED
action_url_target=_blank
authorized_for_all_host_commands=nagiosadmin
authorized_for_all_hosts=nagiosadmin
authorized_for_all_service_commands=nagiosadmin
authorized_for_all_services=nagiosadmin
authorized_for_configuration_information=nagiosadmin
authorized_for_system_commands=nagiosadmin
authorized_for_system_information=nagiosadmin
default_statuswrl_layout=4
escape_html_tags=1
lock_author_names=1
main_config_file=/usr/local/nagios/etc/nagios.cfg
notes_url_target=_blank
physical_html_path=/usr/local/nagios/share
ping_syntax=/bin/ping -n -U -c 5 $HOSTADDRESS$
refresh_rate=90
show_context_help=0
url_html_path=/nagios
use_authentication=1
use_pending_states=1
use_ssl_authentication=0

Steps to make the above change.

  1. Extract the installation archive.
  2. Find the cgi.cfg configuration file.
  3. Take a backup of the original file.
  4. Copy the cgi.cfg file to /usr/local/nagios/etc/cgi.cfg location.
  5. Restart nagios services.

1. Extract the installation archive.

[root@nagiosserver nagios_download]# tar xvf xi-5.2.9.tar.gz 
[root@nagiosserver nagios_download]# cd nagiosxi/
[root@nagiosserver nagiosxi]# ls
0-repos            5-sudoers        cpan                   get-os-info            install-sourceguardian-extension.sh      rpmupgrade                vmsetup
10-phplimits       6-firewall       dashlets.txt           get-version            install-sudoers                          sourceguardian            wizards.txt
11-sourceguardian  7-sendmail       D-chkconfigalldaemons  init-auditlog          install-templates                        subcomponents             xi-sys.cfg
12-mrtg            8-selinux        debianmods             init-mysql             licenses                                 susemods                  xivar
13-cacti           9-dbbackups      E-importnagiosql       init.sh                nagiosxi                                 tools                     Z-webroot
14-timezone        A-subcomponents  fedoramods             init-xidb              nagiosxi-deps-5.2.9-1.noarch.rpm         ubuntumods
1-prereqs          B-installxi      fix-nagiosadmin        install-2012-prereqs   nagiosxi-deps-el7-5.2.9-1.noarch.rpm     uninstall-crontab-nagios
2-usersgroups      C-cronjobs       F-startdaemons         install-html           nagiosxi-deps-suse11-5.2.9-1.noarch.rpm  uninstall-crontab-root
3-dbservers        CHANGELOG.txt    fullinstall            install-nagiosxi-init  packages                                 upgrade
4-services         components.txt   functions.sh           install-pnptemplates   rpminstall                               verify-prereqs.php

2. Find the cgi.cfg configuration file.

[root@nagiosserver nagiosxi]# find . -name "cgi.cfg" -print
./subcomponents/nagioscore/mods/cfg/cgi.cfg
[root@nagiosserver nagiosxi]# cat ./subcomponents/nagioscore/mods/cfg/cgi.cfg
# MODIFIED
default_statusmap_layout=6

# UNMODIFIED
action_url_target=_blank
authorized_for_all_host_commands=nagiosadmin
authorized_for_all_hosts=nagiosadmin
authorized_for_all_service_commands=nagiosadmin
authorized_for_all_services=nagiosadmin
authorized_for_configuration_information=nagiosadmin
authorized_for_system_commands=nagiosadmin
authorized_for_system_information=nagiosadmin
default_statuswrl_layout=4
escape_html_tags=1
lock_author_names=1
main_config_file=/usr/local/nagios/etc/nagios.cfg
notes_url_target=_blank
physical_html_path=/usr/local/nagios/share
ping_syntax=/bin/ping -n -U -c 5 $HOSTADDRESS$
refresh_rate=90
show_context_help=0
url_html_path=/nagios
use_authentication=1
use_pending_states=1
use_ssl_authentication=0

3. Take a backup of the original file.

[root@nagiosserver nagiosxi]# cp /usr/local/nagios/etc/cgi.cfg /usr/local/nagios/etc/cgi.cfg.org

4. Copy the cgi.cfg file to /usr/local/nagios/etc/cgi.cfg location.

[root@nagiosserver nagiosxi]# cp ./subcomponents/nagioscore/mods/cfg/cgi.cfg /usr/local/nagios/etc/cgi.cfg

5. Restart nagios services.

[root@nagiosserver nagiosxi]# service httpd restart; service nagios restart
Stopping httpd:                                            [  OK  ]
Starting httpd:                                            [  OK  ]
Running configuration check...
Stopping nagios:. done.
Starting nagios: done.

Configuration file with explanation.

Location : cgi.cfg.in
#################################################################
#
# CGI.CFG - Sample CGI Configuration File for Nagios @VERSION@
#
#
#################################################################

# MAIN CONFIGURATION FILE
# This tells the CGIs where to find your main configuration file.
# The CGIs will read the main and host config files for any other
# data they might need.

main_config_file=@sysconfdir@/nagios.cfg

# PHYSICAL HTML PATH
# This is the path where the HTML files for Nagios reside.  This
# value is used to locate the logo images needed by the statusmap
# and statuswrl CGIs.

physical_html_path=@datadir@

# URL HTML PATH
# This is the path portion of the URL that corresponds to the
# physical location of the Nagios HTML files (as defined above).
# This value is used by the CGIs to locate the online documentation
# and graphics.  If you access the Nagios pages with an URL like
# http://ift.tt/2dna5Vv, this value should be '/nagios'
# (without the quotes).

url_html_path=@htmurl@

# CONTEXT-SENSITIVE HELP
# This option determines whether or not a context-sensitive
# help icon will be displayed for most of the CGIs.
# Values: 0 = disables context-sensitive help
#         1 = enables context-sensitive help

show_context_help=0

# PENDING STATES OPTION
# This option determines what states should be displayed in the web
# interface for hosts/services that have not yet been checked.
# Values: 0 = leave hosts/services that have not been check yet in their original state
#         1 = mark hosts/services that have not been checked yet as PENDING

use_pending_states=1

# AUTHENTICATION USAGE
# This option controls whether or not the CGIs will use any 
# authentication when displaying host and service information, as
# well as committing commands to Nagios for processing.  
#
# Read the HTML documentation to learn how the authorization works!
#
# NOTE: It is a really *bad* idea to disable authorization, unless
# you plan on removing the command CGI (cmd.cgi)!  Failure to do
# so will leave you wide open to kiddies messing with Nagios and
# possibly hitting you with a denial of service attack by filling up
# your drive by continuously writing to your command file!
#
# Setting this value to 0 will cause the CGIs to *not* use
# authentication (bad idea), while any other value will make them
# use the authentication functions (the default).

use_authentication=1

# x509 CERT AUTHENTICATION
# When enabled, this option allows you to use x509 cert (SSL)
# authentication in the CGIs.  This is an advanced option and should
# not be enabled unless you know what you're doing.

use_ssl_authentication=0

# DEFAULT USER
# Setting this variable will define a default user name that can
# access pages without authentication.  This allows people within a
# secure domain (i.e., behind a firewall) to see the current status
# without authenticating.  You may want to use this to avoid basic
# authentication if you are not using a secure server since basic
# authentication transmits passwords in the clear.
#
# Important:  Do not define a default username unless you are
# running a secure web server and are sure that everyone who has
# access to the CGIs has been authenticated in some manner!  If you
# define this variable, anyone who has not authenticated to the web
# server will inherit all rights you assign to this user!

#default_user_name=guest

# SYSTEM/PROCESS INFORMATION ACCESS
# This option is a comma-delimited list of all usernames that
# have access to viewing the Nagios process information as
# provided by the Extended Information CGI (extinfo.cgi).  By
# default, *no one* has access to this unless you choose to
# not use authorization.  You may use an asterisk (*) to
# authorize any user who has authenticated to the web server.

authorized_for_system_information=nagiosadmin

# CONFIGURATION INFORMATION ACCESS
# This option is a comma-delimited list of all usernames that
# can view ALL configuration information (hosts, commands, etc).
# By default, users can only view configuration information
# for the hosts and services they are contacts for. You may use
# an asterisk (*) to authorize any user who has authenticated
# to the web server.

authorized_for_configuration_information=nagiosadmin

# SYSTEM/PROCESS COMMAND ACCESS
# This option is a comma-delimited list of all usernames that
# can issue shutdown and restart commands to Nagios via the
# command CGI (cmd.cgi).  Users in this list can also change
# the program mode to active or standby. By default, *no one*
# has access to this unless you choose to not use authorization.
# You may use an asterisk (*) to authorize any user who has
# authenticated to the web server.

authorized_for_system_commands=nagiosadmin

# GLOBAL HOST/SERVICE VIEW ACCESS
# These two options are comma-delimited lists of all usernames that
# can view information for all hosts and services that are being
# monitored.  By default, users can only view information
# for hosts or services that they are contacts for (unless you
# you choose to not use authorization). You may use an asterisk (*)
# to authorize any user who has authenticated to the web server.

authorized_for_all_services=nagiosadmin
authorized_for_all_hosts=nagiosadmin

# GLOBAL HOST/SERVICE COMMAND ACCESS
# These two options are comma-delimited lists of all usernames that
# can issue host or service related commands via the command
# CGI (cmd.cgi) for all hosts and services that are being monitored. 
# By default, users can only issue commands for hosts or services 
# that they are contacts for (unless you you choose to not use 
# authorization).  You may use an asterisk (*) to authorize any
# user who has authenticated to the web server.

authorized_for_all_service_commands=nagiosadmin
authorized_for_all_host_commands=nagiosadmin

# READ-ONLY USERS
# A comma-delimited list of usernames that have read-only rights in
# the CGIs.  This will block any service or host commands normally shown
# on the extinfo CGI pages.  It will also block comments from being shown
# to read-only users.

#authorized_for_read_only=user1,user2

# STATUSMAP BACKGROUND IMAGE
# This option allows you to specify an image to be used as a 
# background in the statusmap CGI.  It is assumed that the image
# resides in the HTML images path (i.e. /usr/local/nagios/share/images).
# This path is automatically determined by appending "/images"
# to the path specified by the 'physical_html_path' directive.
# Note:  The image file may be in GIF, PNG, JPEG, or GD2 format.
# However, I recommend that you convert your image to GD2 format
# (uncompressed), as this will cause less CPU load when the CGI
# generates the image.

#statusmap_background_image=smbackground.gd2

# STATUSMAP TRANSPARENCY INDEX COLOR
# These options set the r,g,b values of the background color used the statusmap CGI,
# so normal browsers that can't show real png transparency set the desired color as
# a background color instead (to make it look pretty).  
# Defaults to white: (R,G,B) = (255,255,255).

#color_transparency_index_r=255
#color_transparency_index_g=255
#color_transparency_index_b=255

# DEFAULT STATUSMAP LAYOUT METHOD
# This option allows you to specify the default layout method
# the statusmap CGI should use for drawing hosts.  If you do
# not use this option, the default is to use user-defined
# coordinates.  Valid options are as follows:
#    0 = User-defined coordinates
#    1 = Depth layers
#       2 = Collapsed tree
#       3 = Balanced tree
#       4 = Circular
#       5 = Circular (Marked Up)

default_statusmap_layout=5

# DEFAULT STATUSWRL LAYOUT METHOD
# This option allows you to specify the default layout method
# the statuswrl (VRML) CGI should use for drawing hosts.  If you
# do not use this option, the default is to use user-defined
# coordinates.  Valid options are as follows:
#    0 = User-defined coordinates
#       2 = Collapsed tree
#       3 = Balanced tree
#       4 = Circular

default_statuswrl_layout=4

# STATUSWRL INCLUDE
# This option allows you to include your own objects in the 
# generated VRML world.  It is assumed that the file
# resides in the HTML path (i.e. /usr/local/nagios/share).

#statuswrl_include=myworld.wrl

# PING SYNTAX
# This option determines what syntax should be used when
# attempting to ping a host from the WAP interface (using
# the statuswml CGI.  You must include the full path to
# the ping binary, along with all required options.  The
# $HOSTADDRESS$ macro is substituted with the address of
# the host before the command is executed.
# Please note that the syntax for the ping binary is
# notorious for being different on virtually ever *NIX
# OS and distribution, so you may have to tweak this to
# work on your system.

ping_syntax=/bin/ping -n -U -c 5 $HOSTADDRESS$

# REFRESH RATE
# This option allows you to specify the refresh rate in seconds
# of various CGIs (status, statusmap, extinfo, and outages).  

refresh_rate=90

# DEFAULT PAGE LIMIT
# This option allows you to specify the default number of results 
# displayed on the status.cgi.  This number can be adjusted from
# within the UI after the initial page load. Setting this to 0
# will show all results.  

result_limit=100

# ESCAPE HTML TAGS
# This option determines whether HTML tags in host and service
# status output is escaped in the web interface.  If enabled,
# your plugin output will not be able to contain clickable links.

escape_html_tags=1

# SOUND OPTIONS
# These options allow you to specify an optional audio file
# that should be played in your browser window when there are
# problems on the network.  The audio files are used only in
# the status CGI.  Only the sound for the most critical problem
# will be played.  Order of importance (higher to lower) is as
# follows: unreachable hosts, down hosts, critical services,
# warning services, and unknown services. If there are no
# visible problems, the sound file optionally specified by
# 'normal_sound' variable will be played.
#
#
# <varname>=<sound_file>
#
# Note: All audio files must be placed in the /media subdirectory
# under the HTML path (i.e. /usr/local/nagios/share/media/).

#host_unreachable_sound=hostdown.wav
#host_down_sound=hostdown.wav
#service_critical_sound=critical.wav
#service_warning_sound=warning.wav
#service_unknown_sound=warning.wav
#normal_sound=noproblem.wav

# URL TARGET FRAMES
# These options determine the target frames in which notes and 
# action URLs will open.

action_url_target=_blank
notes_url_target=_blank

# LOCK AUTHOR NAMES OPTION
# This option determines whether users can change the author name 
# when submitting comments, scheduling downtime.  If disabled, the 
# author names will be locked into their contact name, as defined in Nagios.
# Values: 0 = allow editing author names
#         1 = lock author names (disallow editing)

lock_author_names=1

# SPLUNK INTEGRATION OPTIONS
# These options allow you to enable integration with Splunk
# in the web interface.  If enabled, you'll be presented with
# "Splunk It" links in various places in the CGIs (log file,
# alert history, host/service detail, etc).  Useful if you're
# trying to research why a particular problem occurred.
# For more information on Splunk, visit http://www.splunk.com/

# This option determines whether the Splunk integration is enabled
# Values: 0 = disable Splunk integration
#         1 = enable Splunk integration

#enable_splunk_integration=1

# This option should be the URL used to access your instance of Splunk

#splunk_url=http://127.0.0.1:8000/

# NAVIGATION BAR SEARCH OPTIONS
# The following options allow to configure the navbar search. Default
# is to search for hostnames. With enabled navbar_search_for_addresses,
# the navbar search queries IP addresses as well. It's also possible
# to enable search for aliases by setting navbar_search_for_aliases=1.

navbar_search_for_addresses=1
navbar_search_for_aliases=1


Zabbix History Table Clean Up

October 12, 2016
The Zabbix history table gets really big; if you are in a situation where you want to clean it up, you can do so using the steps below.
  1. Stop zabbix server.
  2. Take table backup – just in case.
  3. Create a temporary table.
  4. Update the temporary table with the data required, up to a specific date using epoch time.
  5. Move old table to a different table name.
  6. Move updated (new temporary) table to original table which needs to be cleaned-up.
  7. Drop the old table. (Optional)
  8. Restart Zabbix
This is not an official procedure, but it has worked for me, so use it at your own risk.
Here is another post which will help in reducing the size of history tables – http://ift.tt/2dMfqJ5
Zabbix Version : Zabbix v2.4
Make sure MySQL 5.1 is configured to use InnoDB with innodb_file_per_table=ON.
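You can verify the setting from the MySQL client:
mysql> SHOW VARIABLES LIKE 'innodb_file_per_table';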

Step 1 Stop the Zabbix server

sudo service zabbix-server stop
Script.
echo "------------------------------------------"
echo "    1. Stopping Zabbix Server            "
echo "------------------------------------------"
sudo service zabbix-server stop;

Step 2 Take Table Backup.

mysqldump -uzabbix -pzabbix zabbix history_uint > /tmp/history_uint.sql
Script.
echo "------------------------------------------"
echo "    2. Backing up ${ZABBIX_TABLE_NAME} Table.    "
echo "    Location : ${BACKUP_FILE_PATH}        "
echo "------------------------------------------"
mkdir -p ${BACKUP_DIR_PATH}
mysqldump -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE ${ZABBIX_TABLE_NAME} > ${BACKUP_FILE_PATH}

Step 3 Open your favourite MySQL client and create a new table

CREATE TABLE history_uint_new_20161007 LIKE history_uint;
Script.
echo "------------------------------------------------------------------"
echo "    3. Create Temp (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) Table"
echo "------------------------------------------------------------------"
echo "CREATE TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} LIKE ${ZABBIX_TABLE_NAME}; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 4 Insert the latest records from the history_uint table to the history_uint_new table

Getting epoch time in bash is simple.
Epoch for the date 3 months back.
date --date "20160707" +%s
Epoch for the current date.
date --date "20161007" +%s
Here is the output.
[ahmed@localhost ~]$ date --date "20160707" +%s
1467829800
[ahmed@localhost ~]$ date --date "20161007" +%s
1475778600
Now insert the last 3 months of data (using the temporary table and the 3-months-back epoch computed above).
INSERT INTO history_uint_new_20161007 SELECT * FROM history_uint WHERE clock > '1467829800';
Script.
echo "------------------------------------------------------------------"
echo "    4. Inserting from ${ZABBIX_TABLE_NAME} Table to Temp (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) Table"
echo "------------------------------------------------------------------"
echo "INSERT INTO ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} SELECT * FROM ${ZABBIX_TABLE_NAME} WHERE clock > '${EPOCH_3MONTHS_BACK}'; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 5 – Move history_uint to history_uint_old table

ALTER TABLE history_uint RENAME history_uint_old;
Script.
echo "------------------------------------------------------------------"
echo "    5. Rename Table ${ZABBIX_TABLE_NAME} to ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old"
echo "------------------------------------------------------------------"
echo "ALTER TABLE ${ZABBIX_TABLE_NAME} RENAME ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old;" | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 6. Move newly created history_uint_new to history_uint

ALTER TABLE history_uint_new_20161007 RENAME history_uint;
Script.
echo "------------------------------------------"
echo "    6. Rename Temp Table (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) to Original Table (${ZABBIX_TABLE_NAME})"
echo "------------------------------------------"
echo "ALTER TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} RENAME ${ZABBIX_TABLE_NAME}; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 7. [OPTIONAL] Remove Old Table.

As we have backed up the table, we no longer need it, so we can drop the old table.
DROP TABLE history_uint_old;
Script.
echo "------------------------------------------"
echo "    7. Dropping Old Table (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old), As we have already Backed it up. "
echo "------------------------------------------"
echo "DROP TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 8 – Start the Zabbix server

sudo service zabbix-server start
Script.
echo "------------------------------------------"
echo "    8. Starting Zabbix Server        "
echo "------------------------------------------"
sudo service zabbix-server start;

Step 9. [OPTIONAL] Reduce the history retention.

Additionally, you can update the items table and set each item's history retention to fewer days.
UPDATE items SET history = '15' WHERE history > '30';
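To see how many items this would affect before updating, a quick check:
SELECT COUNT(*) FROM items WHERE history > '30';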

Complete Script.

Location in Github

#!/bin/bash

THREE_MONTH_BACK_DATE=`date -d "now -3months" +%Y-%m-%d`
CURRENT_DATE=`date -d "now" +%Y-%m-%d`

EPOCH_3MONTHS_BACK=`date -d "$THREE_MONTH_BACK_DATE" +%s`
EPOCH_NOW=`date -d "$CURRENT_DATE" +%s`

ZABBIX_DATABASE="zabbix"
ZABBIX_USER="zabbix"
ZABBIX_PASSWD="zabbix"

ZABBIX_TABLE_NAME="history_uint"

BACKUP_DIR_PATH=/tmp/zabbix/zabbix_table_backup_${ZABBIX_TABLE_NAME}
BACKUP_FILE_PATH=${BACKUP_DIR_PATH}/${ZABBIX_TABLE_NAME}_${CURRENT_DATE}_${EPOCH_NOW}.sql

echo "------------------------------------------"
echo "Date to Keep Backup : $THREE_MONTH_BACK_DATE"
echo "Epoch to keep Backup : $EPOCH_3MONTHS_BACK"
echo "Today's Date : $CURRENT_DATE"
echo "Epoch For Today's Date : $EPOCH_NOW"
echo "------------------------------------------"

echo "##########################################"

echo "------------------------------------------"
echo "    1. Stopping Zabbix Server            "
echo "------------------------------------------"
sudo service zabbix-server stop; 
sleep 1

echo "------------------------------------------"
echo "    Display Tables                "
echo "------------------------------------------"
echo "show tables;" | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------"
echo "    2. Backing up ${ZABBIX_TABLE_NAME} Table.    "
echo "    Location : ${BACKUP_FILE_PATH}        "
echo "------------------------------------------"
mkdir -p ${BACKUP_DIR_PATH}
mysqldump -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE ${ZABBIX_TABLE_NAME} > ${BACKUP_FILE_PATH}
sleep 1

echo "------------------------------------------------------------------"
echo "    3. Create Temp (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) Table"
echo "------------------------------------------------------------------"
echo "CREATE TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} LIKE ${ZABBIX_TABLE_NAME}; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------------------------------"
echo "    4. Inserting from ${ZABBIX_TABLE_NAME} Table to Temp (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) Table"
echo "------------------------------------------------------------------"
echo "INSERT INTO ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} SELECT * FROM ${ZABBIX_TABLE_NAME} WHERE clock > '${EPOCH_3MONTHS_BACK}'; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------------------------------"
echo "    5. Rename Table ${ZABBIX_TABLE_NAME} to ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old"
echo "------------------------------------------------------------------"
echo "ALTER TABLE ${ZABBIX_TABLE_NAME} RENAME ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old;" | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------"
echo "    6. Rename Temp Table (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) to Original Table (${ZABBIX_TABLE_NAME})"
echo "------------------------------------------"
echo "ALTER TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} RENAME ${ZABBIX_TABLE_NAME}; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------"
echo "    7. Dropping Old Table (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old), As we have already Backed it up. "
echo "------------------------------------------"
echo "DROP TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------"
echo "    8. Starting Zabbix Server        "
echo "------------------------------------------"
sudo service zabbix-server start;

echo "##########################################"


Windows Testing Using Kitchen Chef

October 3, 2016
Kitchen-Vagrant has the capability to spin up a Windows instance for testing.
To make it work you will need the vagrant-winrm plugin installed on the workstation.

Installing vagrant-winrm

┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ vagrant plugin install vagrant-winrm
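You can confirm the plugin is present with:
┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ vagrant plugin list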
Once you have installed it, you might still get the error below.
┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ kitchen list
 ------Exception-------
 Class: Kitchen::UserError
 Message: WinRM Transport requires the vagrant-winrm Vagrant plugin to properly communicate with this Vagrant VM. Please install this plugin with: `vagrant plugin install vagrant-winrm' and try again.

 Please see .kitchen/logs/kitchen.log for more details
 Also try running `kitchen diagnose --all` for configuration

Download Windows Box.

There is a nice repo which creates Windows Vagrant boxes.
git clone https://github.com/boxcutter/windows.git
Here is the output.
┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ git clone http://ift.tt/2dmZuKC
Cloning into 'windows'...
remote: Counting objects: 2929, done.
remote: Total 2929 (delta 0), reused 0 (delta 0), pack-reused 2929
Receiving objects: 100% (2929/2929), 6.40 MiB | 1010.00 KiB/s, done.
Resolving deltas: 100% (2318/2318), done.
Checking connectivity... done.

Download and List of Available Boxes.

┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ cd windows/
┌─[ahmed][zubair-HP-ProBook][±][master ✓][~/work/windows]
└─▪ ls
AUTHORS                                win2008r2-web.json
bin                                    win2008r2-web-ssh.json
box                                    win2012-datacenter-cygwin.json
CHANGELOG.md                           win2012-datacenter.json
eval-win10x64-enterprise-cygwin.json   win2012-datacenter-ssh.json
eval-win10x64-enterprise.json          win2012r2-datacenter-cygwin.json
eval-win10x64-enterprise-ssh.json      win2012r2-datacenter.json
eval-win10x86-enterprise-cygwin.json   win2012r2-datacenter-ssh.json
eval-win10x86-enterprise.json          win2012r2-standardcore-cygwin.json
eval-win10x86-enterprise-ssh.json      win2012r2-standardcore.json
eval-win2008r2-datacenter-cygwin.json  win2012r2-standardcore-ssh.json
eval-win2008r2-datacenter.json         win2012r2-standard-cygwin.json
eval-win2008r2-datacenter-ssh.json     win2012r2-standard.json
eval-win2008r2-standard-cygwin.json    win2012r2-standard-ssh.json
eval-win2008r2-standard.json           win2012-standard-cygwin.json
eval-win2008r2-standard-ssh.json       win2012-standard.json
eval-win2012r2-datacenter-cygwin.json  win2012-standard-ssh.json
eval-win2012r2-datacenter.json         win7x64-enterprise-cygwin.json
eval-win2012r2-datacenter-ssh.json     win7x64-enterprise.json
eval-win2012r2-standard-cygwin.json    win7x64-enterprise-ssh.json
eval-win2012r2-standard.json           win7x64-pro-cygwin.json
eval-win2012r2-standard-ssh.json       win7x64-pro.json
eval-win7x64-enterprise-cygwin.json    win7x64-pro-ssh.json
eval-win7x64-enterprise.json           win7x86-enterprise-cygwin.json
eval-win7x64-enterprise-ssh.json       win7x86-enterprise.json
eval-win7x86-enterprise-cygwin.json    win7x86-enterprise-ssh.json
eval-win7x86-enterprise.json           win7x86-pro-cygwin.json
eval-win7x86-enterprise-ssh.json       win7x86-pro.json
eval-win81x64-enterprise-cygwin.json   win7x86-pro-ssh.json
eval-win81x64-enterprise.json          win81x64-enterprise-cygwin.json
eval-win81x64-enterprise-ssh.json      win81x64-enterprise.json
eval-win81x86-enterprise-cygwin.json   win81x64-enterprise-ssh.json
eval-win81x86-enterprise.json          win81x64-pro-cygwin.json
eval-win81x86-enterprise-ssh.json      win81x64-pro.json
eval-win8x64-enterprise-cygwin.json    win81x64-pro-ssh.json
eval-win8x64-enterprise.json           win81x86-enterprise-cygwin.json
eval-win8x64-enterprise-ssh.json       win81x86-enterprise.json
floppy                                 win81x86-enterprise-ssh.json
LICENSE                                win81x86-pro-cygwin.json
Makefile                               win81x86-pro.json
README.md                              win81x86-pro-ssh.json
script                                 win8x64-enterprise-cygwin.json
test                                   win8x64-enterprise.json
tpl                                    win8x64-enterprise-ssh.json
VERSION                                win8x64-pro-cygwin.json
win2008r2-datacenter-cygwin.json       win8x64-pro.json
win2008r2-datacenter.json              win8x64-pro-ssh.json
win2008r2-datacenter-ssh.json          win8x86-enterprise-cygwin.json
win2008r2-enterprise-cygwin.json       win8x86-enterprise.json
win2008r2-enterprise.json              win8x86-enterprise-ssh.json
win2008r2-enterprise-ssh.json          win8x86-pro-cygwin.json
win2008r2-standard-cygwin.json         win8x86-pro.json
win2008r2-standard.json                win8x86-pro-ssh.json
win2008r2-standard-ssh.json            wip
win2008r2-web-cygwin.json              wsim

We get an error: packer not found.

┌─[ahmed][zubair-HP-ProBook][±][master ✓][~/work/windows]
└─▪ make virtualbox/eval-win2012r2-standard
rm -rf output-virtualbox-iso
mkdir -p box/virtualbox
packer build -only=virtualbox-iso -var 'cm=nocm' -var 'version=1.0.4' -var 'update=false' -var 'headless=false' -var "shutdown_command=shutdown /s /t 10 /f /d p:4:1 /c Packer_Shutdown" -var "iso_url=http://ift.tt/1io5XVj" -var "iso_checksum=7e3f89dbff163e259ca9b0d1f078daafd2fed513" eval-win2012r2-standard.json
/bin/sh: 1: packer: not found
Makefile:428: recipe for target 'box/virtualbox/eval-win2012r2-standard-nocm-1.0.4.box' failed
make: *** [box/virtualbox

Let us install packer from HashiCorp.

┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ wget http://ift.tt/2asbbAB
--2016-09-22 11:21:14--  http://ift.tt/2asbbAB
Resolving releases.hashicorp.com (releases.hashicorp.com)... 151.101.12.69
Connecting to releases.hashicorp.com (releases.hashicorp.com)|151.101.12.69|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8985735 (8.6M) [application/zip]
Saving to: ‘packer_0.10.1_linux_amd64.zip’

packer_0.10.1_linux_ 100%[======================]   8.57M   204KB/s    in 29s

2016-09-22 11:21:44 (298 KB/s) - ‘packer_0.10.1_linux_amd64.zip’ saved [8985735/8985735]

Unzip and Install packer

Unpacking.
┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ unzip packer_0.10.1_linux_amd64.zip
Archive:  packer_0.10.1_linux_amd64.zip
  inflating: packer
┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ ls
backups    configs          others  packer_0.10.1_linux_amd64.zip  tech_documents
chef-repo  hepsi-chef-repo  packer  scripts                        windows
Copy packer to /usr/local/sbin/
┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ sudo cp packer /usr/local/sbin/
[sudo] password for ahmed:
Now we are ready to use packer
┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ packer
usage: packer [--version] [--help] command [args]

Available commands are:
    build       build image(s) from template
    fix         fixes templates from old versions of packer
    inspect     see components of a template
    push        push a template and supporting files to a Packer build service
    validate    check that a template is valid
    version     Prints the Packer version

┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ packer --version
0.10.1

Now let's build eval-win2012r2-standard.


┌─[ahmed][zubair-HP-ProBook][±][master ✓][~/work/windows]
└─▪ make virtualbox/eval-win2012r2-standard
rm -rf output-virtualbox-iso
mkdir -p box/virtualbox
packer build -only=virtualbox-iso -var 'cm=nocm' -var 'version=1.0.4' -var 'update=false' -var 'headless=false' -var "shutdown_command=shutdown /s /t 10 /f /d p:4:1 /c Packer_Shutdown" -var "iso_url=http://ift.tt/1io5XVj" -var "iso_checksum=7e3f89dbff163e259ca9b0d1f078daafd2fed513" eval-win2012r2-standard.json
virtualbox-iso output will be in this color.

==> virtualbox-iso: Cannot find "Default Guest Additions ISO" in vboxmanage output (or it is empty)
==> virtualbox-iso: Downloading or copying Guest additions checksums
    virtualbox-iso: Downloading or copying: http://ift.tt/2dMHfMU
==> virtualbox-iso: Downloading or copying Guest additions
    virtualbox-iso: Downloading or copying: http://ift.tt/2dmXrGv
    virtualbox-iso: Download progress: 7%
    virtualbox-iso: Download progress: 99%
    virtualbox-iso: Download progress: 100%
    virtualbox-iso: Download progress: 100%
    virtualbox-iso: Download progress: 100%
    virtualbox-iso: Download progress: 100%
==> virtualbox-iso: Creating floppy disk...
    virtualbox-iso: Copying: floppy/00-run-all-scripts.cmd
    virtualbox-iso: Copying: floppy/01-install-wget.cmd
    virtualbox-iso: Copying: floppy/_download.cmd
    virtualbox-iso: Copying: floppy/_packer_config.cmd
    virtualbox-iso: Copying: floppy/disablewinupdate.bat
    virtualbox-iso: Copying: floppy/eval-win2012r2-standard/Autounattend.xml
    virtualbox-iso: Copying: floppy/fixnetwork.ps1
    virtualbox-iso: Copying: floppy/install-winrm.cmd
    virtualbox-iso: Copying: floppy/oracle-cert.cer
    virtualbox-iso: Copying: floppy/passwordchange.bat
    virtualbox-iso: Copying: floppy/powerconfig.bat
    virtualbox-iso: Copying: floppy/zz-start-sshd.cmd
==> virtualbox-iso: Creating virtual machine...
==> virtualbox-iso: Creating hard drive...
==> virtualbox-iso: Attaching floppy disk...
==> virtualbox-iso: Creating forwarded port mapping for communicator (SSH, WinRM, etc) (host port 4185)
==> virtualbox-iso: Executing custom VBoxManage commands...
    virtualbox-iso: Executing: modifyvm eval-win2012r2-standard --memory 1536
    virtualbox-iso: Executing: modifyvm eval-win2012r2-standard --cpus 1
    virtualbox-iso: Executing: setextradata eval-win2012r2-standard VBoxInternal/CPUM/CMPXCHG16B 1
==> virtualbox-iso: Starting the virtual machine...
==> virtualbox-iso: Waiting 10s for boot...
==> virtualbox-iso: Typing the boot command...
==> virtualbox-iso: Waiting for WinRM to become available...
==> virtualbox-iso: Connected to WinRM!
==> virtualbox-iso: Uploading VirtualBox version info (5.0.18)
==> virtualbox-iso: Uploading VirtualBox guest additions ISO...
==> virtualbox-iso: Provisioning with windows-shell...
==> virtualbox-iso: Provisioning with shell script: script/vagrant.bat
    virtualbox-iso: == Creating "C:\Users\vagrant\AppData\Local\Temp\vagrant"
    virtualbox-iso: == Downloading "http://ift.tt/1SMPogV" to "C:\Users\vagrant\AppData\Local\Temp\vagrant\vagrant.pub"
    virtualbox-iso: WARNING: cannot verify raw.githubusercontent.com's certificate, issued by 'CN=DigiCert SHA2 High Assurance Server CA,OU=http://www.digicert.com,O=DigiCert Inc,C=US':
    virtualbox-iso: Unable to locally verify the issuer's authority.
    virtualbox-iso: 2016-09-22 13:44:20 URL:http://ift.tt/1SMPogV [409/409] - "C:/Users/vagrant/AppData/Local/Temp/vagrant/vagrant.pub" [1]
    virtualbox-iso: == Creating "C:\Users\vagrant\.ssh"
    virtualbox-iso: == Adding "C:\Users\vagrant\AppData\Local\Temp\vagrant\vagrant.pub" to "C:\Users\vagrant\.ssh\authorized_keys"
    virtualbox-iso: == Disabling account password expiration for user "vagrant"
    virtualbox-iso: Updating property(s) of '\\WIN-80PPKE0JMK0\ROOT\CIMV2:Win32_UserAccount.Domain="WIN-80PPKE0JMK0",Name="vagrant"'
    virtualbox-iso: Property(s) update successful.
    virtualbox-iso:
    virtualbox-iso: Pinging 127.0.0.1 with 32 bytes of data:
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso:
    virtualbox-iso: Ping statistics for 127.0.0.1:
    virtualbox-iso: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    virtualbox-iso: Approximate round trip times in milli-seconds:
    virtualbox-iso: Minimum = 0ms, Maximum = 0ms, Average = 0ms
    virtualbox-iso: == Script exiting with errorlevel 0
== virtualbox-iso: Provisioning with shell script: script/cmtool.bat
    virtualbox-iso: == Building box without a configuration management tool
    virtualbox-iso:
    virtualbox-iso: Pinging 127.0.0.1 with 32 bytes of data:
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso:
    virtualbox-iso: Ping statistics for 127.0.0.1:
    virtualbox-iso: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    virtualbox-iso: Approximate round trip times in milli-seconds:
    virtualbox-iso: Minimum = 0ms, Maximum = 0ms, Average = 0ms
    virtualbox-iso: == Script exiting with errorlevel 0
== virtualbox-iso: Provisioning with shell script: script/vmtool.bat
    virtualbox-iso: == Creating "C:\Users\vagrant\AppData\Local\Temp\sevenzip"
    virtualbox-iso: == Downloading "http://ift.tt/2dMHt6V" to "C:\Users\vagrant\AppData\Local\Temp\sevenzip\7z1600-x64.msi"
    virtualbox-iso: 2016-09-22 13:44:33 URL:http://ift.tt/2dmYhmE [1664000/1664000] - "C:/Users/vagrant/AppData/Local/Temp/sevenzip/7z1600-x64.msi" [1]
    virtualbox-iso: == Installing "C:\Users\vagrant\AppData\Local\Temp\sevenzip\7z1600-x64.msi"
    virtualbox-iso: == Copying "C:\Program Files\7-Zip\7z.exe" to "C:\Windows"
    virtualbox-iso: 1 file(s) copied.
    virtualbox-iso: 1 file(s) copied.
    virtualbox-iso: == Extracting the VirtualBox Guest Additions installer
    virtualbox-iso:
    virtualbox-iso: 7-Zip [64] 16.00 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-10
    virtualbox-iso:
    virtualbox-iso: Scanning the drive for archives:
    virtualbox-iso: 1 file, 58144768 bytes (56 MiB)
    virtualbox-iso:
    virtualbox-iso: Extracting archive: C:\Users\vagrant\VBoxGuestAdditions.iso
    virtualbox-iso: --
    virtualbox-iso: Path = C:\Users\vagrant\VBoxGuestAdditions.iso
    virtualbox-iso: Type = Iso
    virtualbox-iso: Physical Size = 58144768
    virtualbox-iso: Created = 2016-04-18 06:38:18
    virtualbox-iso: Modified = 2016-04-18 06:38:18
    virtualbox-iso:
    virtualbox-iso: Everything is Ok
    virtualbox-iso:
    virtualbox-iso: Size:       16169336
    virtualbox-iso: Compressed: 58144768
    virtualbox-iso: == Installing Oracle certificate to keep install silent
    virtualbox-iso: TrustedPublisher "Trusted Publishers"
    virtualbox-iso: Certificate "Oracle Corporation" added to store.
    virtualbox-iso: CertUtil: -addstore command completed successfully.
    virtualbox-iso: == Installing VirtualBox Guest Additions
    virtualbox-iso: == Script exiting with errorlevel 0
    virtualbox-iso:
    virtualbox-iso: Pinging 127.0.0.1 with 32 bytes of data:
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Could Not Find C:\Users\vagrant\AppData\Local\Temp\script.bat-25146.tmp
    virtualbox-iso:
    virtualbox-iso: Ping statistics for 127.0.0.1:
    virtualbox-iso: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    virtualbox-iso: Approximate round trip times in milli-seconds:
    virtualbox-iso: Minimum = 0ms, Maximum = 0ms, Average = 0ms
== virtualbox-iso: Provisioning with shell script: script/clean.bat
    virtualbox-iso: del /f /q /s "C:\Windows\TEMP\DMI7F57.tmp"
    virtualbox-iso: del /f /q /s "C:\Windows\TEMP\winstore.log"
    virtualbox-iso: == Cleaning "C:\Users\vagrant\AppData\Local\Temp" directories
    virtualbox-iso: == Cleaning "C:\Users\vagrant\AppData\Local\Temp" files
    virtualbox-iso: == Cleaning "C:\Windows\TEMP" directories
    virtualbox-iso: == Removing potentially corrupt recycle bin
    virtualbox-iso: == Cleaning "C:\Windows\TEMP" files
    virtualbox-iso: == Cleaning "C:\Users\vagrant"
    virtualbox-iso:
    virtualbox-iso: Pinging 127.0.0.1 with 32 bytes of data:
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso:
    virtualbox-iso: Ping statistics for 127.0.0.1:
    virtualbox-iso: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    virtualbox-iso: Approximate round trip times in milli-seconds:
    virtualbox-iso: Minimum = 0ms, Maximum = 0ms, Average = 0ms
    virtualbox-iso: == Script exiting with errorlevel 0
== virtualbox-iso: Provisioning with shell script: script/ultradefrag.bat
    virtualbox-iso: == Creating "C:\Users\vagrant\AppData\Local\Temp\ultradefrag"
    virtualbox-iso: == Downloading "http://ift.tt/2dMGmEj" to "C:\Users\vagrant\AppData\Local\Temp\ultradefrag\ultradefrag-portable-7.0.1.bin.amd64.zip"
    virtualbox-iso: http://ift.tt/2dMGmEj:
    virtualbox-iso: 2016-09-22 13:45:01 ERROR 404: Not Found.
    virtualbox-iso: == Unzipping "C:\Users\vagrant\AppData\Local\Temp\ultradefrag\ultradefrag-portable-7.0.1.bin.amd64.zip" to "C:\Users\vagrant\AppData\Local\Temp\ultradefrag"
    virtualbox-iso:
    virtualbox-iso: 7-Zip [64] 16.00 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-10
    virtualbox-iso:
    virtualbox-iso: Scanning the drive for archives:
    virtualbox-iso: 1 file, 3596965 bytes (3513 KiB)
    virtualbox-iso:
    virtualbox-iso: Extracting archive: C:\Users\vagrant\AppData\Local\Temp\ultradefrag\ultradefrag-portable-7.0.1.bin.amd64.zip
    virtualbox-iso: --
    virtualbox-iso: Path = C:\Users\vagrant\AppData\Local\Temp\ultradefrag\ultradefrag-portable-7.0.1.bin.amd64.zip
    virtualbox-iso: Type = zip
    virtualbox-iso: Physical Size = 3596965
    virtualbox-iso:
    virtualbox-iso: Everything is Ok
    virtualbox-iso:
    virtualbox-iso: Files: 4
    virtualbox-iso: Size:       2753024
    virtualbox-iso: Compressed: 3596965
    virtualbox-iso: == Running UltraDefrag on C:
    virtualbox-iso: UltraDefrag 7.0.1, Copyright (c) UltraDefrag Development Team, 2007-2016.
    virtualbox-iso: UltraDefrag comes with ABSOLUTELY NO WARRANTY. This is free software,
    virtualbox-iso: and you are welcome to redistribute it under certain conditions.
    virtualbox-iso:
    virtualbox-iso: C: defrag:   100.00% complete, 7 passes needed, fragmented/total = 4/75370
    virtualbox-iso: == Removing "C:\Users\vagrant\AppData\Local\Temp\ultradefrag"
    virtualbox-iso:
    virtualbox-iso: Pinging 127.0.0.1 with 32 bytes of data:
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso:
    virtualbox-iso: Ping statistics for 127.0.0.1:
    virtualbox-iso: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    virtualbox-iso: Approximate round trip times in milli-seconds:
    virtualbox-iso: Minimum = 0ms, Maximum = 0ms, Average = 0ms
    virtualbox-iso: == Script exiting with errorlevel 0
== virtualbox-iso: Provisioning with shell script: script/uninstall-7zip.bat
    virtualbox-iso: == Uninstalling 7zip
    virtualbox-iso: == WARNING: Directory not found: "C:\Users\vagrant\AppData\Local\Temp\sevenzip"
    virtualbox-iso: == Removing "C:\Program Files\7-Zip"
    virtualbox-iso:
    virtualbox-iso: Pinging 127.0.0.1 with 32 bytes of data:
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time1ms TTL=128
    virtualbox-iso:
    virtualbox-iso: Ping statistics for 127.0.0.1:
    virtualbox-iso: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    virtualbox-iso: Approximate round trip times in milli-seconds:
    virtualbox-iso: Minimum = 0ms, Maximum = 0ms, Average = 0ms
    virtualbox-iso: == Script exiting with errorlevel 0
== virtualbox-iso: Provisioning with shell script: script/sdelete.bat
    virtualbox-iso: == Creating "C:\Users\vagrant\AppData\Local\Temp\sdelete"
    virtualbox-iso: == Downloading "http://ift.tt/2dmYAhm" to "C:\Users\vagrant\AppData\Local\Temp\sdelete\sdelete.exe"
    virtualbox-iso: WARNING: cannot verify live.sysinternals.com's certificate, issued by 'CN=Microsoft IT SSL SHA2,OU=Microsoft IT,O=Microsoft Corporation,L=Redmond,ST=Washington,C=US':
    virtualbox-iso: Unable to locally verify the issuer's authority.
    virtualbox-iso: The operation completed successfully.
    virtualbox-iso: 2016-09-22 13:59:14 URL:http://ift.tt/2dMHAzc [151200/151200] - "C:/Users/vagrant/AppData/Local/Temp/sdelete/sdelete.exe" [1]
    virtualbox-iso: == Running SDelete on C:
    virtualbox-iso:
    virtualbox-iso: SDelete v2.0 - Secure file delete
    virtualbox-iso: Copyright (C) 1999-2016 Mark Russinovich
    virtualbox-iso: Sysinternals - www.sysinternals.com
    virtualbox-iso:
    virtualbox-iso: SDelete is set for 1 pass.

Adding the Box to Vagrant

┌─[ahmed][zubair-HP-ProBook][±][master ✓][~/work/windows]
└─▪ cd box/virtualbox/
┌─[ahmed][zubair-HP-ProBook][±][master ✓][~/work/windows/box/virtualbox]
└─▪ ls
eval-win2012r2-standard-nocm-1.0.4.box
┌─[ahmed][zubair-HP-ProBook][±][master ✓][~/work/windows/box/virtualbox]
└─▪ vagrant box add windows-2012r2 eval-win2012r2-standard-nocm-1.0.4.box
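
To confirm the box was registered, list the boxes Vagrant knows about (the output format shown here is approximate):

┌─[ahmed][zubair-HP-ProBook][±][master ✓][~/work/windows/box/virtualbox]
└─▪ vagrant box list
windows-2012r2 (virtualbox, 0)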

Update the .kitchen.yml in your cookbook.

---
driver:
  name: vagrant

provisioner:
  name: chef_zero

platforms:
  - name: windows-2012r2

suites:
  - name: default
    run_list:
      - recipe[starter-windows-cookbook::default]
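
Depending on your Test Kitchen version, you may also need to tell it to talk to the Windows guest over WinRM. A minimal sketch; the vagrant/vagrant credentials are the usual packer-template defaults and are an assumption here:

transport:
  name: winrm
  username: vagrant
  password: vagrant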

List VM

Command
kitchen list

VM Details

┌─[ahmed][zubair-HP-ProBook][±][master U:3 ?:2 ✗][~/work/chef-repo/cookbooks/nagios_nrpe_deploy]
└─▪ kitchen list
Instance                Driver   Provisioner  Verifier  Transport  Last Action
windows-2012r2          Vagrant  ChefZero     Busser    Winrm      Not Created

Testing the Windows VM – using the command below.

kitchen test
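kitchen test runs the full create, converge, verify, destroy cycle in one shot. While iterating on the cookbook you can run the stages individually, for example:

kitchen create windows-2012r2     # boot the VM from the box
kitchen converge windows-2012r2   # run the Chef run_list
kitchen verify windows-2012r2     # run the test suite
kitchen destroy windows-2012r2    # tear the VM down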
We are done!!! Enjoy Windows testing.


Package Installer for Cygwin [apt-cyg].

October 2, 2016
After a long time I was back on my Windows machine and wanted to make it feel more like my Linux machine. So I installed what everyone else does: Cygwin.
To my surprise, my custom .bashrc and .vimrc worked without any issues. Good!! With the bashrc and vimrc updated, we are back to Linux... almost 🙂
My custom Linux environment – howto.
Then I realized there was no way to install packages from the Cygwin terminal itself.
Then I found the script below, apt-cyg, which is really nice.
Package Installer – apt-cyg http://ift.tt/2dI1yyY

Installation

apt-cyg is a simple script; copy the script below to your home directory on Cygwin.
Here is the link http://ift.tt/2djAIdN
Execute the command below.
install apt-cyg /bin
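If you prefer to fetch it entirely from the terminal, a sketch (the raw GitHub path is an assumption; use the link above if it differs, and make sure wget came with your Cygwin install):
wget https://raw.githubusercontent.com/transcode-open/apt-cyg/master/apt-cyg   # assumed upstream location
install apt-cyg /bin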
Now we can use it – example usage of apt-cyg:
apt-cyg install nano
apt-cyg install lynx
Output
┌─[Zubair][AHMD-WRK-HORSE][~]
└─▪ apt-cyg install lynx
Installing lynx
--2016-09-28 12:49:39--  http://cygwin.mirror.constant.com//x86_64/release/lynx/lynx-2.8.7-2.tar.bz2
Resolving cygwin.mirror.constant.com (cygwin.mirror.constant.com)... 108.61.5.83
Connecting to cygwin.mirror.constant.com (cygwin.mirror.constant.com)|108.61.5.83|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1746879 (1.7M) [application/octet-stream]
Saving to: ‘lynx-2.8.7-2.tar.bz2’

lynx-2.8.7-2.tar.bz2           100%[==================================>]   1.67M   181KB/s    in 12s

2016-09-28 12:49:52 (146 KB/s) - ‘lynx-2.8.7-2.tar.bz2’ saved [1746879/1746879]

lynx-2.8.7-2.tar.bz2: OK
Unpacking...
Package lynx requires the following packages, installing:
bash cygwin libiconv2 libintl8 libncursesw10 libopenssl100 zlib0
Package bash is already installed, skipping
Package cygwin is already installed, skipping
Package libiconv2 is already installed, skipping
Package libintl8 is already installed, skipping
Package libncursesw10 is already installed, skipping
Package libopenssl100 is already installed, skipping
Package zlib0 is already installed, skipping
Running /etc/postinstall/lynx.sh
Package lynx installed
Now we are good!!!


Issues – Monitoring MongoDB using Nagios XI.

October 1, 2016
Monitoring MongoDB using Nagios XI is straightforward, but you might hit a few issues while setting it up.
Here are a few issues which might come up with MongoDB version 3.

Issues getting monitoring data in Nagios.

1. 'ConnectionFailure' object has no attribute 'strip'

[ahmed@localhost libexec]$ ./check_mongodb.py -H 192.168.94.137 -P 27017 -u admin -p admin
Traceback (most recent call last):
  File "./check_mongodb.py", line 1372, in <module>
    sys.exit(main(sys.argv[1:]))
  File "./check_mongodb.py", line 196, in main
    err, con = mongo_connect(host, port, ssl, user, passwd, replicaset)
  File "./check_mongodb.py", line 294, in mongo_connect
    return exit_with_general_critical(e), None
  File "./check_mongodb.py", line 310, in exit_with_general_critical
    if e.strip() == "not master":
AttributeError: 'ConnectionFailure' object has no attribute 'strip'
Solution.
e.strip() expects e to be a string, which is not always the case, so remove strip(). Change the code below at line 310.
  else:
      if e.strip() == "not master":
          print "UNKNOWN - Could not get data from server:", e
          return 3
to
  else:
      if e == "not master":
          print "UNKNOWN - Could not get data from server:", e
          return 3
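Alternatively, keep the whitespace handling and just coerce the exception to a string first (an equivalent sketch for the same spot in the script):
  else:
      # str() makes strip() safe even when e is a ConnectionFailure object
      if str(e).strip() == "not master":
          print "UNKNOWN - Could not get data from server:", e
          return 3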
After the change you will at least get an error that gives you more information.
[ahmed@localhost libexec]$ ./check_mongodb_2.py -H 192.168.94.138 -P 27017 -u admin -p admin1 -A databases -W 5 -C 10
CRITICAL - General MongoDB Error: command SON([('authenticate', 1), ('user', u'admin'), ('nonce', u'37a502d665186449'), ('key', u'd8c683f98a5e720c28a8007018ed7414')]) failed: auth failed
Next we will try to resolve the auth failure above.

2. Executing the command from the Nagios server.

[ahmed@localhost libexec]$ ./check_mongodb_2.py -H 192.168.94.138 -P 27017 -u admin -p admin1 -A databases -W 5 -C 10
CRITICAL - General MongoDB Error: command SON([('authenticate', 1), ('user', u'admin'), ('nonce', u'42110dc29ee7fe6b'), ('key', u'827a2b0e4af97e88560800ab86b04e57')]) failed: auth failed

On the MongoDB server.

Checking the MongoDB server logs shows that authentication failed because the MONGODB-CR credentials are missing in the user document:
2016-09-14T19:11:12.142-0700 I ACCESS   [conn114] Successfully  authenticated as principal admin on admin
2016-09-14T19:11:32.892-0700 I NETWORK  [initandlisten] connection accepted from  192.168.94.130:48657 #115 (2 connections now open)
2016-09-14T19:11:32.894-0700 I ACCESS   [conn115]  authenticate db: admin { authenticate: 1, user: "admin", nonce: "xxx", key: "xxx" }
2016-09-14T19:11:32.894-0700 I ACCESS   [conn115] Failed to authenticate admin@admin with mechanism MONGODB-CR: AuthenticationFailed: MONGODB-CR credentials missing in the user document
2016-09-14T19:11:32.895-0700 I NETWORK  [conn115] end connection 192.168.94.130:48657 (1 connection now open)
2016-09-14T19:11:54.283-0700 I NETWORK  [initandlisten] connection accepted from 192.168.94.130:48663 #116 (2 connections now open)
2016-09-14T19:11:54.284-0700 I NETWORK  [conn116] end connection 192.168.94.130:48663 (1 connection now open)
2016-09-14T19:12:07.860-0700 I NETWORK  [initandlisten] connection accepted from 192.168.94.130:48666 #117 (2 connections now open)
2016-09-14T19:12:07.861-0700 I ACCESS   [conn117] Unauthorized: not authorized on admin to execute command { listDatabases: 1 }
Solution.
  1. Delete existing users on the database if they were already created.
  2. Modify the collection admin.system.version so that the authSchema currentVersion is 3 instead of 5.
  3. Version 3 uses MONGODB-CR.
  4. Recreate your users on the databases.
NOTE : Do not do this on a PRODUCTION environment; use update instead (see the sketch after the commands below) and try it on a test database first.
mongo
use admin
db.system.users.remove({})
db.system.version.remove({})
db.system.version.insert({ "_id" : "authSchema", "currentVersion" : 3 })
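The update-based variant mentioned in the note flips the schema version in place instead of dropping and re-inserting the version document (a sketch; you still need to recreate the users afterwards, and try it on a test database first):
mongo
use admin
// update the existing authSchema document rather than removing the collection
db.system.version.update({ "_id" : "authSchema" }, { $set : { "currentVersion" : 3 } })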
More Details Here:
