
Nagios – Service Group Summary ERROR

October 13, 2016
We were working on Nagios and found that, after our migration, the service group summary was not working.
You might see one of the errors below on the screen; the solution is the same for both issues.

Problem 1.

Error: Could not open CGI config file '/usr/local/nagios/etc/cgi.cfg' for reading

Problem 2.

Nagios: It appears as though you do not have permission to view information for any of the hosts you requested…

Solution.

Update /usr/local/nagios/etc/cgi.cfg to the configuration below, then restart the Nagios service.
# MODIFIED
default_statusmap_layout=6

# UNMODIFIED
action_url_target=_blank
authorized_for_all_host_commands=nagiosadmin
authorized_for_all_hosts=nagiosadmin
authorized_for_all_service_commands=nagiosadmin
authorized_for_all_services=nagiosadmin
authorized_for_configuration_information=nagiosadmin
authorized_for_system_commands=nagiosadmin
authorized_for_system_information=nagiosadmin
default_statuswrl_layout=4
escape_html_tags=1
lock_author_names=1
main_config_file=/usr/local/nagios/etc/nagios.cfg
notes_url_target=_blank
physical_html_path=/usr/local/nagios/share
ping_syntax=/bin/ping -n -U -c 5 $HOSTADDRESS$
refresh_rate=90
show_context_help=0
url_html_path=/nagios
use_authentication=1
use_pending_states=1
use_ssl_authentication=0

Steps to make the above change.

  1. Extract the installation archive.
  2. Find the cgi.cfg configuration file.
  3. Take a backup of the original file.
  4. Copy the cgi.cfg file to /usr/local/nagios/etc/cgi.cfg location.
  5. Restart nagios services.
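Steps 3 and 4 can be sketched as a small shell helper. The function below is illustrative (it is not from the original post), and the paths in the usage comment assume the layout shown in the following sections:

```shell
#!/bin/bash
# replace_cgi_cfg SRC LIVE: back up the live cgi.cfg, then overwrite it
# with the copy shipped in the extracted archive (steps 3 and 4 above).
replace_cgi_cfg() {
  local src=$1 live=$2
  cp "$live" "$live.org"   # step 3: keep the original for rollback
  cp "$src" "$live"        # step 4: install the modified cgi.cfg
}

# usage on the Nagios server (paths as in this post):
# replace_cgi_cfg ./subcomponents/nagioscore/mods/cfg/cgi.cfg /usr/local/nagios/etc/cgi.cfg
# service httpd restart; service nagios restart   # step 5
```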

1. Extract the installation archive.

[root@nagiosserver nagios_download]# tar xvf xi-5.2.9.tar.gz 
[root@nagiosserver nagios_download]# cd nagiosxi/
[root@nagiosserver nagiosxi]# ls
0-repos            5-sudoers        cpan                   get-os-info            install-sourceguardian-extension.sh      rpmupgrade                vmsetup
10-phplimits       6-firewall       dashlets.txt           get-version            install-sudoers                          sourceguardian            wizards.txt
11-sourceguardian  7-sendmail       D-chkconfigalldaemons  init-auditlog          install-templates                        subcomponents             xi-sys.cfg
12-mrtg            8-selinux        debianmods             init-mysql             licenses                                 susemods                  xivar
13-cacti           9-dbbackups      E-importnagiosql       init.sh                nagiosxi                                 tools                     Z-webroot
14-timezone        A-subcomponents  fedoramods             init-xidb              nagiosxi-deps-5.2.9-1.noarch.rpm         ubuntumods
1-prereqs          B-installxi      fix-nagiosadmin        install-2012-prereqs   nagiosxi-deps-el7-5.2.9-1.noarch.rpm     uninstall-crontab-nagios
2-usersgroups      C-cronjobs       F-startdaemons         install-html           nagiosxi-deps-suse11-5.2.9-1.noarch.rpm  uninstall-crontab-root
3-dbservers        CHANGELOG.txt    fullinstall            install-nagiosxi-init  packages                                 upgrade
4-services         components.txt   functions.sh           install-pnptemplates   rpminstall                               verify-prereqs.php

2. Find the cgi.cfg configuration file.

[root@nagiosserver nagiosxi]# find . -name "cgi.cfg" -print
./subcomponents/nagioscore/mods/cfg/cgi.cfg
[root@nagiosserver nagiosxi]# cat ./subcomponents/nagioscore/mods/cfg/cgi.cfg
# MODIFIED
default_statusmap_layout=6

# UNMODIFIED
action_url_target=_blank
authorized_for_all_host_commands=nagiosadmin
authorized_for_all_hosts=nagiosadmin
authorized_for_all_service_commands=nagiosadmin
authorized_for_all_services=nagiosadmin
authorized_for_configuration_information=nagiosadmin
authorized_for_system_commands=nagiosadmin
authorized_for_system_information=nagiosadmin
default_statuswrl_layout=4
escape_html_tags=1
lock_author_names=1
main_config_file=/usr/local/nagios/etc/nagios.cfg
notes_url_target=_blank
physical_html_path=/usr/local/nagios/share
ping_syntax=/bin/ping -n -U -c 5 $HOSTADDRESS$
refresh_rate=90
show_context_help=0
url_html_path=/nagios
use_authentication=1
use_pending_states=1
use_ssl_authentication=0

3. Take a backup of the original file.

[root@nagiosserver nagiosxi]# cp /usr/local/nagios/etc/cgi.cfg /usr/local/nagios/etc/cgi.cfg.org

4. Copy the cgi.cfg file to /usr/local/nagios/etc/cgi.cfg location.

[root@nagiosserver nagiosxi]# cp ./subcomponents/nagioscore/mods/cfg/cgi.cfg /usr/local/nagios/etc/cgi.cfg

5. Restart nagios services.

[root@nagiosserver nagiosxi]# service httpd restart; service nagios restart
Stopping httpd:                                            [  OK  ]
Starting httpd:                                            [  OK  ]
Running configuration check...
Stopping nagios:. done.
Starting nagios: done.
Resolved

Configuration file with explanation.

Location : cgi.cfg.in
#################################################################
#
# CGI.CFG - Sample CGI Configuration File for Nagios @VERSION@
#
#
#################################################################

# MAIN CONFIGURATION FILE
# This tells the CGIs where to find your main configuration file.
# The CGIs will read the main and host config files for any other
# data they might need.

main_config_file=@sysconfdir@/nagios.cfg

# PHYSICAL HTML PATH
# This is the path where the HTML files for Nagios reside.  This
# value is used to locate the logo images needed by the statusmap
# and statuswrl CGIs.

physical_html_path=@datadir@

# URL HTML PATH
# This is the path portion of the URL that corresponds to the
# physical location of the Nagios HTML files (as defined above).
# This value is used by the CGIs to locate the online documentation
# and graphics.  If you access the Nagios pages with an URL like
# http://ift.tt/2dna5Vv, this value should be '/nagios'
# (without the quotes).

url_html_path=@htmurl@

# CONTEXT-SENSITIVE HELP
# This option determines whether or not a context-sensitive
# help icon will be displayed for most of the CGIs.
# Values: 0 = disables context-sensitive help
#         1 = enables context-sensitive help

show_context_help=0

# PENDING STATES OPTION
# This option determines what states should be displayed in the web
# interface for hosts/services that have not yet been checked.
# Values: 0 = leave hosts/services that have not been checked yet in their original state
#         1 = mark hosts/services that have not been checked yet as PENDING

use_pending_states=1

# AUTHENTICATION USAGE
# This option controls whether or not the CGIs will use any 
# authentication when displaying host and service information, as
# well as committing commands to Nagios for processing.  
#
# Read the HTML documentation to learn how the authorization works!
#
# NOTE: It is a really *bad* idea to disable authorization, unless
# you plan on removing the command CGI (cmd.cgi)!  Failure to do
# so will leave you wide open to kiddies messing with Nagios and
# possibly hitting you with a denial of service attack by filling up
# your drive by continuously writing to your command file!
#
# Setting this value to 0 will cause the CGIs to *not* use
# authentication (bad idea), while any other value will make them
# use the authentication functions (the default).

use_authentication=1

# x509 CERT AUTHENTICATION
# When enabled, this option allows you to use x509 cert (SSL)
# authentication in the CGIs.  This is an advanced option and should
# not be enabled unless you know what you're doing.

use_ssl_authentication=0

# DEFAULT USER
# Setting this variable will define a default user name that can
# access pages without authentication.  This allows people within a
# secure domain (i.e., behind a firewall) to see the current status
# without authenticating.  You may want to use this to avoid basic
# authentication if you are not using a secure server since basic
# authentication transmits passwords in the clear.
#
# Important:  Do not define a default username unless you are
# running a secure web server and are sure that everyone who has
# access to the CGIs has been authenticated in some manner!  If you
# define this variable, anyone who has not authenticated to the web
# server will inherit all rights you assign to this user!

#default_user_name=guest

# SYSTEM/PROCESS INFORMATION ACCESS
# This option is a comma-delimited list of all usernames that
# have access to viewing the Nagios process information as
# provided by the Extended Information CGI (extinfo.cgi).  By
# default, *no one* has access to this unless you choose to
# not use authorization.  You may use an asterisk (*) to
# authorize any user who has authenticated to the web server.

authorized_for_system_information=nagiosadmin

# CONFIGURATION INFORMATION ACCESS
# This option is a comma-delimited list of all usernames that
# can view ALL configuration information (hosts, commands, etc).
# By default, users can only view configuration information
# for the hosts and services they are contacts for. You may use
# an asterisk (*) to authorize any user who has authenticated
# to the web server.

authorized_for_configuration_information=nagiosadmin

# SYSTEM/PROCESS COMMAND ACCESS
# This option is a comma-delimited list of all usernames that
# can issue shutdown and restart commands to Nagios via the
# command CGI (cmd.cgi).  Users in this list can also change
# the program mode to active or standby. By default, *no one*
# has access to this unless you choose to not use authorization.
# You may use an asterisk (*) to authorize any user who has
# authenticated to the web server.

authorized_for_system_commands=nagiosadmin

# GLOBAL HOST/SERVICE VIEW ACCESS
# These two options are comma-delimited lists of all usernames that
# can view information for all hosts and services that are being
# monitored.  By default, users can only view information
# for hosts or services that they are contacts for (unless you
# choose to not use authorization). You may use an asterisk (*)
# to authorize any user who has authenticated to the web server.

authorized_for_all_services=nagiosadmin
authorized_for_all_hosts=nagiosadmin

# GLOBAL HOST/SERVICE COMMAND ACCESS
# These two options are comma-delimited lists of all usernames that
# can issue host or service related commands via the command
# CGI (cmd.cgi) for all hosts and services that are being monitored. 
# By default, users can only issue commands for hosts or services 
# that they are contacts for (unless you choose to not use 
# authorization).  You may use an asterisk (*) to authorize any
# user who has authenticated to the web server.

authorized_for_all_service_commands=nagiosadmin
authorized_for_all_host_commands=nagiosadmin

# READ-ONLY USERS
# A comma-delimited list of usernames that have read-only rights in
# the CGIs.  This will block any service or host commands normally shown
# on the extinfo CGI pages.  It will also block comments from being shown
# to read-only users.

#authorized_for_read_only=user1,user2

# STATUSMAP BACKGROUND IMAGE
# This option allows you to specify an image to be used as a 
# background in the statusmap CGI.  It is assumed that the image
# resides in the HTML images path (i.e. /usr/local/nagios/share/images).
# This path is automatically determined by appending "/images"
# to the path specified by the 'physical_html_path' directive.
# Note:  The image file may be in GIF, PNG, JPEG, or GD2 format.
# However, I recommend that you convert your image to GD2 format
# (uncompressed), as this will cause less CPU load when the CGI
# generates the image.

#statusmap_background_image=smbackground.gd2

# STATUSMAP TRANSPARENCY INDEX COLOR
# These options set the r,g,b values of the background color used by the statusmap CGI,
# so normal browsers that can't show real png transparency set the desired color as
# a background color instead (to make it look pretty).  
# Defaults to white: (R,G,B) = (255,255,255).

#color_transparency_index_r=255
#color_transparency_index_g=255
#color_transparency_index_b=255

# DEFAULT STATUSMAP LAYOUT METHOD
# This option allows you to specify the default layout method
# the statusmap CGI should use for drawing hosts.  If you do
# not use this option, the default is to use user-defined
# coordinates.  Valid options are as follows:
#    0 = User-defined coordinates
#    1 = Depth layers
#    2 = Collapsed tree
#    3 = Balanced tree
#    4 = Circular
#    5 = Circular (Marked Up)

default_statusmap_layout=5

# DEFAULT STATUSWRL LAYOUT METHOD
# This option allows you to specify the default layout method
# the statuswrl (VRML) CGI should use for drawing hosts.  If you
# do not use this option, the default is to use user-defined
# coordinates.  Valid options are as follows:
#    0 = User-defined coordinates
#    2 = Collapsed tree
#    3 = Balanced tree
#    4 = Circular

default_statuswrl_layout=4

# STATUSWRL INCLUDE
# This option allows you to include your own objects in the 
# generated VRML world.  It is assumed that the file
# resides in the HTML path (i.e. /usr/local/nagios/share).

#statuswrl_include=myworld.wrl

# PING SYNTAX
# This option determines what syntax should be used when
# attempting to ping a host from the WAP interface (using
# the statuswml CGI).  You must include the full path to
# the ping binary, along with all required options.  The
# $HOSTADDRESS$ macro is substituted with the address of
# the host before the command is executed.
# Please note that the syntax for the ping binary is
# notorious for being different on virtually every *NIX
# OS and distribution, so you may have to tweak this to
# work on your system.

ping_syntax=/bin/ping -n -U -c 5 $HOSTADDRESS$

# REFRESH RATE
# This option allows you to specify the refresh rate in seconds
# of various CGIs (status, statusmap, extinfo, and outages).  

refresh_rate=90

# DEFAULT PAGE LIMIT
# This option allows you to specify the default number of results 
# displayed on the status.cgi.  This number can be adjusted from
# within the UI after the initial page load. Setting this to 0
# will show all results.  

result_limit=100

# ESCAPE HTML TAGS
# This option determines whether HTML tags in host and service
# status output is escaped in the web interface.  If enabled,
# your plugin output will not be able to contain clickable links.

escape_html_tags=1

# SOUND OPTIONS
# These options allow you to specify an optional audio file
# that should be played in your browser window when there are
# problems on the network.  The audio files are used only in
# the status CGI.  Only the sound for the most critical problem
# will be played.  Order of importance (higher to lower) is as
# follows: unreachable hosts, down hosts, critical services,
# warning services, and unknown services. If there are no
# visible problems, the sound file optionally specified by
# 'normal_sound' variable will be played.
#
#
# <varname>=<sound_file>
#
# Note: All audio files must be placed in the /media subdirectory
# under the HTML path (i.e. /usr/local/nagios/share/media/).

#host_unreachable_sound=hostdown.wav
#host_down_sound=hostdown.wav
#service_critical_sound=critical.wav
#service_warning_sound=warning.wav
#service_unknown_sound=warning.wav
#normal_sound=noproblem.wav

# URL TARGET FRAMES
# These options determine the target frames in which notes and 
# action URLs will open.

action_url_target=_blank
notes_url_target=_blank

# LOCK AUTHOR NAMES OPTION
# This option determines whether users can change the author name 
# when submitting comments, scheduling downtime.  If disabled, the 
# author names will be locked into their contact name, as defined in Nagios.
# Values: 0 = allow editing author names
#         1 = lock author names (disallow editing)

lock_author_names=1

# SPLUNK INTEGRATION OPTIONS
# These options allow you to enable integration with Splunk
# in the web interface.  If enabled, you'll be presented with
# "Splunk It" links in various places in the CGIs (log file,
# alert history, host/service detail, etc).  Useful if you're
# trying to research why a particular problem occurred.
# For more information on Splunk, visit http://www.splunk.com/

# This option determines whether the Splunk integration is enabled
# Values: 0 = disable Splunk integration
#         1 = enable Splunk integration

#enable_splunk_integration=1

# This option should be the URL used to access your instance of Splunk

#splunk_url=http://127.0.0.1:8000/

# NAVIGATION BAR SEARCH OPTIONS
# The following options allow you to configure the navbar search. Default
# is to search for hostnames. With enabled navbar_search_for_addresses,
# the navbar search queries IP addresses as well. It's also possible
# to enable search for aliases by setting navbar_search_for_aliases=1.

navbar_search_for_addresses=1
navbar_search_for_aliases=1


Zabbix History Table Clean Up

October 12, 2016
The Zabbix history table gets really big; if you want to clean it up, you can do so using the steps below.
  1. Stop the Zabbix server.
  2. Take a table backup – just in case.
  3. Create a temporary table.
  4. Populate the temporary table with the data to keep, up to a specific date (using epoch time).
  5. Rename the old table out of the way.
  6. Rename the new temporary table to the original table name.
  7. Drop the old table. (Optional)
  8. Restart Zabbix.
Since this is not an official procedure, use it at your own risk; it has worked for me.
Here is another post which will help in reducing the size of history tables – http://ift.tt/2dMfqJ5
Zabbix Version : Zabbix v2.4
Make sure MySQL 5.1 is configured to use InnoDB with innodb_file_per_table=ON.
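To review the whole table-swap before touching the database, the SQL for steps 3–7 can be generated and inspected first. The helper below is not part of the original procedure; it is a sketch assuming the same naming scheme as the script at the end of this post:

```shell
# gen_cleanup_sql TABLE CUTOFF_EPOCH TAG: print the SQL for steps 3-7
# (create temp table, copy recent rows, swap names, drop the old copy).
gen_cleanup_sql() {
  local tbl=$1 cutoff=$2 tag=$3
  cat <<SQL
CREATE TABLE ${tbl}_${tag} LIKE ${tbl};
INSERT INTO ${tbl}_${tag} SELECT * FROM ${tbl} WHERE clock > '${cutoff}';
ALTER TABLE ${tbl} RENAME ${tbl}_${tag}_old;
ALTER TABLE ${tbl}_${tag} RENAME ${tbl};
DROP TABLE ${tbl}_${tag}_old;
SQL
}

# inspect first, then pipe to mysql when satisfied:
gen_cleanup_sql history_uint 1467829800 20161007
# gen_cleanup_sql history_uint 1467829800 20161007 | mysql -uzabbix -pzabbix zabbix
```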

Step 1 Stop the Zabbix server

sudo service zabbix-server stop
Script.
echo "------------------------------------------"
echo "    1. Stopping Zabbix Server            "
echo "------------------------------------------"
sudo service zabbix-server stop;

Step 2 Take a Table Backup.

mysqldump -uzabbix -pzabbix zabbix history_uint > /tmp/history_uint.sql
Script.
echo "------------------------------------------"
echo "    2. Backing up ${ZABBIX_TABLE_NAME} Table.    "
echo "    Location : ${BACKUP_FILE_PATH}        "
echo "------------------------------------------"
mkdir -p ${BACKUP_DIR_PATH}
mysqldump -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE ${ZABBIX_TABLE_NAME} > ${BACKUP_FILE_PATH}

Step 3 Open your favourite MySQL client and create a new table

CREATE TABLE history_uint_new_20161007 LIKE history_uint;
Script.
echo "------------------------------------------------------------------"
echo "    3. Create Temp (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) Table"
echo "------------------------------------------------------------------"
echo "CREATE TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} LIKE ${ZABBIX_TABLE_NAME}; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 4 Insert the latest records from the history_uint table into the new history_uint_new_20161007 table

Getting epoch time in bash is simple.
Date 3 months ago.
date --date "20160707" +%s
Current date.
date --date "20161007" +%s
Here is the output.
[ahmed@localhost ~]$ date --date "20160707" +%s
1467829800
[ahmed@localhost ~]$ date --date "20161007" +%s
1475778600
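In the script form, the cutoff is computed relative to today instead of being hard-coded; a quick sketch using GNU date:

```shell
# Epoch seconds for "now" and for three months back (GNU date syntax).
NOW=$(date +%s)
THREE_MONTHS_BACK=$(date -d "now -3months" +%s)
echo "keep rows with clock > ${THREE_MONTHS_BACK} (now = ${NOW})"

# Sanity check: convert an epoch back to a readable date.
date -d @1467829800 +%Y-%m-%d   # 2016-07-06 or 2016-07-07, depending on timezone
```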
Now insert the last 3 months of data.
INSERT INTO history_uint_new_20161007 SELECT * FROM history_uint WHERE clock > '1467829800';
Script.
echo "------------------------------------------------------------------"
echo "    4. Inserting from ${ZABBIX_TABLE_NAME} Table to Temp (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) Table"
echo "------------------------------------------------------------------"
echo "INSERT INTO ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} SELECT * FROM ${ZABBIX_TABLE_NAME} WHERE clock > '${EPOCH_3MONTHS_BACK}'; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 5 – Rename history_uint to the history_uint_old table

ALTER TABLE history_uint RENAME history_uint_old;
Script.
echo "------------------------------------------------------------------"
echo "    5. Rename Table ${ZABBIX_TABLE_NAME} to ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old"
echo "------------------------------------------------------------------"
echo "ALTER TABLE ${ZABBIX_TABLE_NAME} RENAME ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old;" | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 6. Rename the newly created history_uint_new_20161007 table to history_uint

ALTER TABLE history_uint_new_20161007 RENAME history_uint;
Script.
echo "------------------------------------------"
echo "    6. Rename Temp Table (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) to Original Table (${ZABBIX_TABLE_NAME})"
echo "------------------------------------------"
echo "ALTER TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} RENAME ${ZABBIX_TABLE_NAME}; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 7. [OPTIONAL] Remove Old Table.

As we have backed up the table, we no longer need it, so we can drop the old table.
DROP TABLE history_uint_old;
Script.
echo "------------------------------------------"
echo "    7. Dropping Old Table (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old), As we have already Backed it up. "
echo "------------------------------------------"
echo "DROP TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;

Step 8 – Start the Zabbix server

sudo service zabbix-server start
Script.
echo "------------------------------------------"
echo "    8. Starting Zabbix Server        "
echo "------------------------------------------"
sudo service zabbix-server start;

Step 9. [OPTIONAL] Reduce the history retention.

Additionally, you can update the items table so that items keep fewer days of history.
UPDATE items SET history = '15' WHERE history > '30';

Complete Script.

Location in Github

#!/bin/bash

THREE_MONTH_BACK_DATE=`date -d "now -3months" +%Y-%m-%d`
CURRENT_DATE=`date -d "now" +%Y-%m-%d`

EPOCH_3MONTHS_BACK=`date -d "$THREE_MONTH_BACK_DATE" +%s`
EPOCH_NOW=`date -d "$CURRENT_DATE" +%s`

ZABBIX_DATABASE="zabbix"
ZABBIX_USER="zabbix"
ZABBIX_PASSWD="zabbix"

ZABBIX_TABLE_NAME="history_uint"

BACKUP_DIR_PATH=/tmp/zabbix/zabbix_table_backup_${ZABBIX_TABLE_NAME}
BACKUP_FILE_PATH=${BACKUP_DIR_PATH}/${ZABBIX_TABLE_NAME}_${CURRENT_DATE}_${EPOCH_NOW}.sql

echo "------------------------------------------"
echo "Date to Keep Backup : $THREE_MONTH_BACK_DATE"
echo "Epoch to keep Backup : $EPOCH_3MONTHS_BACK"
echo "Today's Date : $CURRENT_DATE"
echo "Epoch For Today's Date : $EPOCH_NOW"
echo "------------------------------------------"

echo "##########################################"

echo "------------------------------------------"
echo "    1. Stopping Zabbix Server            "
echo "------------------------------------------"
sudo service zabbix-server stop; 
sleep 1

echo "------------------------------------------"
echo "    Display Tables                "
echo "------------------------------------------"
echo "show tables;" | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------"
echo "    2. Backing up ${ZABBIX_TABLE_NAME} Table.    "
echo "    Location : ${BACKUP_FILE_PATH}        "
echo "------------------------------------------"
mkdir -p ${BACKUP_DIR_PATH}
mysqldump -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE ${ZABBIX_TABLE_NAME} > ${BACKUP_FILE_PATH}
sleep 1

echo "------------------------------------------------------------------"
echo "    3. Create Temp (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) Table"
echo "------------------------------------------------------------------"
echo "CREATE TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} LIKE ${ZABBIX_TABLE_NAME}; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------------------------------"
echo "    4. Inserting from ${ZABBIX_TABLE_NAME} Table to Temp (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) Table"
echo "------------------------------------------------------------------"
echo "INSERT INTO ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} SELECT * FROM ${ZABBIX_TABLE_NAME} WHERE clock > '${EPOCH_3MONTHS_BACK}'; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------------------------------"
echo "    5. Rename Table ${ZABBIX_TABLE_NAME} to ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old"
echo "------------------------------------------------------------------"
echo "ALTER TABLE ${ZABBIX_TABLE_NAME} RENAME ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old;" | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------"
echo "    6. Rename Temp Table (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}) to Original Table (${ZABBIX_TABLE_NAME})"
echo "------------------------------------------"
echo "ALTER TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW} RENAME ${ZABBIX_TABLE_NAME}; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------"
echo "    7. Dropping Old Table (${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old), As we have already Backed it up. "
echo "------------------------------------------"
echo "DROP TABLE ${ZABBIX_TABLE_NAME}_${EPOCH_NOW}_old; " | mysql -u$ZABBIX_USER -p$ZABBIX_PASSWD $ZABBIX_DATABASE;
sleep 1

echo "------------------------------------------"
echo "    8. Starting Zabbix Server        "
echo "------------------------------------------"
sudo service zabbix-server start;

echo "##########################################"


Windows Testing Using Kitchen Chef

October 3, 2016
Kitchen-Vagrant can spin up a Windows instance for testing.
To make it work, you will need the vagrant-winrm plugin installed on the workstation.

Installing vagrant-winrm

┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ vagrant plugin install vagrant-winrm
Once you have installed it, you might still get the below error.
┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ kitchen list
 ------Exception-------
 Class: Kitchen::UserError
 Message: WinRM Transport requires the vagrant-winrm Vagrant plugin to properly communicate with this Vagrant VM. Please install this plugin with: `vagrant plugin install vagrant-winrm' and try again.

 Please see .kitchen/logs/kitchen.log for more details
 Also try running `kitchen diagnose --all` for configuration
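For context, this error appears because the suite uses the WinRM transport, which is selected in .kitchen.yml. A minimal sketch of such a file follows; the platform and box names are assumptions, not taken from this post:

```yaml
driver:
  name: vagrant

transport:
  name: winrm

platforms:
  # assumed box name; use whatever box you build below
  - name: win2012r2-standard
    driver:
      box: win2012r2-standard-nocm-1.0.4
      communicator: winrm

suites:
  - name: default
```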

Download Windows Box.

There is a nice repo which builds Windows Vagrant boxes.
git clone https://github.com/boxcutter/windows.git
Here is the output.
┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ git clone http://ift.tt/2dmZuKC
Cloning into 'windows'...
remote: Counting objects: 2929, done.
remote: Total 2929 (delta 0), reused 0 (delta 0), pack-reused 2929
Receiving objects: 100% (2929/2929), 6.40 MiB | 1010.00 KiB/s, done.
Resolving deltas: 100% (2318/2318), done.
Checking connectivity... done.

Download and List of Available Boxes.

┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ cd windows/
┌─[ahmed][zubair-HP-ProBook][±][master ✓][~/work/windows]
└─▪ ls
AUTHORS                                win2008r2-web.json
bin                                    win2008r2-web-ssh.json
box                                    win2012-datacenter-cygwin.json
CHANGELOG.md                           win2012-datacenter.json
eval-win10x64-enterprise-cygwin.json   win2012-datacenter-ssh.json
eval-win10x64-enterprise.json          win2012r2-datacenter-cygwin.json
eval-win10x64-enterprise-ssh.json      win2012r2-datacenter.json
eval-win10x86-enterprise-cygwin.json   win2012r2-datacenter-ssh.json
eval-win10x86-enterprise.json          win2012r2-standardcore-cygwin.json
eval-win10x86-enterprise-ssh.json      win2012r2-standardcore.json
eval-win2008r2-datacenter-cygwin.json  win2012r2-standardcore-ssh.json
eval-win2008r2-datacenter.json         win2012r2-standard-cygwin.json
eval-win2008r2-datacenter-ssh.json     win2012r2-standard.json
eval-win2008r2-standard-cygwin.json    win2012r2-standard-ssh.json
eval-win2008r2-standard.json           win2012-standard-cygwin.json
eval-win2008r2-standard-ssh.json       win2012-standard.json
eval-win2012r2-datacenter-cygwin.json  win2012-standard-ssh.json
eval-win2012r2-datacenter.json         win7x64-enterprise-cygwin.json
eval-win2012r2-datacenter-ssh.json     win7x64-enterprise.json
eval-win2012r2-standard-cygwin.json    win7x64-enterprise-ssh.json
eval-win2012r2-standard.json           win7x64-pro-cygwin.json
eval-win2012r2-standard-ssh.json       win7x64-pro.json
eval-win7x64-enterprise-cygwin.json    win7x64-pro-ssh.json
eval-win7x64-enterprise.json           win7x86-enterprise-cygwin.json
eval-win7x64-enterprise-ssh.json       win7x86-enterprise.json
eval-win7x86-enterprise-cygwin.json    win7x86-enterprise-ssh.json
eval-win7x86-enterprise.json           win7x86-pro-cygwin.json
eval-win7x86-enterprise-ssh.json       win7x86-pro.json
eval-win81x64-enterprise-cygwin.json   win7x86-pro-ssh.json
eval-win81x64-enterprise.json          win81x64-enterprise-cygwin.json
eval-win81x64-enterprise-ssh.json      win81x64-enterprise.json
eval-win81x86-enterprise-cygwin.json   win81x64-enterprise-ssh.json
eval-win81x86-enterprise.json          win81x64-pro-cygwin.json
eval-win81x86-enterprise-ssh.json      win81x64-pro.json
eval-win8x64-enterprise-cygwin.json    win81x64-pro-ssh.json
eval-win8x64-enterprise.json           win81x86-enterprise-cygwin.json
eval-win8x64-enterprise-ssh.json       win81x86-enterprise.json
floppy                                 win81x86-enterprise-ssh.json
LICENSE                                win81x86-pro-cygwin.json
Makefile                               win81x86-pro.json
README.md                              win81x86-pro-ssh.json
script                                 win8x64-enterprise-cygwin.json
test                                   win8x64-enterprise.json
tpl                                    win8x64-enterprise-ssh.json
VERSION                                win8x64-pro-cygwin.json
win2008r2-datacenter-cygwin.json       win8x64-pro.json
win2008r2-datacenter.json              win8x64-pro-ssh.json
win2008r2-datacenter-ssh.json          win8x86-enterprise-cygwin.json
win2008r2-enterprise-cygwin.json       win8x86-enterprise.json
win2008r2-enterprise.json              win8x86-enterprise-ssh.json
win2008r2-enterprise-ssh.json          win8x86-pro-cygwin.json
win2008r2-standard-cygwin.json         win8x86-pro.json
win2008r2-standard.json                win8x86-pro-ssh.json
win2008r2-standard-ssh.json            wip
win2008r2-web-cygwin.json              wsim

We get an error: packer not found.

┌─[ahmed][zubair-HP-ProBook][±][master ✓][~/work/windows]
└─▪ make virtualbox/eval-win2012r2-standard
rm -rf output-virtualbox-iso
mkdir -p box/virtualbox
packer build -only=virtualbox-iso -var 'cm=nocm' -var 'version=1.0.4' -var 'update=false' -var 'headless=false' -var "shutdown_command=shutdown /s /t 10 /f /d p:4:1 /c Packer_Shutdown" -var "iso_url=http://ift.tt/1io5XVj" -var "iso_checksum=7e3f89dbff163e259ca9b0d1f078daafd2fed513" eval-win2012r2-standard.json
/bin/sh: 1: packer: not found
Makefile:428: recipe for target 'box/virtualbox/eval-win2012r2-standard-nocm-1.0.4.box' failed
make: *** [box/virtualbox

Let us install Packer from HashiCorp

┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ wget http://ift.tt/2asbbAB
--2016-09-22 11:21:14--  http://ift.tt/2asbbAB
Resolving releases.hashicorp.com (releases.hashicorp.com)... 151.101.12.69
Connecting to releases.hashicorp.com (releases.hashicorp.com)|151.101.12.69|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8985735 (8.6M) [application/zip]
Saving to: ‘packer_0.10.1_linux_amd64.zip’

packer_0.10.1_linux_ 100%[======================]   8.57M   204KB/s    in 29s

2016-09-22 11:21:44 (298 KB/s) - ‘packer_0.10.1_linux_amd64.zip’ saved [8985735/8985735]

Unzip and Install packer

Unpacking.
┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ unzip packer_0.10.1_linux_amd64.zip
Archive:  packer_0.10.1_linux_amd64.zip
  inflating: packer
┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ ls
backups    configs          others  packer_0.10.1_linux_amd64.zip  tech_documents
chef-repo  hepsi-chef-repo  packer  scripts                        windows
Copy the packer binary to /usr/local/sbin/
┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ sudo cp packer /usr/local/sbin/
[sudo] password for ahmed:
Now we are ready to use Packer.
┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ packer
usage: packer [--version] [--help] command [args]

Available commands are:
    build       build image(s) from template
    fix         fixes templates from old versions of packer
    inspect     see components of a template
    push        push a template and supporting files to a Packer build service
    validate    check that a template is valid
    version     Prints the Packer version

┌─[ahmed][zubair-HP-ProBook][~/work]
└─▪ packer --version
0.10.1
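With Packer on the PATH, you can optionally validate the template before kicking off the long build. `packer validate` does a quick syntax and variable check using the same `-var` flags the Makefile passes; the snippet below is guarded so it is a no-op when packer or the template file is not present.

```shell
# Validate eval-win2012r2-standard.json with the same variables the
# Makefile passes to "packer build" (quick check, no VM is created).
if command -v packer >/dev/null 2>&1 && [ -f eval-win2012r2-standard.json ]; then
  packer validate \
    -var 'cm=nocm' -var 'version=1.0.4' \
    -var 'update=false' -var 'headless=false' \
    eval-win2012r2-standard.json
fi
```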

Now let's build the eval-win2012r2-standard box.


┌─[ahmed][zubair-HP-ProBook][±][master ✓][~/work/windows]
└─▪ make virtualbox/eval-win2012r2-standard
rm -rf output-virtualbox-iso
mkdir -p box/virtualbox
packer build -only=virtualbox-iso -var 'cm=nocm' -var 'version=1.0.4' -var 'update=false' -var 'headless=false' -var "shutdown_command=shutdown /s /t 10 /f /d p:4:1 /c Packer_Shutdown" -var "iso_url=http://ift.tt/1io5XVj" -var "iso_checksum=7e3f89dbff163e259ca9b0d1f078daafd2fed513" eval-win2012r2-standard.json
virtualbox-iso output will be in this color.

== virtualbox-iso: Cannot find "Default Guest Additions ISO" in vboxmanage output (or it is empty)
== virtualbox-iso: Downloading or copying Guest additions checksums
    virtualbox-iso: Downloading or copying: http://ift.tt/2dMHfMU
== virtualbox-iso: Downloading or copying Guest additions
    virtualbox-iso: Downloading or copying: http://ift.tt/2dmXrGv
    virtualbox-iso: Download progress: 7%
    virtualbox-iso: Download progress: 99%
    virtualbox-iso: Download progress: 100%
    virtualbox-iso: Download progress: 100%
    virtualbox-iso: Download progress: 100%
    virtualbox-iso: Download progress: 100%
== virtualbox-iso: Creating floppy disk...
    virtualbox-iso: Copying: floppy/00-run-all-scripts.cmd
    virtualbox-iso: Copying: floppy/01-install-wget.cmd
    virtualbox-iso: Copying: floppy/_download.cmd
    virtualbox-iso: Copying: floppy/_packer_config.cmd
    virtualbox-iso: Copying: floppy/disablewinupdate.bat
    virtualbox-iso: Copying: floppy/eval-win2012r2-standard/Autounattend.xml
    virtualbox-iso: Copying: floppy/fixnetwork.ps1
    virtualbox-iso: Copying: floppy/install-winrm.cmd
    virtualbox-iso: Copying: floppy/oracle-cert.cer
    virtualbox-iso: Copying: floppy/passwordchange.bat
    virtualbox-iso: Copying: floppy/powerconfig.bat
    virtualbox-iso: Copying: floppy/zz-start-sshd.cmd
== virtualbox-iso: Creating virtual machine...
== virtualbox-iso: Creating hard drive...
== virtualbox-iso: Attaching floppy disk...
== virtualbox-iso: Creating forwarded port mapping for communicator (SSH, WinRM, etc) (host port 4185)
== virtualbox-iso: Executing custom VBoxManage commands...
    virtualbox-iso: Executing: modifyvm eval-win2012r2-standard --memory 1536
    virtualbox-iso: Executing: modifyvm eval-win2012r2-standard --cpus 1
    virtualbox-iso: Executing: setextradata eval-win2012r2-standard VBoxInternal/CPUM/CMPXCHG16B 1
== virtualbox-iso: Starting the virtual machine...
== virtualbox-iso: Waiting 10s for boot...
== virtualbox-iso: Typing the boot command...
== virtualbox-iso: Waiting for WinRM to become available...
== virtualbox-iso: Connected to WinRM!
== virtualbox-iso: Uploading VirtualBox version info (5.0.18)
== virtualbox-iso: Uploading VirtualBox guest additions ISO...
== virtualbox-iso: Provisioning with windows-shell...
== virtualbox-iso: Provisioning with shell script: script/vagrant.bat
    virtualbox-iso: == Creating "C:\Users\vagrant\AppData\Local\Temp\vagrant"
    virtualbox-iso: == Downloading "http://ift.tt/1SMPogV" to "C:\Users\vagrant\AppData\Local\Temp\vagrant\vagrant.pub"
    virtualbox-iso: WARNING: cannot verify raw.githubusercontent.com's certificate, issued by 'CN=DigiCert SHA2 High Assurance Server CA,OU=http://www.digicert.com,O=DigiCert Inc,C=US':
    virtualbox-iso: Unable to locally verify the issuer's authority.
    virtualbox-iso: 2016-09-22 13:44:20 URL:http://ift.tt/1SMPogV [409/409] - "C:/Users/vagrant/AppData/Local/Temp/vagrant/vagrant.pub" [1]
    virtualbox-iso: == Creating "C:\Users\vagrant\.ssh"
    virtualbox-iso: == Adding "C:\Users\vagrant\AppData\Local\Temp\vagrant\vagrant.pub" to "C:\Users\vagrant\.ssh\authorized_keys"
    virtualbox-iso: == Disabling account password expiration for user "vagrant"
    virtualbox-iso: Updating property(s) of '\\WIN-80PPKE0JMK0\ROOT\CIMV2:Win32_UserAccount.Domain="WIN-80PPKE0JMK0",Name="vagrant"'
    virtualbox-iso: Property(s) update successful.
    virtualbox-iso:
    virtualbox-iso: Pinging 127.0.0.1 with 32 bytes of data:
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso:
    virtualbox-iso: Ping statistics for 127.0.0.1:
    virtualbox-iso: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    virtualbox-iso: Approximate round trip times in milli-seconds:
    virtualbox-iso: Minimum = 0ms, Maximum = 0ms, Average = 0ms
    virtualbox-iso: == Script exiting with errorlevel 0
== virtualbox-iso: Provisioning with shell script: script/cmtool.bat
    virtualbox-iso: == Building box without a configuration management tool
    virtualbox-iso:
    virtualbox-iso: Pinging 127.0.0.1 with 32 bytes of data:
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso:
    virtualbox-iso: Ping statistics for 127.0.0.1:
    virtualbox-iso: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    virtualbox-iso: Approximate round trip times in milli-seconds:
    virtualbox-iso: Minimum = 0ms, Maximum = 0ms, Average = 0ms
    virtualbox-iso: == Script exiting with errorlevel 0
== virtualbox-iso: Provisioning with shell script: script/vmtool.bat
    virtualbox-iso: == Creating "C:\Users\vagrant\AppData\Local\Temp\sevenzip"
    virtualbox-iso: == Downloading "http://ift.tt/2dMHt6V" to "C:\Users\vagrant\AppData\Local\Temp\sevenzip\7z1600-x64.msi"
    virtualbox-iso: 2016-09-22 13:44:33 URL:http://ift.tt/2dmYhmE [1664000/1664000] - "C:/Users/vagrant/AppData/Local/Temp/sevenzip/7z1600-x64.msi" [1]
    virtualbox-iso: == Installing "C:\Users\vagrant\AppData\Local\Temp\sevenzip\7z1600-x64.msi"
    virtualbox-iso: == Copying "C:\Program Files\7-Zip\7z.exe" to "C:\Windows"
    virtualbox-iso: 1 file(s) copied.
    virtualbox-iso: 1 file(s) copied.
    virtualbox-iso: == Extracting the VirtualBox Guest Additions installer
    virtualbox-iso:
    virtualbox-iso: 7-Zip [64] 16.00 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-10
    virtualbox-iso:
    virtualbox-iso: Scanning the drive for archives:
    virtualbox-iso: 1 file, 58144768 bytes (56 MiB)
    virtualbox-iso:
    virtualbox-iso: Extracting archive: C:\Users\vagrant\VBoxGuestAdditions.iso
    virtualbox-iso: --
    virtualbox-iso: Path = C:\Users\vagrant\VBoxGuestAdditions.iso
    virtualbox-iso: Type = Iso
    virtualbox-iso: Physical Size = 58144768
    virtualbox-iso: Created = 2016-04-18 06:38:18
    virtualbox-iso: Modified = 2016-04-18 06:38:18
    virtualbox-iso:
    virtualbox-iso: Everything is Ok
    virtualbox-iso:
    virtualbox-iso: Size:       16169336
    virtualbox-iso: Compressed: 58144768
    virtualbox-iso: == Installing Oracle certificate to keep install silent
    virtualbox-iso: TrustedPublisher "Trusted Publishers"
    virtualbox-iso: Certificate "Oracle Corporation" added to store.
    virtualbox-iso: CertUtil: -addstore command completed successfully.
    virtualbox-iso: == Installing VirtualBox Guest Additions
    virtualbox-iso: == Script exiting with errorlevel 0
    virtualbox-iso:
    virtualbox-iso: Pinging 127.0.0.1 with 32 bytes of data:
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Could Not Find C:\Users\vagrant\AppData\Local\Temp\script.bat-25146.tmp
    virtualbox-iso:
    virtualbox-iso: Ping statistics for 127.0.0.1:
    virtualbox-iso: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    virtualbox-iso: Approximate round trip times in milli-seconds:
    virtualbox-iso: Minimum = 0ms, Maximum = 0ms, Average = 0ms
== virtualbox-iso: Provisioning with shell script: script/clean.bat
    virtualbox-iso: del /f /q /s "C:\Windows\TEMP\DMI7F57.tmp"
    virtualbox-iso: del /f /q /s "C:\Windows\TEMP\winstore.log"
    virtualbox-iso: == Cleaning "C:\Users\vagrant\AppData\Local\Temp" directories
    virtualbox-iso: == Cleaning "C:\Users\vagrant\AppData\Local\Temp" files
    virtualbox-iso: == Cleaning "C:\Windows\TEMP" directories
    virtualbox-iso: == Removing potentially corrupt recycle bin
    virtualbox-iso: == Cleaning "C:\Windows\TEMP" files
    virtualbox-iso: == Cleaning "C:\Users\vagrant"
    virtualbox-iso:
    virtualbox-iso: Pinging 127.0.0.1 with 32 bytes of data:
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso:
    virtualbox-iso: Ping statistics for 127.0.0.1:
    virtualbox-iso: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    virtualbox-iso: Approximate round trip times in milli-seconds:
    virtualbox-iso: Minimum = 0ms, Maximum = 0ms, Average = 0ms
    virtualbox-iso: == Script exiting with errorlevel 0
== virtualbox-iso: Provisioning with shell script: script/ultradefrag.bat
    virtualbox-iso: == Creating "C:\Users\vagrant\AppData\Local\Temp\ultradefrag"
    virtualbox-iso: == Downloading "http://ift.tt/2dMGmEj" to "C:\Users\vagrant\AppData\Local\Temp\ultradefrag\ultradefrag-portable-7.0.1.bin.amd64.zip"
    virtualbox-iso: http://ift.tt/2dMGmEj:
    virtualbox-iso: 2016-09-22 13:45:01 ERROR 404: Not Found.
    virtualbox-iso: == Unzipping "C:\Users\vagrant\AppData\Local\Temp\ultradefrag\ultradefrag-portable-7.0.1.bin.amd64.zip" to "C:\Users\vagrant\AppData\Local\Temp\ultradefrag"
    virtualbox-iso:
    virtualbox-iso: 7-Zip [64] 16.00 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-10
    virtualbox-iso:
    virtualbox-iso: Scanning the drive for archives:
    virtualbox-iso: 1 file, 3596965 bytes (3513 KiB)
    virtualbox-iso:
    virtualbox-iso: Extracting archive: C:\Users\vagrant\AppData\Local\Temp\ultradefrag\ultradefrag-portable-7.0.1.bin.amd64.zip
    virtualbox-iso: --
    virtualbox-iso: Path = C:\Users\vagrant\AppData\Local\Temp\ultradefrag\ultradefrag-portable-7.0.1.bin.amd64.zip
    virtualbox-iso: Type = zip
    virtualbox-iso: Physical Size = 3596965
    virtualbox-iso:
    virtualbox-iso: Everything is Ok
    virtualbox-iso:
    virtualbox-iso: Files: 4
    virtualbox-iso: Size:       2753024
    virtualbox-iso: Compressed: 3596965
    virtualbox-iso: == Running UltraDefrag on C:
    virtualbox-iso: UltraDefrag 7.0.1, Copyright (c) UltraDefrag Development Team, 2007-2016.
    virtualbox-iso: UltraDefrag comes with ABSOLUTELY NO WARRANTY. This is free software,
    virtualbox-iso: and you are welcome to redistribute it under certain conditions.
    virtualbox-iso:
    virtualbox-iso: C: defrag:   100.00% complete, 7 passes needed, fragmented/total = 4/75370
    virtualbox-iso: == Removing "C:\Users\vagrant\AppData\Local\Temp\ultradefrag"
    virtualbox-iso:
    virtualbox-iso: Pinging 127.0.0.1 with 32 bytes of data:
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso:
    virtualbox-iso: Ping statistics for 127.0.0.1:
    virtualbox-iso: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    virtualbox-iso: Approximate round trip times in milli-seconds:
    virtualbox-iso: Minimum = 0ms, Maximum = 0ms, Average = 0ms
    virtualbox-iso: == Script exiting with errorlevel 0
== virtualbox-iso: Provisioning with shell script: script/uninstall-7zip.bat
    virtualbox-iso: == Uninstalling 7zip
    virtualbox-iso: == WARNING: Directory not found: "C:\Users\vagrant\AppData\Local\Temp\sevenzip"
    virtualbox-iso: == Removing "C:\Program Files\7-Zip"
    virtualbox-iso:
    virtualbox-iso: Pinging 127.0.0.1 with 32 bytes of data:
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso: Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
    virtualbox-iso:
    virtualbox-iso: Ping statistics for 127.0.0.1:
    virtualbox-iso: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    virtualbox-iso: Approximate round trip times in milli-seconds:
    virtualbox-iso: Minimum = 0ms, Maximum = 0ms, Average = 0ms
    virtualbox-iso: == Script exiting with errorlevel 0
== virtualbox-iso: Provisioning with shell script: script/sdelete.bat
    virtualbox-iso: == Creating "C:\Users\vagrant\AppData\Local\Temp\sdelete"
    virtualbox-iso: == Downloading "http://ift.tt/2dmYAhm" to "C:\Users\vagrant\AppData\Local\Temp\sdelete\sdelete.exe"
    virtualbox-iso: WARNING: cannot verify live.sysinternals.com's certificate, issued by 'CN=Microsoft IT SSL SHA2,OU=Microsoft IT,O=Microsoft Corporation,L=Redmond,ST=Washington,C=US':
    virtualbox-iso: Unable to locally verify the issuer's authority.
    virtualbox-iso: The operation completed successfully.
    virtualbox-iso: 2016-09-22 13:59:14 URL:http://ift.tt/2dMHAzc [151200/151200] - "C:/Users/vagrant/AppData/Local/Temp/sdelete/sdelete.exe" [1]
    virtualbox-iso: == Running SDelete on C:
    virtualbox-iso:
    virtualbox-iso: SDelete v2.0 - Secure file delete
    virtualbox-iso: Copyright (C) 1999-2016 Mark Russinovich
    virtualbox-iso: Sysinternals - www.sysinternals.com
    virtualbox-iso:
    virtualbox-iso: SDelete is set for 1 pass.

Adding the box to Vagrant

┌─[ahmed][zubair-HP-ProBook][±][master ✓][~/work/windows]
└─▪ cd box/virtualbox/
┌─[ahmed][zubair-HP-ProBook][±][master ✓][~/work/windows/box/virtualbox]
└─▪ ls
eval-win2012r2-standard-nocm-1.0.4.box
┌─[ahmed][zubair-HP-ProBook][±][master ✓][~/work/windows/box/virtualbox]
└─▪ vagrant box add windows-2012r2 eval-win2012r2-standard-nocm-1.0.4.box
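After adding the box you can confirm Vagrant registered it; `windows-2012r2` should appear in the box list. The snippet is guarded so it is harmless on a machine without Vagrant.

```shell
# List registered boxes; windows-2012r2 should be among them after the add.
if command -v vagrant >/dev/null 2>&1; then
  vagrant box list
fi
```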

Update the .kitchen.yml in your cookbook.

---
driver:
  name: vagrant

provisioner:
  name: chef_zero

platforms:
  - name: windows-2012r2

suites:
  - name: default
    run_list:
      - recipe[starter-windows-cookbook::default]

List VM

Command
kitchen list

VM Details

┌─[ahmed][zubair-HP-ProBook][±][master U:3 ?:2 ✗][~/work/chef-repo/cookbooks/nagios_nrpe_deploy]
└─▪ kitchen list
Instance                Driver   Provisioner  Verifier  Transport  Last Action
windows-2012r2          Vagrant  ChefZero     Busser    Winrm      Not Created

Test the Windows VM using the command below.

kitchen test
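`kitchen test` runs the full cycle (create, converge, verify, destroy) in one shot. While iterating on a cookbook it is often more convenient to run the stages one at a time; the snippet below is guarded so it is a no-op where Test Kitchen is not installed.

```shell
# Run the Test Kitchen stages individually while developing.
if command -v kitchen >/dev/null 2>&1; then
  kitchen create     # boot the VM from the box
  kitchen converge   # run the chef_zero provisioner
  kitchen verify     # run the test suite (Busser)
  kitchen destroy    # tear the VM down
fi
```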
We are done. Enjoy testing on Windows!

from Blogger http://ift.tt/2dmX5jj
via IFTTT

Categories: Others Tags: ,

Package Installer for Cygwin [apt-cyg].

October 2, 2016 Leave a comment
After a long time I was back on my Windows machine and wanted to make it feel more like my Linux machine, so I installed what everyone else does: Cygwin.
To my surprise, my custom .bashrc and .vimrc worked without any issues. With the bashrc and vimrc updated, we are back to Linux .. like 🙂
My custom Linux environment – howto.
Then I realized there is no way to install packages from the Cygwin terminal itself.
That is when I found the script below, apt-cyg, which is really nice.
Package Installer – apt-cyg http://ift.tt/2dI1yyY

Installation

apt-cyg is a simple script. Copy it to your home directory on Cygwin.
Here is the link http://ift.tt/2djAIdN
Then execute the command below.
install apt-cyg /bin
Now we can use it. Example usage of apt-cyg:
apt-cyg install nano
apt-cyg install lynx
Output
┌─[Zubair][AHMD-WRK-HORSE][~]
└─▪ apt-cyg install lynx
Installing lynx
--2016-09-28 12:49:39--  http://cygwin.mirror.constant.com//x86_64/release/lynx/lynx-2.8.7-2.tar.bz2
Resolving cygwin.mirror.constant.com (cygwin.mirror.constant.com)... 108.61.5.83
Connecting to cygwin.mirror.constant.com (cygwin.mirror.constant.com)|108.61.5.83|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1746879 (1.7M) [application/octet-stream]
Saving to: ‘lynx-2.8.7-2.tar.bz2’

lynx-2.8.7-2.tar.bz2           100%[==================================>]   1.67M   181KB/s    in 12s

2016-09-28 12:49:52 (146 KB/s) - ‘lynx-2.8.7-2.tar.bz2’ saved [1746879/1746879]

lynx-2.8.7-2.tar.bz2: OK
Unpacking...
Package lynx requires the following packages, installing:
bash cygwin libiconv2 libintl8 libncursesw10 libopenssl100 zlib0
Package bash is already installed, skipping
Package cygwin is already installed, skipping
Package libiconv2 is already installed, skipping
Package libintl8 is already installed, skipping
Package libncursesw10 is already installed, skipping
Package libopenssl100 is already installed, skipping
Package zlib0 is already installed, skipping
Running /etc/postinstall/lynx.sh
Package lynx installed
Now we are good to go!
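Beyond `install`, apt-cyg has subcommands for removing packages and refreshing the package index; the exact names can vary between apt-cyg forks, so check `apt-cyg --help` in yours. The snippet is guarded so it only does anything inside a Cygwin shell with apt-cyg installed.

```shell
# Other common apt-cyg operations (subcommand names per the apt-cyg help).
if command -v apt-cyg >/dev/null 2>&1; then
  apt-cyg remove nano   # uninstall a package
  apt-cyg update        # refresh the package list (setup.ini)
fi
```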

from Blogger http://ift.tt/2dI1mPW
via IFTTT

Categories: Others Tags: ,

Issues – Monitoring MongoDB using Nagios XI.

October 1, 2016 Leave a comment
Monitoring MongoDB using Nagios XI is straightforward, but you might run into some issues during setup.
Here are a few issues which might come up with MongoDB version 3.

Issue getting monitoring data in nagios.

1. ConnectionFailure object has no attribute strip

[ahmed@localhost libexec]$ ./check_mongodb.py -H 192.168.94.137 -P 27017 -u admin -p admin
Traceback (most recent call last):
  File "./check_mongodb.py", line 1372, in <module>
    sys.exit(main(sys.argv[1:]))
  File "./check_mongodb.py", line 196, in main
    err, con = mongo_connect(host, port, ssl, user, passwd, replicaset)
  File "./check_mongodb.py", line 294, in mongo_connect
    return exit_with_general_critical(e), None
  File "./check_mongodb.py", line 310, in exit_with_general_critical
    if e.strip() == "not master":
AttributeError: 'ConnectionFailure' object has no attribute 'strip'
Solution.
e.strip() expects e to be a string, which is not always the case here (e can be a ConnectionFailure exception object), so remove strip(). Change the code below on line 310.
  else:
      if e.strip() == "not master":
          print "UNKNOWN - Could not get data from server:", e
          return 3
to
  else:
      if e == "not master":
          print "UNKNOWN - Could not get data from server:", e
          return 3
After the change you will at least get an error message with more information.
[ahmed@localhost libexec]$ ./check_mongodb_2.py -H 192.168.94.138 -P 27017 -u admin -p admin1 -A databases -W 5 -C 10
CRITICAL - General MongoDB Error: command SON([('authenticate', 1), ('user', u'admin'), ('nonce', u'37a502d665186449'), ('key', u'd8c683f98a5e720c28a8007018ed7414')]) failed: auth failed
Next we will try to resolve, above auth failure.

2. Executing command from the nagios server.

[ahmed@localhost libexec]$ ./check_mongodb_2.py -H 192.168.94.138 -P 27017 -u admin -p admin1 -A databases -W 5 -C 10
CRITICAL - General MongoDB Error: command SON([('authenticate', 1), ('user', u'admin'), ('nonce', u'42110dc29ee7fe6b'), ('key', u'827a2b0e4af97e88560800ab86b04e57')]) failed: auth failed

On the mongodb server.

Checking the MongoDB server log shows that authentication failed because MONGODB-CR credentials are missing in the user document.
2016-09-14T19:11:12.142-0700 I ACCESS   [conn114] Successfully  authenticated as principal admin on admin
2016-09-14T19:11:32.892-0700 I NETWORK  [initandlisten] connection accepted from  192.168.94.130:48657 #115 (2 connections now open)
2016-09-14T19:11:32.894-0700 I ACCESS   [conn115]  authenticate db: admin { authenticate: 1, user: "admin", nonce: "xxx", key: "xxx" }
2016-09-14T19:11:32.894-0700 I ACCESS   [conn115] Failed to authenticate admin@admin with mechanism MONGODB-CR: AuthenticationFailed: MONGODB-CR credentials missing in the user document
2016-09-14T19:11:32.895-0700 I NETWORK  [conn115] end connection 192.168.94.130:48657 (1 connection now open)
2016-09-14T19:11:54.283-0700 I NETWORK  [initandlisten] connection accepted from 192.168.94.130:48663 #116 (2 connections now open)
2016-09-14T19:11:54.284-0700 I NETWORK  [conn116] end connection 192.168.94.130:48663 (1 connection now open)
2016-09-14T19:12:07.860-0700 I NETWORK  [initandlisten] connection accepted from 192.168.94.130:48666 #117 (2 connections now open)
2016-09-14T19:12:07.861-0700 I ACCESS   [conn117] Unauthorized: not authorized on admin to execute command { listDatabases: 1 }
Solution.
  1. Delete existing users on the database if they were already created.
  2. Modify the collection admin.system.version so that the authSchema currentVersion is 3 instead of 5.
  3. Version 3 uses MONGODB-CR.
  4. Recreate your users on the databases.
NOTE: Do not do this in a PRODUCTION environment; use update instead and try on a test database first.
mongo
use admin
db.system.users.remove({})
db.system.version.remove({})
db.system.version.insert({ "_id" : "authSchema", "currentVersion" : 3 })
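You can confirm the schema change took effect before recreating the users; `mongo --eval` runs a one-liner non-interactively. Guarded below so the snippet is a no-op on a machine without the mongo shell.

```shell
# Print the auth schema document; expect currentVersion: 3 after the change.
if command -v mongo >/dev/null 2>&1; then
  mongo admin --quiet --eval 'printjson(db.system.version.findOne({_id: "authSchema"}))'
fi
```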

from Blogger http://ift.tt/2dzxVM9
via IFTTT

Categories: Others Tags: ,

Installing CouchDB on Ubuntu 14 LTS.

September 30, 2016 Leave a comment
CouchDB is a database that completely embraces the web. Store your data with JSON documents. Access your documents and query your indexes with your web browser, via HTTP. Index, combine, and transform your documents with JavaScript. CouchDB works well with modern web and mobile apps. You can even serve web apps directly out of CouchDB. And you can distribute your data, or your apps, efficiently using CouchDB’s incremental replication. CouchDB supports master-master setups with automatic conflict detection.

Installing CouchDB.

Setting up Repos and Packages.

sudo apt-get install software-properties-common -y
sudo add-apt-repository ppa:couchdb/stable -y
sudo apt-get update -y

Remove any existing installations.

sudo apt-get remove couchdb couchdb-bin couchdb-common -yf

Installation.

sudo apt-get install -V couchdb
  Reading package lists...
  Done Building dependency tree
  Reading state information...
  Done
  The following extra packages will be installed:
  ...
  Y

Stop and configure couchdb

sudo stop couchdb
  couchdb stop/waiting

Update /etc/couchdb/local.ini with bind_address = 0.0.0.0 as needed.

sudo start couchdb
  couchdb start/running, process 3541

Start Server

sudo start couchdb
  couchdb start/running
Finally we can go to the browser and check the server is up.
Apache CouchDB has started on http://couchdb-server:5984/
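A quick way to confirm the server is answering without a browser is to hit the root endpoint with curl; CouchDB replies to GET / with a small JSON welcome document. The hostname below assumes you are on the server itself; substitute your server's name otherwise.

```shell
# CouchDB answers GET / with {"couchdb":"Welcome", ...} when it is up.
if command -v curl >/dev/null 2>&1; then
  curl -s http://127.0.0.1:5984/ || echo "CouchDB is not reachable on port 5984"
fi
```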

from Blogger http://ift.tt/2dKLI5R
via IFTTT

Categories: Others Tags: ,

Installing MongoDB on Ubuntu 14 LTS.

September 29, 2016 Leave a comment
MongoDB is an open-source document database and a leading NoSQL database. MongoDB is written in C++. Below is a brief document about installing MongoDB on a test node to try it out.

Import the public key used by the package management system.

Signed Packages for dpkg and apt
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
Output.
ahmed@ubuntu:~$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
[sudo] password for ahmed:
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.ApILz9KbVd --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
gpg: requesting key EA312927 from hkp server keyserver.ubuntu.com
gpg: key EA312927: public key "MongoDB 3.2 Release Signing Key " imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

Create Repo

Create the /etc/apt/sources.list.d/mongodb-org-3.2.list list file using the command appropriate for your version of Ubuntu:
echo "deb http://ift.tt/19brKdo trusty/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list
Output.
ahmed@ubuntu:~$ echo "deb http://ift.tt/19brKdo trusty/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list
deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.2 multiverse

Reload local package database.

sudo apt-get update

Install MongoDB

sudo apt-get install -y mongodb-org

Start MongoDB.

Issue the following command to start mongod:
sudo service mongod start

Verify that MongoDB has started successfully

Verify that the mongod process has started successfully by checking the contents of the log file at /var/log/mongodb/mongod.log for a line reading
[initandlisten] waiting for connections on port <port>
where <port> is the port configured in /etc/mongod.conf, 27017 by default.
Output
ahmed@ubuntu:~$ sudo tail -f /var/log/mongodb/mongod.log
2016-09-14T17:44:54.437-0700 I CONTROL  [initandlisten]
2016-09-14T17:44:54.437-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-09-14T17:44:54.437-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-09-14T17:44:54.437-0700 I CONTROL  [initandlisten]
2016-09-14T17:44:54.437-0700 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-09-14T17:44:54.437-0700 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-09-14T17:44:54.437-0700 I CONTROL  [initandlisten]
2016-09-14T17:44:54.439-0700 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongodb/diagnostic.data'
2016-09-14T17:44:54.439-0700 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-09-14T17:44:54.533-0700 I NETWORK  [initandlisten] waiting for connections on port 27017
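The same check can be scripted by grepping the log for the listener line. The sample log line written below stands in for /var/log/mongodb/mongod.log so the snippet is self-contained; point LOG at the real file on your server.

```shell
# Grep the mongod log for the "waiting for connections" line.
LOG="${LOG:-/tmp/mongod.sample.log}"
# Sample line (copied from the output above) so the snippet runs anywhere:
printf '%s\n' '2016-09-14T17:44:54.533-0700 I NETWORK  [initandlisten] waiting for connections on port 27017' > "$LOG"

if grep -q 'waiting for connections on port' "$LOG"; then
  echo "mongod is up and listening"
fi
```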

Importing First Dataset using mongoimport.

Get the file from the link below.
wget http://ift.tt/2ddUZR3
unzip companies.zip
Output.
ahmed@ubuntu:~$ wget https://github.com/zubayr/big_data_learning/raw/master/bigData/mongodb/dataset/companies.zip
--2016-09-14 17:51:12--  https://github.com/zubayr/big_data_learning/raw/master/bigData/mongodb/dataset/companies.zip
Resolving github.com (github.com)... 192.30.253.112
Connecting to github.com (github.com)|192.30.253.112|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/zubayr/big_data_learning/master/bigData/mongodb/dataset/companies.zip [following]
--2016-09-14 17:51:28--  https://raw.githubusercontent.com/zubayr/big_data_learning/master/bigData/mongodb/dataset/companies.zip
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.100.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.100.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15493946 (15M) [application/octet-stream]
Saving to: ‘companies.zip’
companies.zip       100%[=======================================>] 15,493,946   590KB/s   in 34s

ahmed@ubuntu:~$ unzip companies.zip
Archive:  companies.zip
  inflating: companies.json          
ahmed@ubuntu:~$ ls
companies.json  Desktop    Downloads         Music     Public     Videos
companies.zip   Documents  examples.desktop  Pictures  Templates
ahmed@ubuntu:~$

Importing dataset.

mongoimport --db company --collection companies --file companies.json
Output. By default mongoimport connects to localhost on port 27017; to import into a MongoDB on a different machine, pass the --host and --port options.
ahmed@ubuntu:~$ mongoimport --db company --collection companies --file companies.json
2016-09-14T17:54:34.032-0700    connected to: localhost
2016-09-14T17:54:37.025-0700    [#########...............] company.companies    30.0MB/74.6MB (40.3%)
2016-09-14T17:54:40.033-0700    [###################.....] company.companies    61.8MB/74.6MB (82.8%)
2016-09-14T17:54:41.274-0700    [########################] company.companies    74.6MB/74.6MB (100.0%)
2016-09-14T17:54:41.274-0700    imported 18801 documents
ahmed@ubuntu:~$
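To double-check the import, count the documents in the new collection; 18801 should come back, matching the mongoimport summary above. Guarded so the snippet is a no-op without a local mongo shell.

```shell
# Count documents in company.companies; expect 18801 after the import.
if command -v mongo >/dev/null 2>&1; then
  mongo company --quiet --eval 'print(db.companies.count())'
fi
```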

Setting up Authentication.

Create the user administrator.

use admin
db.createUser(
  {
    user: "mongoadmin",
    pwd: "ahmed@123",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  }
)
Output.
> use admin
> db.createUser({user:"mongoadmin",pwd:"ahmed@123",roles:[{role:"userAdminAnyDatabase",db:"admin"}]})
Successfully added user: {
    "user" : "mongoadmin",
    "roles" : [
        {
            "role" : "userAdminAnyDatabase",
            "db" : "admin"
        }
    ]
}

Restart the MongoDB instance with access control.

Restart the mongod instance with the --auth command line option or, if using a configuration file, the security.authorization setting.
mongod --auth --port 27017 --dbpath /data/db1
Or Update the configuration /etc/mongod.conf file with below info.
security:
  authorization: enabled
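If you went the configuration-file route, restart the service for the setting to take effect; afterwards an unauthenticated shell can still connect but cannot run privileged commands. Guarded so the snippet is a no-op on a machine without mongod.

```shell
# Restart mongod so security.authorization: enabled takes effect.
if command -v mongod >/dev/null 2>&1; then
  sudo service mongod restart
fi
```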

To authenticate during connection.

mongo --port 27017 -u "mongoadmin" -p "ahmed@123" --authenticationDatabase "admin"

Create additional users as needed for your deployment.

use company
db.createUser(
  {
    user: "ahmed",
    pwd: "ahmed@123",
    roles: [ { role: "readWrite", db: "company" },
             { role: "read", db: "test" } ]
  }
)
Connect and authenticate as ahmed.
mongo --port 27017 -u "ahmed" -p "ahmed@123" --authenticationDatabase "company"

Insert into a collection as ahmed.

> use company
> db.authtesting.insert({x:1,y:1})
WriteResult({ "nInserted" : 1 })
> db.authtesting.findOne()
{ "_id" : ObjectId("57d9f85a3d1dcdf58c16cab3"), "x" : 1, "y" : 1 }
>


from Blogger http://ift.tt/2ddTfaC
via IFTTT

Categories: Others Tags: ,