Archive for the ‘HOWTOs’ Category

Creating Documents Using pandoc

August 17, 2016
Pandoc is an open-source utility for creating documents from markdown. It can produce PDF, docx, HTML, and other formats, and it can also convert between formats: HTML to docx, HTML to PDF, markdown to PDF, and many more.
If you need to convert files from one markup format into another, pandoc is your swiss-army knife. Pandoc can convert documents in markdown, reStructuredText, textile, HTML, DocBook, LaTeX, MediaWiki markup, TWiki markup, OPML, Emacs Org-Mode, Txt2Tags, Microsoft Word docx, LibreOffice ODT, EPUB, or Haddock markup to
  • HTML formats: XHTML, HTML5, and HTML slide shows using Slidy, reveal.js, Slideous, S5, or DZSlides.
  • Word processor formats: Microsoft Word docx, OpenOffice/LibreOffice ODT, OpenDocument XML
  • Ebooks: EPUB version 2 or 3, FictionBook2
  • Documentation formats: DocBook, TEI Simple, GNU TexInfo, Groff man pages, Haddock markup
  • Page layout formats: InDesign ICML
  • Outline formats: OPML
  • TeX formats: LaTeX, ConTeXt, LaTeX Beamer slides
  • PDF via LaTeX
  • Lightweight markup formats: Markdown (including CommonMark), reStructuredText, AsciiDoc, MediaWiki markup, DokuWiki markup, Emacs Org-Mode, Textile
  • Custom formats: custom writers can be written in Lua.
Pandoc understands a number of useful markdown syntax extensions, including document metadata (title, author, date); footnotes; tables; definition lists; superscript and subscript; strikeout; enhanced ordered lists (start number and numbering style are significant); running example lists; delimited code blocks with syntax highlighting; smart quotes, dashes, and ellipses; markdown inside HTML blocks; and inline LaTeX. If strict markdown compatibility is desired, all of these extensions can be turned off.
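
For example, a minimal sketch converting a markdown file to standalone HTML5, and, if strict compatibility is desired, converting the same file with the extensions turned off (`sample.md` is a hypothetical file name):
[ahmed@mylaptop ~]$ pandoc -f markdown -t html5 -s sample.md -o sample.html
[ahmed@mylaptop ~]$ pandoc -f markdown_strict -t html5 -s sample.md -o sample-strict.html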

Installing pandoc on CentOS / Ubuntu

On CentOS/RHEL
[ahmed@mylaptop ~]$ sudo yum install pandoc
[ahmed@mylaptop ~]$ sudo yum install texlive
On Ubuntu
[ahmed@mylaptop ~]$ sudo apt-get install pandoc
[ahmed@mylaptop ~]$ sudo apt-get install texlive
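To verify the installation, check the version:
[ahmed@mylaptop ~]$ pandoc --version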

Converting markdown to PDF format

Create a sample markdown file.
[ahmed@mylaptop ~]$ mkdir markdown pdf
[ahmed@mylaptop ~]$ cat markdown/2016-08-15-firstmarkdownfile.md

# How To Configure Swappiness

Swappiness is a Linux kernel parameter that controls the relative weight given to swapping out runtime memory, as opposed to dropping pages from the system page cache. Swappiness can be set to values between 0 and 100 inclusive. A low value causes the kernel to avoid swapping; a higher value causes the kernel to try to use swap space. The default value is 60; for most desktop systems, setting it to 100 may affect overall performance, whereas setting it lower (even 0) may decrease response latency.

    Value                     Strategy
    vm.swappiness = 0         The kernel will swap only to avoid an out-of-memory condition.
                                  See the "VM Sysctl documentation".
    vm.swappiness = 1         Kernel version 3.5 and over, as well as kernel version 2.6.32-303
                                  and over: minimum amount of swapping without disabling it entirely.
    vm.swappiness = 10        This value is sometimes recommended to improve performance
                                  when sufficient memory exists in a system.
    vm.swappiness = 60        The default value.
    vm.swappiness = 100       The kernel will swap aggressively.

With kernel version `3.5` and over, as well as kernel version `2.6.32-303` and over, it is likely better to use `1` for cases where `0` used to be optimal.
To temporarily set the swappiness in Linux, write the desired value (e.g. 10) to `/proc/sys/vm/swappiness` using the following command, running as the root user.

    #  Set the swappiness value as root
    echo 10 > /proc/sys/vm/swappiness

    #  Alternatively, run this 
    sysctl -w vm.swappiness=10

    #  Verify the change
    cat /proc/sys/vm/swappiness
    10

    #  Alternatively, verify the change
    sysctl vm.swappiness
    vm.swappiness = 10

To find the current swappiness settings, type:

    cat /proc/sys/vm/swappiness
    60

Swappiness can be a value from 0 to 100.

1. Swappiness near 100 means that the operating system will swap often, and usually too soon.
2. Although swap provides extra resources, RAM is much faster than swap space. Any time something is moved from RAM to swap, it slows down.

A swappiness value of 0 means that the operating system will only rely on swap when it absolutely needs to. We can adjust the swappiness with the sysctl command:

    sysctl vm.swappiness=10
    vm.swappiness = 10

If we check the system swappiness again, we can confirm that the setting was applied:

    cat /proc/sys/vm/swappiness
    10

To make changes permanent, you can add the setting to the /etc/sysctl.conf file:

    sudo nano /etc/sysctl.conf

Add the below line.

    #  Search for the vm.swappiness setting.  Uncomment and change it as necessary.
    vm.swappiness=10
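
To apply the values from `/etc/sysctl.conf` without a reboot, reload the file as root:

    sysctl -p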
This is our sample markdown file above. Now we will convert it to PDF using the below script.
[ahmed@mylaptop ~]$ cd pdf
[ahmed@mylaptop pdf]$ cat create_pdf_from_md.sh
#!/bin/bash

for markdown_file in ../markdown/201*.md
do
    input_file=$markdown_file
    # Strip the directory and the .md extension, then add .pdf
    output_file=$(basename "$markdown_file" .md).pdf

    if [ -f "$output_file" ]; then
        echo "File '$output_file' already converted to PDF."
    else
        pandoc -f markdown -t latex "$input_file" -o "$output_file" \
            -V geometry:"left=2.0cm, right=2.0cm, top=1.0cm, bottom=1.0cm" \
            --latex-engine=xelatex
    fi
done
Now let's execute the script.
[ahmed@mylaptop pdf]$ sh create_pdf_from_md.sh
We are done. Here is how the generated PDF looks.
(Screenshot: generated PDF demo.)

Converting HTML to docx format

Creating a document file using pandoc:
pandoc -f html -t docx -o chef-server-setup.docx http://zubayr.github.io/chef-server-setup/
This will create a docx file from the HTML link.
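The reverse direction works the same way; for example, a quick sketch converting a local docx file back to markdown (`input.docx` is a hypothetical file name):
pandoc -f docx -t markdown input.docx -o output.md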
Categories: HOWTOs, Others

Not enough physical memory is Available – VMware Workstation 10

February 11, 2015

VMware Workstation 10 Error: Not enough physical memory is available to power this virtual machine

This issue started after an update.

Issue on Windows 8.1

You attempt to start a virtual machine running on Workstation 10 and you get a message that there is not enough memory available to run the VM.
Message : Not enough physical memory is available to power this virtual machine with its configured settings.

Solution

  • Shut down all running virtual machines.
  • Close VMware Workstation.
  • Open a command prompt in Admin mode.
  • Open config.ini, located at C:\ProgramData\VMware\VMware Workstation, using notepad from the command prompt:
    c:\ProgramData\VMware\VMware Workstation> notepad config.ini
  • Insert this line: vmmon.disableHostParameters = TRUE
  • Save and close the file.
  • Reboot Windows.

Error Message

(Screenshot of the error message dialog.)
Categories: HOWTOs, Internet

`zabbix-java-gateway` on CentOS 6.5

February 9, 2015

Installing zabbix-java-gateway on CentOS 6.5

How does zabbix-java-gateway work?

  1. First we configure the system which needs to be monitored, using the JAVA_OPTS.
  2. Next we add a JMX interface in the Zabbix server UI, under hosts.
  3. zabbix-server communicates with zabbix-java-gateway, which in turn communicates with the system/server from which we need to get the JMX data.
  4. JMX is set using the JAVA_OPTS.
 [Zabbix-Server] --(port:10053)--> [zabbix-java-gateway] --(port:12345)--> [JMX enabled server, Example:Tomcat/WebServer]

Step 1 : Install zabbix-java-gateway on zabbix-server

 [ahmed@ahmed-server ~]$ sudo yum install zabbix-java-gateway

Step 2 : Configure the host which needs to be monitored with JMX.

Setting Tomcat JMX options. Add the below lines to setenv.sh and save it under apache-tomcat-7/bin/.
When startup.sh is run, these JMX options will be added to the Tomcat server.
NOTE: To make the monitoring secure, use the SSL and authentication options. You can find more information in the links at the end of this post.
The IMPORTANT lines are below. We will be getting data from port 12345.
 -Dcom.sun.management.jmxremote\
 -Dcom.sun.management.jmxremote.port=12345\
 -Dcom.sun.management.jmxremote.authenticate=false\
 -Dcom.sun.management.jmxremote.ssl=false
Here are the complete JAVA_OPTS; you can ignore the first few lines, which set the heap memory size.
 export JAVA_OPTS="$JAVA_OPTS\
-server\
-Xms1024m\
-Xmx2048m\
-XX:MaxPermSize=256m\
-XX:MaxNewSize=256m\
-XX:NewSize=256m\
-XX:SurvivorRatio=12\
-Dcom.sun.management.jmxremote\
-Dcom.sun.management.jmxremote.port=12345\
-Dcom.sun.management.jmxremote.authenticate=false\
-Dcom.sun.management.jmxremote.ssl=false"
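
Before pointing Zabbix at the host, it is worth confirming that Tomcat is actually listening on the JMX port after a restart (a quick check; 12345 matches the JAVA_OPTS above):
 [ahmed@ahmed-server ~]$ sudo netstat -tlnp | grep 12345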

Step 3 : Configuring zabbix-server.

  1. Here we configure the Zabbix server to let it know where zabbix-java-gateway is running.
  2. Since we are running zabbix-java-gateway on the same server as zabbix-server, we will be using the same IP for both.
  3. The only difference is that zabbix-java-gateway will be running on port 10053.
Configuration in zabbix_server.conf. Rather than adding new lines, un-comment the existing options and set the IP/port values as below.
 ### Option: JavaGateway
# IP address (or hostname) of Zabbix Java gateway.
# Only required if Java pollers are started.
#
# Mandatory: no
# Default:
JavaGateway=10.10.18.27

### Option: JavaGatewayPort
# Port that Zabbix Java gateway listens on.
#
# Mandatory: no
# Range: 1024-32767
# Default:
JavaGatewayPort=10053

### Option: StartJavaPollers
# Number of pre-forked instances of Java pollers.
#
# Mandatory: no
# Range: 0-1000
# Default:
StartJavaPollers=5
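
Restart zabbix-server so the JavaGateway settings take effect (assuming the standard init script from the package):
 [ahmed@ahmed-server ~]$ sudo service zabbix-server restart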

Step 4 : Configuring zabbix-java-gateway.

  1. We now set where zabbix-java-gateway will be running and which port it will be listening on.
  2. Configuration for zabbix-java-gateway: here, the same IP as the zabbix-server, and port 10053.
 ### Option: zabbix.listenIP
# IP address to listen on.
#
# Mandatory: no
# Default:
LISTEN_IP="10.10.18.27"

### Option: zabbix.listenPort
# Port to listen on.
#
# Mandatory: no
# Range: 1024-32767
# Default:
LISTEN_PORT=10053

### Option: zabbix.pidFile
# Name of PID file.
# If omitted, Zabbix Java Gateway is started as a console application.
#
# Mandatory: no
# Default:
# PID_FILE=

PID_FILE="/var/run/zabbix/zabbix_java.pid"

### Option: zabbix.startPollers
# Number of worker threads to start.
#
# Mandatory: no
# Range: 1-1000
# Default:
START_POLLERS=5
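
Then start the gateway and enable it on boot (assuming the init script shipped with the zabbix-java-gateway package):
 [ahmed@ahmed-server ~]$ sudo service zabbix-java-gateway start
[ahmed@ahmed-server ~]$ sudo chkconfig zabbix-java-gateway on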
More Details.
 https://www.zabbix.com/documentation/2.4/manual/concepts/java
https://www.zabbix.com/documentation/2.4/manual/config/items/itemtypes/jmx_monitoring
Categories: HOWTOs

`haproxy` Setup on CentOS 6.5, Kernel 2.6, CPU x86_64

February 9, 2015

How to set up HAProxy

HAProxy is a reliable, high-performance TCP/HTTP load balancer, and it works nicely with a Deveo cluster setup.

Follow these steps to install on CentOS:

 [ahmed@ahmed-server ~]$ sudo yum install make gcc wget
[ahmed@ahmed-server ~]$ wget http://www.haproxy.org/download/1.5/src/haproxy-1.5.11.tar.gz
[ahmed@ahmed-server ~]$ tar -zxvf haproxy-1.5.11.tar.gz -C /opt
[ahmed@ahmed-server ~]$ cd /opt/haproxy-1.5.11
[ahmed@ahmed-server haproxy-1.5.11]$ sudo make TARGET=linux26 CPU=x86_64
[ahmed@ahmed-server haproxy-1.5.11]$ sudo make install

Follow these steps to create init script:

 [ahmed@ahmed-server ~]$ sudo ln -sf /usr/local/sbin/haproxy /usr/sbin/haproxy
[ahmed@ahmed-server ~]$ sudo cp /opt/haproxy-1.5.11/examples/haproxy.init /etc/init.d/haproxy
[ahmed@ahmed-server ~]$ sudo chmod 755 /etc/init.d/haproxy

Follow these steps to configure haproxy:

 [ahmed@ahmed-server ~]$ sudo mkdir /etc/haproxy
[ahmed@ahmed-server ~]$ sudo cp /opt/haproxy-1.5.11/examples/examples.cfg /etc/haproxy/haproxy.cfg
[ahmed@ahmed-server ~]$ sudo mkdir /var/lib/haproxy
[ahmed@ahmed-server ~]$ sudo touch /var/lib/haproxy/stats
[ahmed@ahmed-server ~]$ sudo useradd haproxy

Finally start the service and enable on boot:

 [ahmed@ahmed-server ~]$ sudo service haproxy check
[ahmed@ahmed-server ~]$ sudo service haproxy start
[ahmed@ahmed-server ~]$ sudo chkconfig haproxy on
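
You can also validate the configuration file directly with the binary; -c runs a syntax check without starting the proxy:
 [ahmed@ahmed-server ~]$ sudo haproxy -f /etc/haproxy/haproxy.cfg -c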

Sample haproxy.cfg configuration:

 global
    log /dev/log local0
    log /dev/log local1 notice
    log 127.0.0.1 local2
    #chroot /var/lib/haproxy
    #stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    #ca-base /etc/ssl/certs
    #crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    #ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    #errorfile 400 /etc/haproxy/errors/400.http
    #errorfile 403 /etc/haproxy/errors/403.http
    #errorfile 408 /etc/haproxy/errors/408.http
    #errorfile 500 /etc/haproxy/errors/500.http
    #errorfile 502 /etc/haproxy/errors/502.http
    #errorfile 503 /etc/haproxy/errors/503.http
    #errorfile 504 /etc/haproxy/errors/504.http

frontend localnodes
    bind *:9002
    mode http
    default_backend nodes

backend nodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server web01 127.0.0.1:9090 check
    server web02 127.0.0.1:9091 check
    server web03 127.0.0.1:9092 check

listen stats *:9001
    stats enable
    stats uri /
    stats hide-version
    stats auth someuser:password
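
With the service running, a quick smoke test of the frontend and the stats page, using the ports from the sample configuration above:
 [ahmed@ahmed-server ~]$ curl -I http://localhost:9002/
[ahmed@ahmed-server ~]$ curl -u someuser:password http://localhost:9001/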

Configuring Logging

If you look at the top of /etc/haproxy/haproxy.cfg, you will see something like the below. If you don't see it, then add the lines at the beginning.

Here is how my conf looks:

 global
    log /dev/log local0
    log /dev/log local1 notice
    log 127.0.0.1 local2

If you don't have the below line, then add it:

 global
    log 127.0.0.1 local2
This means that HAProxy will send its messages to rsyslog on 127.0.0.1. But by default, rsyslog doesn’t listen on any address.

Let’s edit /etc/rsyslog.conf and uncomment these lines:

 $ModLoad imudp
$UDPServerRun 514
This will make rsyslog listen on UDP port 514 for all IP addresses. Optionally you can limit to 127.0.0.1 by adding:
 $UDPServerAddress 127.0.0.1

Now create a /etc/rsyslog.d/haproxy.conf file containing:

 local2.*    /var/log/haproxy.log

You can of course be more specific and create separate log files according to the level of messages:

 local2.=info      /var/log/haproxy/haproxy-info.log
local2.notice     /var/log/haproxy/haproxy-allbutinfo.log

Then restart rsyslog and see that log files are created:

 # service rsyslog restart
Shutting down system logger: [ OK ]
Starting system logger: [ OK ]

# ls -l /var/log/haproxy | grep haproxy
-rw-------. 1 root root 131 3 oct. 10:43 haproxy-allbutinfo.log
-rw-------. 1 root root 106 3 oct. 10:42 haproxy-info.log
Now you can start your debugging session!
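
For example, send a request through the frontend from the sample configuration and check that it shows up in the log:

 # curl -I http://localhost:9002/
# tail /var/log/haproxy/haproxy-info.log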

More Details.

 https://serversforhackers.com/haproxy/
http://support.deveo.com/knowledgebase/articles/409523-how-to-setup-haproxy
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html
http://www.percona.com/blog/2014/10/03/haproxy-give-me-some-logs-on-centos-6-5/
Categories: HOWTOs, Internet

Using `npm` behind a proxy

February 7, 2015
To permanently set the configuration, we can do this:
 npm config set proxy http://proxy.company.com:8080
npm config set https-proxy http://proxy.company.com:8080
If you need to specify credentials, they can be passed in the URL using the following syntax:
 http://user_name:password@proxy.company.com:8080
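For example, to set the proxy permanently with credentials (user_name and password are placeholders):
 npm config set proxy http://user_name:password@proxy.company.com:8080
npm config set https-proxy http://user_name:password@proxy.company.com:8080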
To redirect traffic over the proxy for individual commands:
 npm --https-proxy=http://proxy.company.com:8080 -g install npmbox
npm --https-proxy=http://proxy.company.com:8080 -g install kafka-node
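To remove the proxy settings later:
 npm config delete proxy
npm config delete https-proxy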
More Details.
 http://wil.boayue.com/blog/2013/06/14/using-npm-behind-a-proxy/
Categories: HOWTOs, Internet

Sending JSON to NodeJS to Multiple Topics in Kafka.

February 7, 2015
What are we trying to achieve?
  1. Send JSON from a browser/curl to nodejs.
  2. nodejs will redirect the JSON data to a Kafka topic based on the URL. Example: URL /upload/topic/A will send the JSON to topic_a in Kafka.
  3. Further processing is done on Kafka.
  4. We can then see the JSON arriving in Kafka, using the kafka-console-consumer.sh script.

Step 1 : Get the json_nodejs_multiple_topics.js from git
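
A sketch of fetching it, assuming the repository layout from the GitHub link at the end of this post, and installing the kafka-node dependency it uses:

 [nodejs-admin@nodejs nodejs]$ git clone https://github.com/zubayr/kafka-nodejs.git
[nodejs-admin@nodejs nodejs]$ cp kafka-nodejs/send_json_multiple_topics/json_nodejs_multiple_topics.js .
[nodejs-admin@nodejs nodejs]$ npm install kafka-node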

Step 2 : Start the above script on the nodejs server.

 [nodejs-admin@nodejs nodejs]$ vim json_nodejs_multiple_topics.js
[nodejs-admin@nodejs nodejs]$ node json_nodejs_multiple_topics.js

Step 3 : Execute the curl commands to send the JSON to nodejs.

 [nodejs-admin@nodejs nodejs]$ curl -H "Content-Type: application/json" -d '{"username":"xyz","password":"xyz"}' http://localhost:8125/upload/topic/A
[nodejs-admin@nodejs nodejs]$ curl -H "Content-Type: application/json" -d '{"username":"abc","password":"xyz"}' http://localhost:8125/upload/topic/B
[nodejs-admin@nodejs nodejs]$ curl -H "Content-Type: application/json" -d '{"username":"efg","password":"xyz"}' http://localhost:8125/upload/topic/C
[nodejs-admin@nodejs nodejs]$ curl -H "Content-Type: application/json" -d '{"username":"efg","password":"xyz"}' http://localhost:8125/upload/topic/D

Step 4 : Output on nodejs console

 [nginx-admin@nginx nodejs]$ node json_nodejs_multiple_topics.js 
For Topic A
{"username":"xyz","password":"xyz"}
{ topic_a: { '0': 16 } }
For Topic B
{"username":"abc","password":"xyz"}
{ topic_b: { '0': 1 } }
For Topic C
{"username":"efg","password":"xyz"}
{ topic_c: { '0': 0 } }
ERROR: Could not Process this URL :/upload/topic/D
{"username":"efg","password":"xyz"}
{"username":"xyz","password":"xyz"} request from the curl command.
{ topic_a: { '0': 16 } } response from the kafka cluster that, it has received the json.

Step 5 : Output on the kafka consumer side.

NOTE : This assumes that we have already created the topics in Kafka, using the below commands.
 [kafka-admin@kafka kafka]$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic_a
[kafka-admin@kafka kafka]$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic_b
[kafka-admin@kafka kafka]$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic_c
[kafka-admin@kafka kafka_2.9.2-0.8.2.0]$ bin/kafka-topics.sh --list --zookeeper localhost:2181
topic_a
topic_b
topic_c
[kafka-admin@kafka kafka_2.9.2-0.8.2.0]$
Here is the output after running the curl commands on the nodejs server.
 [kafka-admin@kafka kafka_2.9.2-0.8.2.0]$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic topic_a --from-beginning
{"username":"xyz","password":"xyz"}

[kafka-admin@kafka kafka_2.9.2-0.8.2.0]$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic topic_b --from-beginning
{"username":"abc","password":"xyz"}

[kafka-admin@kafka kafka_2.9.2-0.8.2.0]$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic topic_c --from-beginning
{"username":"efg","password":"xyz"}
{"username":"xyz","password":"xyz"} data received from nodejs server.
{"username":"abc","password":"xyz"} data received from nodejs server.
{"username":"efg","password":"xyz"} data received from nodejs server.
More Details.
 http://whatizee.blogspot.in/2015/02/installing-kafka-single-node-quick-start.html
http://whatizee.blogspot.in/2015/02/installing-nodejs-on-centos-66.html
http://whatizee.blogspot.in/2015/02/sending-json-nodejs-kafka.html
https://github.com/zubayr/kafka-nodejs/blob/master/send_json_multiple_topics/README.md
Categories: BigData, HOWTOs

Sending JSON -> NodeJS -> Kafka.

February 6, 2015
What are we trying to achieve?
  1. Send JSON from a browser/curl to nodejs.
  2. nodejs will redirect the JSON data to Kafka.
  3. Further processing is done on Kafka.
  4. We can then see the JSON arrive via the kafka-console-consumer.sh script.

Step 1 : Create a script called json_nodejs_kafka.js with the below contents.

/*
Getting some 'http' power.
*/
var http = require('http');

/*
Setting where we are expecting the request to arrive.
http://localhost:8125/upload
*/
var request = {
    hostname: 'localhost',
    port: 8125,
    path: '/upload',
    method: 'GET'
};

/*
Let's create a server to wait for requests.
*/
http.createServer(function (request, response) {

    /*
    Making sure we respond with a JSON content type.
    */
    response.writeHead(200, {"Content-Type": "application/json"});

    /*
    request.on waits for data to arrive.
    */
    request.on('data', function (chunk) {

        /*
        CHUNK which we receive from the clients.
        For our request we are assuming it is going to be JSON data.
        We print it here on the console.
        */
        console.log(chunk.toString('utf8'));

        /*
        Using kafka-node - really nice library.
        Create a producer and connect to Zookeeper to send the payloads.
        */
        var kafka = require('kafka-node'),
            Producer = kafka.Producer,
            client = new kafka.Client('kafka:2181'),
            producer = new Producer(client);

        /*
        Creating a payload, which takes the below information:
        'topic'     --> the topic we have created in kafka.
        'messages'  --> data which needs to be sent to kafka (JSON in our case).
        'partition' --> which partition we should send the request to.
        If there are multiple partitions, then we can optimize the code here,
        so that we send requests to different partitions.
        */
        var payloads = [
            { topic: 'test', messages: chunk.toString('utf8'), partition: 0 },
        ];

        /*
        producer 'on' ready: send the payload to kafka.
        */
        producer.on('ready', function () {
            producer.send(payloads, function (err, data) {
                console.log(data);
            });
        });

        /*
        If we have some error.
        */
        producer.on('error', function (err) {});

    });

    /*
    End of request.
    */
    response.end();

/*
Listen on port 8125.
*/
}).listen(8125);
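
The script depends on the kafka-node library; install it in the working directory before running the script (if you are behind a proxy, see the npm post above):

[nodejs-admin@nodejs nodejs]$ npm install kafka-node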

Step 2 : Start the above script on the nodejs server.

[nodejs-admin@nodejs nodejs]$ vim json_nodejs_kafka.js
[nodejs-admin@nodejs nodejs]$ node json_nodejs_kafka.js

Step 3 : Execute the curl command to send the JSON to nodejs.

[nodejs-admin@nodejs nodejs]$ curl -H "Content-Type: application/json" -d '{"username":"xyz","password":"xyz"}' http://localhost:8125/upload

Step 4 : Output on nodejs console

[nodejs-admin@nodejs nodejs]$ node json_nodejs_kafka.js 
{"username":"xyz","password":"xyz"}
{ test: { '0': 29 } }
{"username":"xyz","password":"xyz"} request from the curl command.
{ test: { '0': 29 } } response from the kafka cluster that, it has received the json.

Step 5 : Output on the kafka consumer side.

[kafka-admin@kafka kafka_2.9.2-0.8.2.0]$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
{"username":"xyz","password":"xyz"}
{"username":"xyz","password":"xyz"} data received from nodejs server.
Categories: BigData, HOWTOs