Archive

Archive for February, 2015

Not enough physical memory is Available – VMware Workstation 10

February 11, 2015 Leave a comment

VMware Workstation 10 Error: Not enough physical memory is available to power this virtual machine

This issue starts after an update.

Issue on Windows 8.1

You attempt to start a virtual machine on Workstation 10 and get a message that there is not enough memory available to run the VM.
Message : Not enough physical memory is available to power this virtual machine with its configured settings.

Solution

  • Shut down all running virtual machines.
  • Close VMware Workstation.
  • Open a Command Prompt in Admin mode.
  • Open config.ini located at C:\ProgramData\VMware\VMware Workstation, for example using Notepad from the command prompt:
    c:\ProgramData\VMware\VMware Workstation> notepad config.ini
  • Insert this line: vmmon.disableHostParameters = TRUE
  • Save and close the file.
  • Reboot Windows.

Error Message

Categories: HOWTOs, Internet

`zabbix-java-gateway` on Centos 6.5

February 9, 2015 Leave a comment

Installing zabbix-java-gateway on Centos 6.5

How does zabbix-java-gateway work?

  1. First we configure the system which needs to be monitored, using JAVA_OPTS.
  2. Next we add a JMX interface in the Zabbix server UI under Hosts.
  3. zabbix-server communicates with zabbix-java-gateway, which in turn communicates with the system/server from which we need to get the JMX data.
  4. JMX is set using JAVA_OPTS.
 [Zabbix-Server] --(port:10053)--> [zabbix-java-gateway] --(port:12345)--> [JMX enabled server, Example:Tomcat/WebServer]

Step 1 : Install zabbix-java-gateway on zabbix-server

 [ahmed@ahmed-server ~]$ sudo yum install zabbix-java-gateway

Step 2 : Configure the host which needs to be monitored with JMX.

Setting Tomcat JMX options. Add the lines below to setenv.sh and save it under apache-tomcat-7/bin/.
When startup.sh is run, these JMX options will be added to the Tomcat server.
NOTE: To make the monitoring secure, use the SSL and authentication options. You can find more information in the links at the end of this post.
The IMPORTANT lines are below. We will be getting data from port 12345.
  -Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=12345 \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false
Here is the complete JAVA_OPTS export. You can ignore the first few lines, which set the heap memory sizes. (Note the space before each trailing backslash; without it the options would run together when the shell joins the lines.)
 export JAVA_OPTS="$JAVA_OPTS \
-server \
-Xms1024m \
-Xmx2048m \
-XX:MaxPermSize=256m \
-XX:MaxNewSize=256m \
-XX:NewSize=256m \
-XX:SurvivorRatio=12 \
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=12345 \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false"

Step 3 : Configuring zabbix-server.

  1. Here we configure the Zabbix server to let it know where zabbix-java-gateway is running.
  2. Since we are running zabbix-java-gateway on the same server as zabbix-server, we will be using the same IP for both.
  3. The only difference is that zabbix-java-gateway will be running on port 10053.
Configuration in zabbix-server.conf. Add the lines below, or rather uncomment them and set the IP/ports.
 ### Option: JavaGateway
# IP address (or hostname) of Zabbix Java gateway.
# Only required if Java pollers are started.
#
# Mandatory: no
# Default:
JavaGateway=10.10.18.27

### Option: JavaGatewayPort
# Port that Zabbix Java gateway listens on.
#
# Mandatory: no
# Range: 1024-32767
# Default:
JavaGatewayPort=10053

### Option: StartJavaPollers
# Number of pre-forked instances of Java pollers.
#
# Mandatory: no
# Range: 0-1000
# Default:
StartJavaPollers=5

Step 4 : Configuring zabbix-java-gateway.

  1. We now set where zabbix-java-gateway will be running and which port it will be listening on.
  2. Configuration in zabbix-java-gateway: the same IP as the zabbix-server, and port 10053.
 ### Option: zabbix.listenIP
# IP address to listen on.
#
# Mandatory: no
# Default:
LISTEN_IP="10.10.18.27"

### Option: zabbix.listenPort
# Port to listen on.
#
# Mandatory: no
# Range: 1024-32767
# Default:
LISTEN_PORT=10053

### Option: zabbix.pidFile
# Name of PID file.
# If omitted, Zabbix Java Gateway is started as a console application.
#
# Mandatory: no
# Default:
# PID_FILE=

PID_FILE="/var/run/zabbix/zabbix_java.pid"

### Option: zabbix.startPollers
# Number of worker threads to start.
#
# Mandatory: no
# Range: 1-1000
# Default:
START_POLLERS=5
More Details.
 https://www.zabbix.com/documentation/2.4/manual/concepts/java
https://www.zabbix.com/documentation/2.4/manual/config/items/itemtypes/jmx_monitoring
Categories: HOWTOs

`haproxy` Setup on Centos 6.5, Kernel 2.6, CPU x86_64

February 9, 2015 Leave a comment

How to setup HAProxy

HAProxy is a reliable, high-performance TCP/HTTP load balancer, and it works nicely with a Deveo cluster setup.

Follow these steps to install on CentOS:

 [ahmed@ahmed-server ~]$ sudo yum install make gcc wget
[ahmed@ahmed-server ~]$ wget http://www.haproxy.org/download/1.5/src/haproxy-1.5.11.tar.gz
[ahmed@ahmed-server ~]$ tar -zxvf haproxy-1.5.11.tar.gz -C /opt
[ahmed@ahmed-server ~]$ cd /opt/haproxy-1.5.11
[ahmed@ahmed-server haproxy-1.5.11]$ sudo make TARGET=linux26 CPU=x86_64
[ahmed@ahmed-server haproxy-1.5.11]$ sudo make install

Follow these steps to create init script:

 [ahmed@ahmed-server ~]$ sudo ln -sf /usr/local/sbin/haproxy /usr/sbin/haproxy
[ahmed@ahmed-server ~]$ sudo cp /opt/haproxy-1.5.11/examples/haproxy.init /etc/init.d/haproxy
[ahmed@ahmed-server ~]$ sudo chmod 755 /etc/init.d/haproxy

Follow these steps to configure haproxy:

 [ahmed@ahmed-server ~]$ sudo mkdir /etc/haproxy
[ahmed@ahmed-server ~]$ sudo cp /opt/haproxy-1.5.11/examples/examples.cfg /etc/haproxy/haproxy.cfg
[ahmed@ahmed-server ~]$ sudo mkdir /var/lib/haproxy
[ahmed@ahmed-server ~]$ sudo touch /var/lib/haproxy/stats
[ahmed@ahmed-server ~]$ sudo useradd haproxy

Finally start the service and enable on boot:

 [ahmed@ahmed-server ~]$ sudo service haproxy check
[ahmed@ahmed-server ~]$ sudo service haproxy start
[ahmed@ahmed-server ~]$ sudo chkconfig haproxy on

Here is a sample haproxy.cfg configuration.

 global
log /dev/log local0
log /dev/log local1 notice
log 127.0.0.1 local2
#chroot /var/lib/haproxy
#stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon

# Default SSL material locations
#ca-base /etc/ssl/certs
#crt-base /etc/ssl/private

# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL).
#ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
#errorfile 400 /etc/haproxy/errors/400.http
#errorfile 403 /etc/haproxy/errors/403.http
#errorfile 408 /etc/haproxy/errors/408.http
#errorfile 500 /etc/haproxy/errors/500.http
#errorfile 502 /etc/haproxy/errors/502.http
#errorfile 503 /etc/haproxy/errors/503.http
#errorfile 504 /etc/haproxy/errors/504.http

frontend localnodes
bind *:9002
mode http
default_backend nodes

backend nodes
mode http
balance roundrobin
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
option httpchk HEAD / HTTP/1.1\r\nHost:localhost
server web01 127.0.0.1:9090 check
server web02 127.0.0.1:9091 check
server web03 127.0.0.1:9092 check

listen stats *:9001
stats enable
stats uri /
stats hide-version
stats auth someuser:password
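The `balance roundrobin` line in the backend above cycles requests across the three web servers. Conceptually it behaves like this small sketch (illustrative only, not HAProxy code):

```javascript
// Illustrative sketch of the `balance roundrobin` policy: each incoming
// request is handed to the next server in the list, wrapping around.
var servers = ['web01', 'web02', 'web03'];
var next = 0;

function pickServer() {
  var s = servers[next];
  next = (next + 1) % servers.length;
  return s;
}

// Four consecutive requests land on web01, web02, web03, then web01 again.
console.log([pickServer(), pickServer(), pickServer(), pickServer()].join(', '));
// web01, web02, web03, web01
```

HAProxy additionally skips servers whose `check` health probe is failing, which this sketch does not model.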

Configuring Logging

If you look at the top of /etc/haproxy/haproxy.cfg, you will see something like the lines below. If you don't see them, then add them at the beginning.

Here is how my config looks.

 global
log /dev/log local0
log /dev/log local1 notice
log 127.0.0.1 local2
If you don't have the line below, then add it.
 global
log 127.0.0.1 local2
This means that HAProxy will send its messages to rsyslog on 127.0.0.1. But by default, rsyslog doesn’t listen on any address.

Let’s edit /etc/rsyslog.conf and uncomment these lines:

 $ModLoad imudp
$UDPServerRun 514
This will make rsyslog listen on UDP port 514 for all IP addresses. Optionally you can limit to 127.0.0.1 by adding:
 $UDPServerAddress 127.0.0.1

Now create a /etc/rsyslog.d/haproxy.conf file containing:

 local2.*    /var/log/haproxy.log

You can of course be more specific and create separate log files according to the level of messages:

 local2.=info     /var/log/haproxy/haproxy-info.log
local2.notice /var/log/haproxy/haproxy-allbutinfo.log

Then restart rsyslog and see that log files are created:

 # service rsyslog restart
Shutting down system logger: [ OK ]
Starting system logger: [ OK ]

# ls -l /var/log/haproxy | grep haproxy
-rw-------. 1 root root 131 3 oct. 10:43 haproxy-allbutinfo.log
-rw-------. 1 root root 106 3 oct. 10:42 haproxy-info.log
Now you can start your debugging session!

More Details.

 https://serversforhackers.com/haproxy/
http://support.deveo.com/knowledgebase/articles/409523-how-to-setup-haproxy
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html
http://www.percona.com/blog/2014/10/03/haproxy-give-me-some-logs-on-centos-6-5/
Categories: HOWTOs, Internet

Using `npm` behind a proxy

February 7, 2015 Leave a comment
To set the configuration permanently, we can do this:
 npm config set proxy http://proxy.company.com:8080
npm config set https-proxy http://proxy.company.com:8080
If you need to specify credentials, they can be passed in the URL using the following syntax:
 http://user_name:password@proxy.company.com:8080
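Note that special characters in the username or password must be percent-encoded before being embedded in the URL. A quick sketch in Node (the credentials here are made up):

```javascript
// Percent-encode proxy credentials before embedding them in the URL.
// 'p@ss:word' is a made-up example password containing special characters.
var user = 'user_name';
var pass = 'p@ss:word';
var proxyUrl = 'http://' + encodeURIComponent(user) + ':' +
               encodeURIComponent(pass) + '@proxy.company.com:8080';
console.log(proxyUrl); // http://user_name:p%40ss%3Aword@proxy.company.com:8080
```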
To redirect traffic over the proxy for individual installs:
 npm --https-proxy=http://proxy.company.com:8080 -g install npmbox
npm --https-proxy=http://proxy.company.com:8080 -g install kafka-node
More Details.
 http://wil.boayue.com/blog/2013/06/14/using-npm-behind-a-proxy/
Categories: HOWTOs, Internet

Sending JSON to NodeJS to Multiple Topics in Kafka.

February 7, 2015 Leave a comment
What are we trying to achieve?
  1. Send JSON from a browser/curl to nodejs.
  2. nodejs will redirect the JSON data to a topic in Kafka based on the URL. Example: URL /upload/topic/A will send the JSON to topic_a in Kafka.
  3. Further processing is done in Kafka.
  4. We can then see the JSON arrive in Kafka, using the kafka-console-consumer.sh script.
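The URL-to-topic routing in point 2 can be sketched as a small helper. This is a hypothetical fragment for illustration; the full json_nodejs_multiple_topics.js comes from git in step 1.

```javascript
// Hypothetical sketch of the URL-to-topic routing: /upload/topic/A -> topic_a.
// Unknown URLs (e.g. /upload/topic/D) return null, matching the ERROR case
// shown in the output of step 4.
function urlToTopic(url) {
  var match = /^\/upload\/topic\/([ABC])$/.exec(url);
  if (!match) return null;
  return 'topic_' + match[1].toLowerCase();
}

console.log(urlToTopic('/upload/topic/A')); // topic_a
console.log(urlToTopic('/upload/topic/D')); // null
```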

Step 1 : Get the json_nodejs_multiple_topics.js from git

Step 2 : Start above script on the nodejs server.

 [nodejs-admin@nodejs nodejs]$ vim json_nodejs_multiple_topics.js
[nodejs-admin@nodejs nodejs]$ node json_nodejs_multiple_topics.js

Step 3 : Execute curl command to send the JSON to nodejs.

 [nodejs-admin@nodejs nodejs]$ curl -H "Content-Type: application/json" -d '{"username":"xyz","password":"xyz"}' http://localhost:8125/upload/topic/A
[nodejs-admin@nodejs nodejs]$ curl -H "Content-Type: application/json" -d '{"username":"abc","password":"xyz"}' http://localhost:8125/upload/topic/B
[nodejs-admin@nodejs nodejs]$ curl -H "Content-Type: application/json" -d '{"username":"efg","password":"xyz"}' http://localhost:8125/upload/topic/C
[nodejs-admin@nodejs nodejs]$ curl -H "Content-Type: application/json" -d '{"username":"efg","password":"xyz"}' http://localhost:8125/upload/topic/D

Step 4 : Output on nodejs console

 [nginx-admin@nginx nodejs]$ node json_nodejs_multiple_topics.js 
For Topic A
{"username":"xyz","password":"xyz"}
{ topic_a: { '0': 16 } }
For Topic B
{"username":"abc","password":"xyz"}
{ topic_b: { '0': 1 } }
For Topic C
{"username":"efg","password":"xyz"}
{ topic_c: { '0': 0 } }
ERROR: Could not Process this URL :/upload/topic/D
{"username":"efg","password":"xyz"}
{"username":"xyz","password":"xyz"} is the request from the curl command.
{ topic_a: { '0': 16 } } is the response from the Kafka cluster confirming that it has received the JSON.

Step 5 : Output on the kafka consumer side.

NOTE : This assumes we have already created the topics in Kafka, using the commands below.
 [kafka-admin@kafka kafka]$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic_a
[kafka-admin@kafka kafka]$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic_b
[kafka-admin@kafka kafka]$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic_c
[kafka-admin@kafka kafka_2.9.2-0.8.2.0]$ bin/kafka-topics.sh --list --zookeeper localhost:2181
topic_a
topic_b
topic_c
[kafka-admin@kafka kafka_2.9.2-0.8.2.0]$
Here is the output after running the curl commands on the nodejs server.
 [kafka-admin@kafka kafka_2.9.2-0.8.2.0]$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic topic_a --from-beginning
{"username":"xyz","password":"xyz"}

[kafka-admin@kafka kafka_2.9.2-0.8.2.0]$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic topic_b --from-beginning
{"username":"abc","password":"xyz"}

[kafka-admin@kafka kafka_2.9.2-0.8.2.0]$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic topic_c --from-beginning
{"username":"efg","password":"xyz"}
{"username":"xyz","password":"xyz"} is the data received from the nodejs server.
{"username":"abc","password":"xyz"} is the data received from the nodejs server.
{"username":"efg","password":"xyz"} is the data received from the nodejs server.
More Details.
 http://whatizee.blogspot.in/2015/02/installing-kafka-single-node-quick-start.html
http://whatizee.blogspot.in/2015/02/installing-nodejs-on-centos-66.html
http://whatizee.blogspot.in/2015/02/sending-json-nodejs-kafka.html
https://github.com/zubayr/kafka-nodejs/blob/master/send_json_multiple_topics/README.md
Categories: BigData, HOWTOs

Sending JSON -> NodeJS -> Kafka.

February 6, 2015 Leave a comment
What are we trying to achieve?
  1. Send JSON from a browser/curl to nodejs.
  2. nodejs will redirect the JSON data to Kafka.
  3. Further processing is done in Kafka.
  4. We can then see the JSON arrive via the kafka-console-consumer.sh script.

Step 1 : Create a script called json_nodejs_kafka.js with the script below.

 /*
Getting some 'http' power.
*/
var http = require('http');

/*
Let's create a server to wait for requests at
http://localhost:8125/upload
*/
http.createServer(function (request, response) {

  /*
  Set the response content type to JSON.
  */
  response.writeHead(200, {"Content-Type": "application/json"});

  /*
  request.on waits for data to arrive.
  */
  request.on('data', function (chunk) {

    /*
    CHUNK is what we receive from the client.
    For our request we are assuming it is going to be JSON data.
    We print it here on the console.
    */
    console.log(chunk.toString('utf8'));

    /*
    Using kafka-node - a really nice library.
    Create a producer and connect to Zookeeper to send the payloads.
    */
    var kafka = require('kafka-node'),
        Producer = kafka.Producer,
        client = new kafka.Client('kafka:2181'),
        producer = new Producer(client);

    /*
    Creating a payload, which takes the below information:
    'topic'     --> the topic we have created in kafka.
    'messages'  --> data which needs to be sent to kafka (JSON in our case).
    'partition' --> which partition we should send the request to.
    If there are multiple partitions, we can optimize the code here
    to send requests to different partitions.
    */
    var payloads = [
      { topic: 'test', messages: chunk.toString('utf8'), partition: 0 },
    ];

    /*
    producer is 'ready' to send the payload to kafka.
    */
    producer.on('ready', function () {
      producer.send(payloads, function (err, data) {
        console.log(data);
      });
    });

    /*
    In case we have some error.
    */
    producer.on('error', function (err) {});

  });

  /*
  End of request.
  */
  response.end();

/*
Listen on port 8125.
*/
}).listen(8125);

Step 2 : Start above script on the nodejs server.

[nodejs-admin@nodejs nodejs]$ vim json_nodejs_kafka.js
[nodejs-admin@nodejs nodejs]$ node json_nodejs_kafka.js

Step 3 : Execute curl command to send the JSON to nodejs.

[nodejs-admin@nodejs nodejs]$ curl -H "Content-Type: application/json" -d '{"username":"xyz","password":"xyz"}' http://localhost:8125/upload

Step 4 : Output on nodejs console

[nodejs-admin@nodejs nodejs]$ node json_nodejs_kafka.js 
{"username":"xyz","password":"xyz"}
{ test: { '0': 29 } }
{"username":"xyz","password":"xyz"} is the request from the curl command.
{ test: { '0': 29 } } is the response from the Kafka cluster confirming that it has received the JSON.

Step 5 : Output on the kafka consumer side.

[kafka-admin@kafka kafka_2.9.2-0.8.2.0]$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
{"username":"xyz","password":"xyz"}
{"username":"xyz","password":"xyz"} is the data received from the nodejs server.
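One thing the script above does not do is validate the incoming body; it forwards whatever arrives to Kafka. A minimal guard could parse first and forward only valid JSON. This is a sketch, not part of the original script:

```javascript
// Sketch: parse the body before forwarding, so malformed payloads
// can be rejected instead of being sent to Kafka as-is.
function safeParse(body) {
  try {
    return JSON.parse(body);
  } catch (e) {
    return null; // not valid JSON; the caller can reject the request
  }
}

console.log(safeParse('{"username":"xyz","password":"xyz"}').username); // xyz
console.log(safeParse('not json')); // null
```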
Categories: BigData, HOWTOs

NodeJS Kafka Producer – Using `kafka-node`

February 6, 2015 Leave a comment
Now that we have Kafka and NodeJS ready, let's send some data to our Kafka cluster.
Below is basic producer code.
Server details:
  1. nodejs is the nodejs server.
  2. kafka is the kafka server (single node).

Step 1: Copy the below script in a file called producer_nodejs.js.

 /*
Basic producer to send data to kafka from nodejs.
More Information Here : https://www.npmjs.com/package/kafka-node
*/

// Using kafka-node - a really nice library.
// Create a producer and connect to Zookeeper to send the payloads.
var kafka = require('kafka-node'),
    Producer = kafka.Producer,
    client = new kafka.Client('kafka:2181'),
    producer = new Producer(client);

/*
Creating a payload, which takes the below information:
'topic'     --> the topic we have created in kafka. (test)
'messages'  --> data which needs to be sent to kafka. (JSON in our case)
'partition' --> which partition we should send the request to. (default)

Example command to create a topic in kafka:
[kafka@kafka kafka]$ bin/kafka-topics.sh \
    --create --zookeeper localhost:2181 \
    --replication-factor 1 \
    --partitions 1 \
    --topic test

If there are multiple partitions, we can optimize the code here
to send requests to different partitions.
*/
var payloads = [
  { topic: 'test', messages: 'This is the First Message I am sending', partition: 0 },
];

// producer is 'ready' to send the payload to kafka.
producer.on('ready', function () {
  producer.send(payloads, function (err, data) {
    console.log(data);
  });
});

// In case we have some error.
producer.on('error', function (err) {});

Step 2 : Start the Kafka cluster, as we already did in the Kafka installation post. We assume the topic test has been created.

Step 3 : Start the consumer service as in the below command.

 [kafka-admin@kafka kafka]$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

Step 4 : Execute the command below. This will send the message This is the First Message I am sending to the Kafka consumer.

 [nodejs-admin@nodejs nodejs]$ node producer_nodejs.js

Step 5 : Check on the consumer; you will see the message sent from nodejs.

 [kafka-admin@kafka kafka_2.9.2-0.8.2.0]$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
This is a message
This is another message here
This is the First Message I am sending
Categories: BigData, HOWTOs