07 December 2013

Cloud Tip: Automate Deployment easily with Docker

Hi Guys,

After a long gap, here is another post from my tips collection, this time on automating deployment with Docker.

A little introduction to Docker:

Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere.
Docker containers can encapsulate any payload, and will run consistently on and between virtually any server. The same container that a developer builds and tests on a laptop will run at scale, in production, on VMs, bare-metal servers, OpenStack clusters, public instances, or combinations of the above.

Common use cases for Docker include:
  • Automating the packaging and deployment of applications
  • Creation of lightweight, private PaaS environments
  • Automated testing and continuous integration/deployment
  • Deploying and scaling web apps, databases and backend services
  • Please note that Docker is currently under heavy development. It should not be used in production (yet).


For an introduction to the platform, please visit:
-   http://www.docker.io/gettingstarted/

An interactive command-line tutorial is also available on the getting started page.
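
To give a flavour of what automating deployment looks like in practice, here is a minimal sketch of a Dockerfile and the commands to build and run it. This is only an illustration: the base image, port, and application files are assumptions, not from any real project.

   # Dockerfile (hypothetical example for a simple Python web app)
   FROM ubuntu:12.04
   RUN apt-get update && apt-get install -y python python-pip
   ADD . /app
   RUN pip install -r /app/requirements.txt
   EXPOSE 8000
   CMD ["python", "/app/server.py"]

Build the image once, then run the same container anywhere Docker is installed:

   docker build -t myapp .
   docker run -p 8000:8000 myapp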

Hope this gives you a little introduction to Docker. I shall write more developer tips on this later. Keep watching for my updates.

Cheers,
Sajan

04 December 2012

General Tip: Gmail Search Tips - Gmail Master

Hi Folks,

Today let's look at some Gmail search tips:

Google’s email service Gmail, or Google Mail, supports a variety of advanced search parameters that have not been documented very well until now. For instance, it recently became known that emails can be filtered by size using the size: parameter in the search form on the Gmail website.
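
A few example queries to illustrate (size: takes a value in bytes, while the related larger: and smaller: operators accept suffixes such as M; the sender name is made up):

   size:5242880                  messages larger than roughly 5 MB
   larger:10M has:attachment     messages over 10 MB that carry attachments
   smaller:1M from:newsletter    small messages from a particular sender
   larger:5M older_than:1y       large messages older than one year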



30 November 2012

Cloud Tip: What do you mean by BigData and why?


Hi Folks,

BigData in simple words:

Some people think Big Data is simply more information than can be stored on a personal computer. Others think of it as overlapping sets of data that reveal unseen patterns, helping us understand our world, and ourselves, in new ways.
Still others think that our smartphones are turning each of us into human sensors and that our planet is developing a nervous system. Below are a few facts on how Big Data is shaping your life:



  • Each of us leaves a trail of digital exhaust, from texts to GPS data, which will live on forever.
  • A person today processes more data in a single day than a person in the 1500s did in a lifetime.
  • Every object on Earth will soon be generating data, including our homes, our cars, and yes, even our bodies.

More information is available at Student Face of BigData.

Click on the "Let's go" link and see, with the help of infographics, how much data is being generated day by day.

Cheers! 

Courtesy goes to : Student Face of BigData

Dev Tip: Fastest Key-Value library from Google

Hi Folks,

Today let's look into a key-value storage library called "LevelDB".

LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values. LevelDB is based on the LSM tree (Log-Structured Merge-Tree) and uses SSTables and a MemTable for the database implementation. It is written in C++ and available under a BSD license. LevelDB treats keys and values as arbitrary byte arrays and stores keys in sorted order. It uses Snappy compression for the data. Reads and writes can run concurrently against the same database, but writes perform best from a single thread, whereas reads scale with the number of cores.

A Java package built with a JNI wrapper is available for LevelDB - stable version (forked for further development; updates may not be reflected on the public Git repository because of security issues).

Features:
  • Keys and values are arbitrary byte arrays.
  • Data is stored sorted by key.
  • Callers can provide a custom comparison function to override the sort order.
  • The basic operations are Put(key,value), Get(key), Delete(key).
  • Multiple changes can be made in one atomic batch.
  • Users can create a transient snapshot to get a consistent view of data.
  • Forward and backward iteration is supported over the data.
  • Data is automatically compressed using the Snappy compression library.
  • External activity (file system operations etc.) is relayed through a virtual interface so users can customize the operating system interactions.
  • Detailed documentation about how to use the library is included with the source code.
Limitations:
  • This is not a SQL database. It does not have a relational data model, it does not support SQL queries, and it has no support for indexes.
  • Only a single process (possibly multi-threaded) can access a particular database at a time.
  • There is no client-server support built in to the library. An application that needs such support will have to wrap its own server around the library.
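
To make the basic operations listed above concrete, here is a minimal sketch in Python, assuming the third-party plyvel bindings for LevelDB; the database path and keys are made up for illustration:

   import plyvel

   # Open (or create) a database directory
   db = plyvel.DB('/tmp/testdb', create_if_missing=True)

   # Basic operations: Put, Get, Delete (keys and values are byte arrays)
   db.put(b'user:1', b'alice')
   print(db.get(b'user:1'))          # b'alice'
   db.delete(b'user:1')

   # Multiple changes in one atomic batch
   with db.write_batch() as wb:
       wb.put(b'user:2', b'bob')
       wb.put(b'user:3', b'carol')

   # Keys come back sorted; forward and backward iteration
   for key, value in db.iterator():
       print(key, value)
   for key, value in db.iterator(reverse=True):
       print(key, value)

   # A transient snapshot gives a consistent view of the data
   snapshot = db.snapshot()
   print(snapshot.get(b'user:2'))

   db.close()
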
Performance:

Here is a performance report (with explanations) from the run of the included db_bench program. The results are somewhat noisy, but should be enough to get a ballpark performance estimate.
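
If you want to reproduce numbers like these on your own hardware, db_bench ships with the LevelDB source tree. Roughly, assuming the classic Makefile build (the exact target may differ between releases):

   # inside a LevelDB source checkout
   make db_bench
   ./db_bench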

Setup

We use a database with a million entries. Each entry has a 16 byte key, and a 100 byte value. Values used by the benchmark compress to about half their original size.
   LevelDB:    version 1.1
   Date:       Sun May  1 12:11:26 2011
   CPU:        4 x Intel(R) Core(TM)2 Quad CPU    Q6600  @ 2.40GHz
   CPUCache:   4096 KB
   Keys:       16 bytes each
   Values:     100 bytes each (50 bytes after compression)
   Entries:    1000000
   Raw Size:   110.6 MB (estimated)
   File Size:  62.9 MB (estimated)
 

Write performance

The "fill" benchmarks create a brand new database, in either sequential, or random order. The "fillsync" benchmark flushes data from the operating system to the disk after every operation; the other write operations leave the data sitting in the operating system buffer cache for a while. The "overwrite" benchmark does random writes that update existing keys in the database.
   fillseq      :       1.765 micros/op;   62.7 MB/s     
   fillsync     :     268.409 micros/op;    0.4 MB/s (10000 ops)
   fillrandom   :       2.460 micros/op;   45.0 MB/s     
   overwrite    :       2.380 micros/op;   46.5 MB/s     
 
Each "op" above corresponds to a write of a single key/value pair. I.e., a random write benchmark goes at approximately 400,000 writes per second.
Each "fillsync" operation costs much less (0.3 millisecond) than a disk seek (typically 10 milliseconds). We suspect that this is because the hard disk itself is buffering the update in its memory and responding before the data has been written to the platter. This may or may not be safe based on whether or not the hard disk has enough power to save its memory in the event of a power failure.

Read performance

We list the performance of reading sequentially in both the forward and reverse direction, and also the performance of a random lookup. Note that the database created by the benchmark is quite small. Therefore the report characterizes the performance of leveldb when the working set fits in memory. The cost of reading a piece of data that is not present in the operating system buffer cache will be dominated by the one or two disk seeks needed to fetch the data from disk. Write performance will be mostly unaffected by whether or not the working set fits in memory.
   readrandom   :      16.677 micros/op;  (approximately 60,000 reads per second)
   readseq      :       0.476 micros/op;  232.3 MB/s    
   readreverse  :       0.724 micros/op;  152.9 MB/s    
 
LevelDB compacts its underlying storage data in the background to improve read performance. The results listed above were done immediately after a lot of random writes. The results after compactions (which are usually triggered automatically) are better.
   readrandom   :      11.602 micros/op;  (approximately 85,000 reads per second)   
   readseq      :       0.423 micros/op;  261.8 MB/s    
   readreverse  :       0.663 micros/op;  166.9 MB/s    
 
Some of the high cost of reads comes from repeated decompression of blocks read from disk. If we supply enough cache to the leveldb so it can hold the uncompressed blocks in memory, the read performance improves again:
   readrandom   :       9.775 micros/op;  (approximately 100,000 reads per second before compaction)
   readrandom   :       5.215 micros/op;  (approximately 190,000 reads per second after compaction) 
 
 
This article may help someone in the future.

Cheers!

 Courtesy goes to "LevelDB" wikis

Dev Tip: My Experience with a Compression Library Used by Google

Hi Folks,

While I was doing system architecture and development for a brand new EduAlert product, "Learn360", I went through a series of challenges. Let's go through one interesting challenge.

Note: I am not going to explain the product architecture, since the product rights belong to EduAlert. Only the key technical challenges I faced during development are covered here.

A little introduction about the product "Learn360":

      'A proprietary learning platform that provides easy, coherent access to networks of people and resources'.

A new-generation LMS (Learning Management System) requires large amounts of metadata to be stored and processed, since most of the content pushed through the LMS is in video or textual format.

Here the problem is: "What is the best way to store redundant data on the file system or in a database without compromising efficiency?"

I thought of implementing a dedupe file system at the file-system level with a custom Linux module built on the FUSE API. I wrote a simple Python program using the FUSE API, and it worked great, but it lacked a file-locking mechanism and had severe performance issues. Since FUSE file systems run in user space, IO times are high, so this was not an advisable option for me.

My thoughts then went in a different direction: "What about incorporating the zlib library and compressing/decompressing data during IO?" I finally decided not to use this approach, since the compression overhead was too high and our system is highly dependent on metadata IO.

I then thought of putting the frequently used decompressed metadata into Memcached. The overhead was still high, though it was far better than the previous approach.

So my primary focus was to find an efficient way to compress and decompress metadata with less overhead. After a few hours of literature survey I found a library called "Snappy", used by Google and other companies, and incorporated it into our core framework.

This library may help someone working on a similar compression-related problem in the near future.

More about Snappy:

Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. For instance, compared to the fastest mode of zlib, Snappy is an order of magnitude faster for most inputs, but the resulting compressed files are anywhere from 20% to 100% bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec or more.

Snappy is widely used inside Google, in everything from BigTable and MapReduce to our internal RPC systems. (Snappy has previously been referred to as “Zippy” in some presentations and the likes.)

Supported language bindings:

Snappy is written in C++, but C bindings are included, and several bindings to other languages are maintained by third parties.
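
As a quick illustration of how small the API surface is, here is a minimal sketch using the third-party python-snappy bindings; the payload is made up:

   import snappy

   # Compress a block of metadata before writing it to disk or a database
   original = b'{"course": "Learn360", "lesson": 42}' * 1000
   compressed = snappy.compress(original)

   # Decompress it again on the read path
   restored = snappy.uncompress(compressed)

   assert restored == original
   print(len(original), '->', len(compressed), 'bytes')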

Cheers!



14 November 2012

Dev Tip: Application Server Profiler for Java & .NET

Hi Folks,

I am working on application profiling for EduAlert applications, developed using Spring and running on the JBoss application server. I found an interesting solution for this, and it is likely that someone else may benefit from this product too.

AppDynamics Lite is the very first free product designed for troubleshooting Java/.NET performance while getting full visibility in production environments.



Supported Java Versions:

  • Java Virtual Machines:
    • Sun Java 1.5, 1.6, 1.7
    • IBM JVM 1.5, 1.6, 1.7
    • JRockit 1.5, 1.6, 1.7
  • Java Application Servers:
    • WebSphere, WebLogic, JBoss, Tomcat, GlassFish, Jetty, OSGi
  • Java Programming Stacks:
    • Spring, Struts, JSF, Web Services, EJBs, Servlets, JSPs, JPA, Hibernate, JMS
  • Supported .NET Versions:
    • .NET Framework 2.0, 3.0, 3.5, 4.0
    • ASP .NET MVC 2 and 3
  • Supported IIS Versions:
    • Microsoft IIS 6.0, 7.0, 7.5
    • Microsoft IIS Express 7.x
  • Supported Versions of Windows operating systems, in both 32- and 64-bit:
    • Windows XP, 2003, Vista, 2008, 2008 R2, Windows 7



What's the Difference Between a Profiler and AppDynamics Lite?

  • When to use a profiler:
    • You need to troubleshoot high CPU usage and high memory usage
    • Your environment is Development or QA
    • 15-20% overhead is okay
  • When to use AppDynamics Lite:
    • You need to troubleshoot slow response times, slow SQL, high error rates, and stalls
    • Your environment is Production or a performance load-test
    • You can't afford more than 2% overhead

AppDynamics Lite consists of a Viewer and an Agent.


AppDynamics Lite Viewer System Requirements

The AppDynamics Lite Viewer is a lightweight Java process. It has an embedded web server that serves a Flash-based browser interface.

Hardware Requirements

  • Disk space (install footprint): 15 MB Java, 50 MB .NET
  • Disk space (run time footprint): 100 MB (data retention period = 2 hours)
  • Minimum recommended hardware: 2 GB RAM, Single CPU, 1 disk
  • Memory footprint: 100 MB

Operating System Requirements

AppDynamics Lite runs on any operating system that supports a JVM 1.5 runtime or .Net Framework (2.0, 3.0, 3.5, 4.0) and a web browser with Flash. 

Software Requirements

  • One of the following browsers:
    • Mozilla Firefox 3.x, 4.x, 5.0
    • Internet Explorer 6.x, 7.x, 8.x, 9.0.
    • Safari 4.x, 5.x
    • Google Chrome 10.x, 11.x, 12.x
  • Adobe Flash Player 10 or above for your browser (get the latest version)
  • For better navigation, size your browser window to a minimum of 1020x450 pixels.

AppDynamics Lite Agent System Requirements

The Agent is a lightweight process that runs in your JVM or a module that runs in your CLR.

Software Requirements

  • Memory footprint: 
    • 10 MB Java
    • 20-30 MB .NET
  • For .NET you need:
    • IIS 6 or newer
    • .NET Framework 2.0, 3.0, 3.5, or 4.0

Let's focus on Java, since I am working on the Java platform.

Get Started With AppDynamics Lite for Java



Getting started with AppDynamics Lite is fast and easy! Here's how:

1. Ensure that your system meets the AppDynamics Lite System Requirements.

2. Download and extract the latest version of AppDynamicsLite.zip. After you extract AppDynamicsLite.zip you will have two zip files, LiteViewer.zip for the Viewer and AppServerAgentLite.zip for the Agent.

3. Read about the Runtime Options such as authentication and port configuration.

4. Follow the simple installation procedure shown in the following illustration. Alternatively, follow the detailed step-by-step instructions for Installing the Viewer and Installing the Application Server Agent.
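
As a rough idea of what the Java agent installation boils down to: the agent is attached to the application server JVM with the standard -javaagent startup option. The jar name and path below are assumptions, so follow the official instructions for your release; on JBoss this line typically goes into the JAVA_OPTS settings in bin/run.conf.

   # assumption: agent extracted to /opt/appdynamics/AppServerAgentLite
   JAVA_OPTS="$JAVA_OPTS -javaagent:/opt/appdynamics/AppServerAgentLite/javaagent.jar"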


Enjoy the Series!

Cloud Tip: Cloudify PaaS Stack Introduction

Hi Folks,

Today let's discuss installation and a sample deployment.

Cloudify - The Open Source PaaS Stack


Gain the on-demand availability, rapid deployment, and agility benefits of moving your app to the cloud, with our open source PaaS stack.

All with no code changes, no lock-in, and full control.

Cloudify Installation:



Cloudify Enterprise Deployment:


Download it from:

http://www.cloudifysource.org/downloads/get_cloudify

Don't forget to look at the getting started guide:

http://www.cloudifysource.org/guide/2.2/qsg/quick_start_guide_helloworld
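
Once downloaded, the Hello World walkthrough essentially boils down to a few commands in the Cloudify shell. The sketch below is from memory and the recipe path is an assumption, so refer to the guide above for the exact steps:

   # start the Cloudify shell (cloudify.sh or cloudify.bat from the distribution), then:
   bootstrap-localcloud
   install-application /path/to/recipes/apps/helloworld
   teardown-localcloud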

Enjoy the cloud series!