Saturday, December 7, 2013

memory usage of my app in Android

Memory usage on modern operating systems like Linux is an extremely complicated and difficult-to-understand area. Android apps have more memory available to them than ever before, but are you sure you're using it wisely? This post covers Android's memory management and explores tools and techniques for profiling the memory usage of Android apps.

The first question in my mind is how to find out how much heap is available to an Android app. Of course, this varies from device to device.

  1. How much heap can my app use before a hard error is triggered? And
  2. How much heap should my app use, given the constraints of the Android OS version and hardware of the user's device?
For item 1 above: maxMemory()
which can be invoked (e.g., in your main activity's onCreate() method) as follows:
Runtime rt = Runtime.getRuntime();
long maxMemory = rt.maxMemory();
Log.v("onCreate", "maxMemory:" + Long.toString(maxMemory));
This method tells you how many total bytes of heap your app is allowed to use.
For item 2 above: getMemoryClass()
which can be invoked as follows:
ActivityManager am = (ActivityManager) getSystemService(ACTIVITY_SERVICE);
int memoryClass = am.getMemoryClass();
Log.v("onCreate", "memoryClass:" + Integer.toString(memoryClass));
This method tells you approximately how many megabytes of heap your app should use if it wants to be properly respectful of the limits of the present device, and of the rights of other apps to run without being repeatedly forced into the onStop() / onResume() cycle as they are rudely flushed out of memory while your elephantine app takes a bath in the Android jacuzzi.
At runtime, the heap grows dynamically in size as the Dalvik VM requests system memory from the operating system. The Dalvik VM typically starts by allocating a relatively small heap. Then after each GC run it checks to see how much free heap memory there is. If the ratio of free heap to total heap is too small, the Dalvik VM will then add more memory to the heap (up to the maximum configured heap size).
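This heap-growth behaviour can be observed directly from code. Below is a minimal sketch using the same Runtime API as above; it runs on any JVM (on Android, the identical calls work inside the Dalvik VM). The 1 MB chunk size and loop count are arbitrary choices for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class HeapGrowth {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long before = rt.totalMemory();           // heap currently reserved from the OS
        List<byte[]> blocks = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            blocks.add(new byte[1024 * 1024]);    // hold 1 MB chunks so GC can't reclaim them
        }
        long after = rt.totalMemory();
        System.out.println("maxMemory  : " + rt.maxMemory());  // hard upper limit
        System.out.println("heap before: " + before);
        System.out.println("heap after : " + after);
        // The VM may have grown the heap (never past maxMemory) to satisfy the allocations.
    }
}
```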

Memory Anatomy

PSS : Proportional Set Size

The amount of memory shared with other processes, accounted for in a way that divides the amount evenly between the processes that share it. This is memory that would not be released if the process were terminated, but it is indicative of the amount that this process is "contributing" to the overall memory load.

USS : Unique Set Size

USS is the set of pages that are unique to a process. This is the amount of memory that would be freed if the application gets terminated.
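To make the PSS/USS distinction concrete, here is a small worked example with made-up numbers (the kernel does this accounting per page; the helper below is purely illustrative):

```java
public class PssDemo {
    // privateKb: memory unique to this process (this is essentially its USS)
    // sharedKb : memory shared with other processes; sharers = how many processes map it
    static int pssKb(int privateKb, int sharedKb, int sharers) {
        return privateKb + sharedKb / sharers;   // the shared cost is split evenly
    }

    public static void main(String[] args) {
        // e.g. 4096 KB private plus 2048 KB shared among 4 processes:
        System.out.println("USS = 4096 KB, PSS = " + pssKb(4096, 2048, 4) + " KB"); // PSS = 4608
    }
}
```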


Runtime memory available for allocation (Used by applications, services, daemons)

Dalvik Heap

The Dalvik heap is preloaded with classes and data by Zygote.

Commands for run time memory usage

  • To spit out a bunch of information about the memory use of each Java process:
$ adb shell dumpsys meminfo
  • To see memory for a particular process (e.g. system):
$ adb shell dumpsys meminfo system
  • For a summary of the overall system memory:
$ adb shell cat /proc/meminfo

Run Time Memory: Gallery Use Case

Run-time memory usage depends on the kind of use case running on the Android system. Following are the contexts in which run-time memory is allocated by the Android system.
  • Every Java process runs with its own instance of the DVM, and each DVM has its own run-time heap requirement based on the complexity of the Java application code.
  • Binder IPC allocates run-time memory for marshalling objects.
  • Services and daemons allocate run-time memory for internal usage.
  • The following example shows how the run-time heap changes for the Gallery application use case.
    • Size: total size of the particular heap
    • Allocated: portion of the heap allocated to the process
    • Native usage: usage by application or service code
    • Dalvik usage: usage by the Dalvik virtual machine (libdvm)

Android's built-in API ActivityManager.getProcessMemoryInfo() returns a few more interesting numbers.

The Pss number is a metric the kernel computes that takes into account memory sharing -- basically each page of RAM in a process is scaled by a ratio of the number of other processes also using that page. This way you can (in theory) add up the pss across all processes to see the total RAM they are using, and compare pss between processes to get a rough idea of their relative weight.
The other interesting metric here is PrivateDirty, which is basically the amount of RAM inside the process that cannot be paged to disk (it is not backed by the same data on disk) and is not shared with any other processes. Another way to look at this is the RAM that will become available to the system when that process goes away (though it will probably be quickly subsumed into caches and other uses).
Android's documentation covers this crucial topic in detail.
Here is a nice YouTube link

Friday, December 6, 2013

Five years at infoedge india ltd.

I would like to take this opportunity to talk about a great milestone here at InfoEdge: I have completed 5 years of my journey at this company. A journey filled with stories of struggles, success, fights, fun & joy. This journey was made even more memorable because of the great co-operation that each one of you always gave. I take pride in THANKING each and every one of you here for the support. I have innumerable moments that make up experiences with many ups and downs. But with each experience, whether good or bad, I feel strongly connected, because the learning cycle never stops.
I have learned from everything, from my mistakes to my appreciation. Of course there were differences, fights & issues, but that's all healthy in the quest for success. I intend to continue to support new upcoming leadership talent as we begin our new quest for a great new tomorrow.
Obviously, I intend to carry new burdens and responsibilities too, just to learn more & greet each moment with new excitement in this journey. A JOURNEY THAT WILL NEVER END.
Once again, thanks a lot!!!

Wednesday, November 6, 2013

R.I.P Codeigniter

See my tweet. It's really sad that
EllisLab has stopped supporting CodeIgniter. CI is a fantastic framework, and it needs to take advantage of PHP's upcoming releases, which are more mature and stable. Well, I stopped working on CI about a year ago (currently I am working on Android native app development), but I loved it very much and can proudly say that I was involved with CI at the core level. Here are a few links which show my contribution to CodeIgniter.

Some developers even made plugins using my CI code.

I hope it passes into safe hands. Bye bye, CodeIgniter!!

Saturday, May 18, 2013

Google Developers Blog: Google I/O 2013: For the developers

Google Developers Blog: Google I/O 2013: For the developers: By Scott Knaster, Google Developers Blog Editor “Google I/O is an annual developer conference featuring highly technical, in-depth sessi...

Tuesday, April 9, 2013

Hybrid app development PhoneGap

Creating mobile apps is a trend nowadays. We've chosen PhoneGap as the tool to look at more closely for creating mobile apps. So, let's start. But the big question is: should we build a native app, a hybrid app, or an HTML5-based web app?

A PhoneGap application is a “native-wrapped” web application. Let’s explore how the web application is “wrapped”.

Many native mobile development SDKs provide a web browser widget (a “web view”) as a part of their UI framework (iOS and Android, for example). In purely native applications, web view controls are used to display HTML content either from a remote server, or local HTML packaged along with the native application in some way. The native “wrapper” application generated by PhoneGap loads the end developer’s HTML pages into one of these web view controls, and displays the resulting HTML as the UI when the application is launched.

If JavaScript files are included in a page loaded by a web view, this code is evaluated on the page as normal. However, the native application which creates the web view is able to (in different ways, depending on the platform) asynchronously communicate with JavaScript code running inside of the web view. This technology is usually referred to as “the bridge” in the context of PhoneGap architecture – the “bridge” means something slightly different in Titanium, as we will see later.

PhoneGap takes advantage of this to create a JavaScript API inside a web view which is able to send messages to and receive messages from native code in the wrapper application asynchronously. The way the bridge layer is implemented is different per platform, but on iOS, when you call for a list of contacts, your native method invocation goes into a queue of requests to be sent over the bridge. PhoneGap will then create an iframe which loads a URI scheme (“gap://”) that the native app is configured to handle, at which point all the queued commands will be executed. Communication back into the web view is done by evaluating a string of JavaScript in the context of the web view from native code.
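As a rough illustration of that queue-then-flush pattern, here is a toy, platform-free sketch in plain Java (all names are invented for illustration; this is not PhoneGap's real API):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class BridgeDemo {
    // Commands queued on the "JavaScript side" of the bridge
    static final Queue<String> pending = new ArrayDeque<>();

    // What a JS-side exec() conceptually does: queue the native invocation
    static void jsInvoke(String command) {
        pending.add(command);
    }

    // What the native side conceptually does when it intercepts the gap:// load
    static void nativeFlush() {
        while (!pending.isEmpty()) {
            System.out.println("native handling: " + pending.poll());
        }
    }

    public static void main(String[] args) {
        jsInvoke("Contacts.list");
        jsInvoke("Camera.getPicture");
        nativeFlush();   // all queued commands are executed in one flush
    }
}
```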
There is much more to PhoneGap than that, but the messaging from web view to native code via the bridge implementation is the key piece of technology which allows local web applications to call native code.

There's quite a bit happening behind the scenes in a Titanium application, too. Basically, at runtime, your application consists of three major components: your JavaScript source code (inlined into a Java or Objective-C file and compiled as an encoded string), the platform-specific implementation of the Titanium API in the native programming language, and a JavaScript interpreter that will be used to evaluate your code at runtime (V8 (default) or Rhino for Android, or JavaScriptCore for iOS). Except in the browser, of course, where the built-in JavaScript engine will be used.

So the major differences are:

PhoneGap:
  • JavaScript API that provides access to Native Functions
  • Supports HTML5/CSS3
  • Supports Web Standards & Re-use Across Enterprise Apps
  • Supports DOM based JavaScript Libraries/Frameworks
  • Supports the most platforms

Appcelerator Titanium:

  • JavaScript API that provides access to Native Functions
  • Compiles to Native Code
  • Could provide better performance.


Of course, both Titanium and PhoneGap fall into the category of “hybrid”, but the key difference is implementation: while a PhoneGap application runs inside a browser (web view), a Titanium app runs inside a JavaScript interpreter. Runtime performance is SLOWER than native code because a JavaScript engine is used as a bridge. Especially with a big TableView, it's much slower, and the feeling is just not the same.

I have built two different applications based on PhoneGap for testing purposes, and I found that both are very slow compared with browser rendering. When I cross-checked the same application in my phone's browser, it looked much more responsive than in the app. Maybe current browsers use hardware acceleration by default. I used jQuery Mobile & PhoneGap APIs for the demo.

I know that jQuery Mobile can be a little slow by nature, but imagine my surprise and disappointment when my compiled PhoneGap app UI was magnitudes slower running on my phone than running in the phone's native browser. How can the compiled app be so noticeably less responsive than the same app running in the phone's native browser? The responsiveness is completely unacceptable.

Even PhoneGap itself suggests using a JS framework better suited to it.

Saturday, March 23, 2013

Back to blog : understanding ppi,dpi,mdpi, hdpi, xhdpi

Sorry guys!!! I was a little busy with Quora :-)

Anyway, a man who loses his way in the morning but reaches home by evening cannot be called a loser :D
Actually, for the last one or two months I have been working on mobile sites and apps and have found some very interesting things. I will share my experience here in my coming posts. So enjoy reading, and be sure to leave comments here.

In Feb 1993, Marc Andreessen requested a new HTML tag; the proposal was basically for an image tag. We have seen drastic improvements in the web & HTML over the last few years. Many people are contributing to making the web faster. Companies like Accenture have started campaigns with the slogan "High Performance. Delivered".

According to the HTTP Archive, the average web page is 1292 KB, with 801 KB of that page weight (more than 60%) being taken up by images. On Shiksha it's around 50-60%. We have seen many algorithms for image optimization.

Then I found WebP :-) I really appreciate the Google engineers who are trying to make our lives easier...

So here are my findings. WebP offers:

  • Lossy and lossless compression
  • Transparency (alpha channel)
  • Great compression for photos
  • Animation support
  • Metadata
  • Color profiles

Google has enabled many of its products (Gmail, Drive, Picasa, Instant Previews, Play Magazines, Image Search, YouTube, ...) with WebP support. Most recently, the Chrome Web Store switched to WebP, saw ~30% byte reduction on average, and is now saving several terabytes of bandwidth per day!

Back to basic:
A digital photograph is made up of millions of tiny dots called pixels. For example, one camera might produce photos that are 2272 pixels wide and 1704 pixels tall (2272 x 1704). Another camera will produce an image that is 4492 x 3328. You can find out the number of megapixels by multiplying the horizontal and vertical pixels. In the first example, the camera captures about 3.9 megapixels (2272 x 1704 = 3,871,488). In the second example, the camera captures about 15 megapixels (4492 x 3328 = 14,949,376).

A computer monitor generally displays images at 72 pixels per inch. This means that our 3.9 megapixel image is going to measure about 32 inches by 24 inches when viewed on a monitor. We can determine the display size of the image by dividing the horizontal and vertical pixels by 72.

In this case, 2272 / 72 = 31.6 and 1704 / 72 = 23.7. 
Use the 72ppi standard when you want to post an image to the Internet.

But a printer needs more pixels per inch to produce a high-quality image than our monitor does. If you print a photo at 100ppi, it is not going to look like a professional print: you will be able to see grain and fuzziness, the actual pixels that make up the digital photograph. If we print at 250ppi instead, we increase the pixels per inch, which reduces the size of the printed photo. Let's say you print the 3.9 megapixel photo at 100ppi. This isn't high quality, but you can print a photo that measures 22.7 by 17 inches (2272 / 100 = 22.7 and 1704 / 100 = 17).

Now you print the same photo at 250ppi. You get a great looking photo, but the print size is 9.1 by 6.8 inches (2272 / 250 = 9.1 and 1704 / 250 = 6.8).
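The arithmetic above fits in a tiny helper (plain Java, matching the other posts on this blog; the 2272 x 1704 numbers are the example from the text):

```java
public class PrintSize {
    static double inches(int pixels, int ppi) {
        return (double) pixels / ppi;    // physical size = pixels / density
    }

    public static void main(String[] args) {
        int w = 2272, h = 1704;
        System.out.printf("megapixels: %.1f%n", w * (long) h / 1_000_000.0);  // 3.9
        System.out.printf("72ppi monitor: %.1f x %.1f in%n",
                inches(w, 72), inches(h, 72));      // 31.6 x 23.7
        System.out.printf("250ppi print : %.1f x %.1f in%n",
                inches(w, 250), inches(h, 250));    // 9.1 x 6.8
    }
}
```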

Hope you enjoyed the little bit of maths we did above, but hold on... screens have now changed.

iOS devices measure density in PPI (pixels per inch) and Android in DPI (dots per inch). The more pixels or dots you fit in one square inch on a screen, the higher the density and resolution of it.
The original iPhones and iPads had a screen density that was classified as non-retina. The current generation of iOS devices sport higher density displays referred to as retina. Android devices have evolved from low density, ldpi, all the way to extra high density, xhdpi.

There are five widely used densities across iOS and Android devices, which fall into four progressively larger groups:

  1. non-retina (iOS) and mdpi (Android)
  2. hdpi (Android)
  3. retina (iOS)
  4. xhdpi (Android)

Scaling the UI elements of your design and understanding how an asset at one density would scale to another can be confusing at times.

According to the official Android “Supporting Multiple Screens” documentation:

  • xhdpi = 320dpi
  • hdpi = 240dpi
  • mdpi = 160dpi
  • ldpi = 120dpi
  • Retina iPhones = 326dpi (roughly equivalent to xhdpi)
  • Retina iPads = 264dpi (roughly equivalent to hdpi)
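These density buckets are what Android's dp (density-independent pixel) unit is built on. The documented formula is px = dp * (dpi / 160), so on an mdpi device 1dp equals 1px. A quick sketch of the conversion:

```java
public class DpToPx {
    // Android's documented formula: px = dp * (dpi / 160)
    static int dpToPx(int dp, int dpi) {
        return Math.round(dp * (dpi / 160f));
    }

    public static void main(String[] args) {
        System.out.println("48dp @ mdpi  (160) = " + dpToPx(48, 160) + "px"); // 48
        System.out.println("48dp @ hdpi  (240) = " + dpToPx(48, 240) + "px"); // 72
        System.out.println("48dp @ xhdpi (320) = " + dpToPx(48, 320) + "px"); // 96
    }
}
```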

See more details here.

So we have seen some basic ABCs of image pixels, ppi, and dpi. An image can be rendered at different resolutions on screens of different density.

I will be back with more details about responsive image rendering with all the possible options. Till then, enjoy reading and happy learning!!!

Wednesday, November 7, 2012

Asynchronous JS loading without blocking onload

Asynchronous loading is the best way to load a JS file into your HTML page, but it still blocks the window.onload event (except in IE before version 10).

Check out here how onload is blocked even when we use async JS loading techniques.

So here is another solution for the same :-)
  1. Create an iframe without setting src to a new URL. This fires the iframe's onload immediately and the whole thing is completely out of the way.
  2. Style the iframe to make it invisible.
  3. Get the last script tag so far, which is the snippet itself. This is in order to glue the iframe to the snippet that includes it.
  4. Insert the iframe into the page.
  5. Get a handle to the document object of the iframe.
  6. Write some HTML into that iframe document.
  7. This HTML includes the desired script.

var iframe = document.createElement('iframe');
(iframe.frameElement || iframe).style.cssText = "width: 0; height: 0; border: 0";
var where = document.getElementsByTagName('script');
where = where[where.length - 1];
where.parentNode.insertBefore(iframe, where);
var doc = iframe.contentWindow.document;
// url is the script you want to load
doc.open().write('<body onload="' +
    'var js = document.createElement(\'script\');' +
    'js.src = \'' + url + '\';' +
    'document.body.appendChild(js);">');
doc.close();


1. Avoid SSL warnings: iframe.src defaults to “about:blank” in IE6, which it then treats as insecure content on HTTPS pages. We found that initializing iframe.src to “javascript:false” avoids the warning.

2. Avoid crossdomain exceptions: anonymous iframe access will throw exceptions if the host page changed the document.domain value in IE. The original Meebo code falls back to a “javascript:” URL when this happens.

3. The script (asyncjs1.php) runs in an iframe, so all document and window references point to the iframe, not the host page. There's an easy solution for that without changing the whole script: just wrap it in an immediate function and pass the document object the script expects:

document.getElementById('r')... // all fine

4. The script works fine in Opera, but it blocks onload. Opera is weird here: even regular async scripts block DOMContentLoaded, which is a shame.

It seems the code below solves our problem... try it and let me know the results...

Tuesday, November 6, 2012

PHP's register_shutdown_function

PHP has approx. 5800 functions defined in the global space... Uff... that is one reason people don't consider it a thoughtfully designed language. But these APIs are a strong tool for accomplishing various tasks, and they provide tremendous functionality to the end user.

Today we will discuss one magical function named "register_shutdown_function".

This function allows you to execute a block of code whenever your script ends, for any reason.
Whether your page exit()s or die()s or just finishes, a developer has a hook to run whatever code he/she deems necessary. And not just one function either… you can use this call to register as many shutdown functions as you want, and they will get executed in the order that they get applied. But of course, you must be careful: PHP will happily give you more rope than you will ever need to hang yourself. A lot of people may consider the use of this function to be magic, and you’ll want to be very clear that what you’re doing is documented.

Use of this function is very straight-forward.

I just tested this with Apache, with PHP running as an Apache module. I created an endless loop like this:

class X
{
    function __destruct()
    {
        $fp = fopen("/var/www/dtor.txt", "w+");
        fputs($fp, "Destroyed\n");
        fclose($fp);
    }
}

$obj = new X();
while (true) {
    // do nothing
}
Here's what I found out:

  1. Pressing the STOP button in Firefox does not stop this script.
  2. If I shut down Apache, the destructor does not get called.
  3. The script stops when it reaches PHP's max_execution_time, and the destructor does not get called.

However, doing this:

function shutdown_func() {
    $fp = fopen("/var/www/htdocs/dtor.txt", "w+");
    fputs($fp, "Destroyed2\n");
    fclose($fp);
}

register_shutdown_function('shutdown_func');

while (true) {
    // do nothing
}
shutdown_func gets called. So this means that class destructors are not as good as shutdown functions. :-)
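For comparison with the Java/Android posts above: the JVM's closest analogue is Runtime.addShutdownHook. One difference worth noting is that, unlike PHP's shutdown functions, the JVM makes no guarantee about the order in which multiple hooks run. A minimal sketch:

```java
public class ShutdownDemo {
    public static void main(String[] args) {
        // Like register_shutdown_function, you can register as many hooks as
        // you want; each runs when the JVM exits (normal exit or most signals).
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                System.out.println("shutdown hook: flush logs, close files")));
        System.out.println("doing work...");
        // the hook fires after main returns and the JVM begins shutting down
    }
}
```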

Enjoy !! :-)

Tuesday, October 30, 2012

Things web developers must know

The idea here is that most of us should already know most of what is on this list. But there just might be one or two items you haven't really looked into before, don't fully understand, or maybe never even heard of.
Interface and User Experience
  • Be aware that browsers implement standards inconsistently and make sure your site works reasonably well across all major browsers. At a minimum test against a recent Gecko engine (Firefox), a WebKit engine (Safari, Chrome, and some mobile browsers), your supported IE browsers (take advantage of the Application Compatibility VPC Images), and Opera. Also consider how browsers render your site in different operating systems.
  • Consider how people might use the site other than from the major browsers: cell phones, screen readers and search engines, for example. — Some accessibility info: WAI and Section508, Mobile development: MobiForge.
  • Staging: How to deploy updates without affecting your users. Ed Lucas's answer has some comments on this.
  • Don't display unfriendly errors directly to the user.
  • Don't put users' email addresses in plain text as they will get spammed to death.
  • Add the attribute rel="nofollow" to user-generated links to avoid spam.
  • Build well-considered limits into your site - This also belongs under Security.
  • Learn how to do progressive enhancement.
  • Redirect after a POST if that POST was successful, to prevent a refresh from submitting again.
  • Don't forget to take accessibility into account. It's always a good idea, and in certain circumstances it's a legal requirement. WAI-ARIA and WCAG 2 are good resources in this area.
  • Implement caching if necessary, understand and use HTTP caching properly as well as HTML5 Manifest.
  • Optimize images - don't use a 20 KB image for a repeating background.
  • Learn how to gzip/deflate content (deflate is better).
  • Combine/concatenate multiple stylesheets or multiple script files to reduce number of browser connections and improve gzip ability to compress duplications between files.
  • Take a look at the Yahoo Exceptional Performance site, lots of great guidelines including improving front-end performance and their YSlow tool. Google page speed is another tool for performance profiling. Both require Firebug to be installed.
  • Use CSS Image Sprites for small related images like toolbars (see the "minimize HTTP requests" point)
  • Busy web sites should consider splitting components across domains. Specifically...
  • Static content (i.e. images, CSS, JavaScript, and generally content that doesn't need access to cookies) should go in a separate domain that does not use cookies, because all cookies for a domain and its subdomains are sent with every request to the domain and its subdomains. One good option here is to use a Content Delivery Network (CDN).
  • Minimize the total number of HTTP requests required for a browser to render the page.
  • Utilize Google Closure Compiler for JavaScript and other minification tools.
  • Make sure there's a favicon.ico file in the root of the site, i.e. /favicon.ico. Browsers will automatically request it, even if the icon isn't mentioned in the HTML at all. If you don't have a /favicon.ico, this will result in a lot of 404s, draining your server's bandwidth.
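On the gzip/deflate item above, the difference is easy to measure. Here is a small runnable sketch (the sample CSS string is made up) compressing a repetitive stylesheet both ways; deflate comes out a few bytes smaller because it lacks gzip's header and checksum trailer:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.GZIPOutputStream;

public class CompressDemo {
    public static void main(String[] args) throws Exception {
        // Repetitive text, like concatenated CSS/JS, compresses extremely well
        byte[] css = ".btn { color: #333; padding: 4px; }\n".repeat(200)
                .getBytes(StandardCharsets.UTF_8);

        ByteArrayOutputStream gz = new ByteArrayOutputStream();
        try (GZIPOutputStream out = new GZIPOutputStream(gz)) { out.write(css); }

        ByteArrayOutputStream df = new ByteArrayOutputStream();
        try (DeflaterOutputStream out = new DeflaterOutputStream(df)) { out.write(css); }

        System.out.println("raw    : " + css.length + " bytes");
        System.out.println("gzip   : " + gz.size() + " bytes");
        System.out.println("deflate: " + df.size() + " bytes");
    }
}
```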
SEO (Search Engine Optimization)
  • Use "search engine friendly" URLs, i.e. use instead
  • When using # for dynamic content, change the # to #!; then on the server, $_REQUEST["_escaped_fragment_"] is what googlebot uses instead of #!. In other words, ./#!page=1 becomes ./?_escaped_fragment_=page=1. Also, for users that may be using FF.b4 or Chromium, history.pushState({"foo":"bar"}, "About", "./?page=1"); is a great command. So even though the address bar has changed, the page does not reload. This allows you to use ? instead of #! to keep dynamic content, and also to tell the server, when you email the link, that we are after this page, so the AJAX does not need to make another extra request.
  • Don't use links that say "click here". You're wasting an SEO opportunity and it makes things harder for people with screen readers.
  • Have an XML sitemap, preferably in the default location /sitemap.xml.
  • Use <link rel="canonical"> when you have multiple URLs that point to the same content; this issue can also be addressed from Google Webmaster Tools.
  • Use Google Webmaster Tools and Bing Webmaster Tools.
  • Install Google Analytics right at the start (or an open source analysis tool like Piwik).
  • Know how robots.txt and search engine spiders work.
  • Redirect requests (using 301 Moved Permanently) asking for the www version of your domain to the bare domain (or the other way round) to prevent splitting the Google ranking between both sites.
  • Know that there can be badly-behaved spiders out there.
  • If you have non-text content look into Google's sitemap extensions for video etc. There is some good information about this in Tim Farley's answer.
  • Understand HTTP and things like GET, POST, sessions, cookies, and what it means to be "stateless".
  • Write your XHTML/HTML and CSS according to the W3C specifications and make sure they validate. The goal here is to avoid browser quirks modes, and as a bonus it makes it much easier to work with non-standard browsers like screen readers and mobile devices.
  • Understand how JavaScript is processed in the browser.
  • Understand how JavaScript, style sheets, and other resources used by your page are loaded and consider their impact on perceived performance. It may be appropriate in some cases to move scripts to the bottom of your pages.
  • Understand how the JavaScript sandbox works, especially if you intend to use iframes.
  • Be aware that JavaScript can and will be disabled, and that AJAX is therefore an extension, not a baseline. Even if most normal users leave it on now, remember that NoScript is becoming more popular, mobile devices may not work as expected, and Google won't run most of your JavaScript when indexing the site.
  • Learn the difference between 301 and 302 redirects (this is also an SEO issue).
  • Learn as much as you possibly can about your deployment platform.
  • Consider using a Reset Style Sheet.
  • Consider JavaScript frameworks (such as jQuery, MooTools, Prototype, Dojo or YUI 3), which will hide a lot of the browser differences when using JavaScript for DOM manipulation.
  • Taking perceived performance and JS frameworks together, consider using a service such as the Google Libraries API to load frameworks so that a browser can use a copy of the framework it has already cached rather than downloading a duplicate copy from your site.
  • Don't reinvent the wheel. Before doing ANYTHING search for a component or example on how to do it. There is a 99% chance that someone has done it and released an OSS version of the code.
Bug fixing
  • Understand you'll spend 20% of your time coding and 80% of it maintaining, so code accordingly.
  • Set up a good error reporting solution.
  • Have a system for people to contact you with suggestions and criticisms.
  • Document how the application works for future support staff and people performing maintenance.
  • Make frequent backups! (And make sure those backups are functional) Ed Lucas's answer has some advice. Have a restore strategy, not just a backup strategy.
  • Use a version control system to store your files, such as Subversion, Mercurial, or Git.
  • Don't forget to do your Acceptance Testing. Frameworks like Selenium can help.
  • Make sure you have sufficient logging in place using frameworks such as log4j, log4net, or log4r. If something goes wrong on your live site, you'll need a way of finding out what.
  • When logging, make sure you capture both handled and unhandled exceptions. Report/analyse the log output, as it'll show you where the key issues are in your site.
Lots of stuff omitted not necessarily because they're not useful answers, but because they're either too detailed, out of scope, or go a bit too far for someone looking to get an overview of the things they should know. If you're one of those people you can read the rest of the answers to get more detailed information about the things mentioned in this list. If I get the time I'll add links to the various answers that contain the things mentioned in this list if the answers go into detail about these things. Please feel free to edit this as well, I probably missed some stuff or made some mistakes.

original source: