Tuesday, July 11, 2017

Name Selector

[Image: a signup form with name fields, Male/Female options, a "Remember me" checkbox, and the text "By creating an account you agree to our Terms & Privacy."]

Monday, April 18, 2016

Testing Challenge

Can you find more than 20 defects in the image below?

Write your defects in the comments section. 

An "Ideal" Interview with the Tester for Performance Engineering Position Part 2

For part 1, click here

Interviewer: What do you do when you are asked to start performance testing on a web application?

Candidate: First of all, I try to understand the application: its main functionalities, its architecture, and the technologies used.
Much of this information will come from the developers, and we will also need their help in the later stages of performance testing.

After that, I normally follow 5 steps to do performance testing. 

The first step is to identify the performance test environment. 

A good rule of thumb is that the test environment should be identical to the production environment. In many organizations this is not possible due to cost, so we create an environment that is as close to production as possible. The test environment could be a virtual machine in which we can increase or decrease the RAM and processing power as needed.

Some critical factors to consider are:
network limitations, hardware configurations, the load generation tool, the logging mechanism, licensing constraints, etc. We need the help of the network/IT team in designing the test environment.

The second and most important step is to identify the performance acceptance criteria. These will be provided by the business analysts and product owners who understand the business side of the application. If the acceptance criteria are not clearly defined, the whole performance testing activity becomes haphazard and inconclusive.

Interviewer: So can you give some examples of performance acceptance criteria?

Candidate: Performance criteria depend heavily on the context of the web application, but I can give some examples. Performance characteristics normally include:

Response Time:
For example, Response times for all business operations during normal and peak load should not exceed 6 seconds.

Throughput:
For example, the system must support 25 book orders per second.

Resource utilization: 
For example, no server should have sustained processor utilization above 80 percent under any anticipated load, or no single requested report is permitted to lock more than 20 MB of RAM and 15 percent processor utilization on the Data Cube Server.

There could be any number of performance acceptance criteria, but they should all be quantifiable and correlate to user satisfaction. 
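Quantifiable criteria like these can even be written down in code, so a test run passes or fails automatically. Here is a minimal Python sketch; the metric names and threshold values are illustrative assumptions, not a standard:

```python
# Illustrative acceptance criteria; the thresholds are example values only.
ACCEPTANCE_CRITERIA = {
    "max_response_time_s": 6.0,      # response time ceiling under peak load
    "min_orders_per_s": 25,          # required throughput
    "max_cpu_utilization_pct": 80,   # sustained processor utilization cap
}

def evaluate(measured):
    """Return the list of criteria that the measured results violate."""
    failures = []
    if measured["response_time_s"] > ACCEPTANCE_CRITERIA["max_response_time_s"]:
        failures.append("response time")
    if measured["orders_per_s"] < ACCEPTANCE_CRITERIA["min_orders_per_s"]:
        failures.append("throughput")
    if measured["cpu_utilization_pct"] > ACCEPTANCE_CRITERIA["max_cpu_utilization_pct"]:
        failures.append("CPU utilization")
    return failures
```

An empty list means the run met every criterion; anything else tells the stakeholders exactly which expectation was missed.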

Interviewer: Good. You mentioned throughput; can you define what it is?

Candidate: It is the number of units of work the server can handle per unit of time. You can measure throughput in terms of
requests per second, or
reports per year, or
hits per second, or
calls per day, or any other number per unit of time.
The higher the throughput, the better the server's performance.
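As a quick illustration in Python, throughput is simply completed units of work divided by elapsed time:

```python
def throughput(units_completed, elapsed_seconds):
    """Number of units of work handled per second."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return units_completed / elapsed_seconds

# 1500 requests served in 60 seconds gives 25 requests per second
print(throughput(1500, 60))  # prints 25.0
```

Under the 25-book-orders-per-second criterion above, a run completing fewer units per second would fail the acceptance check.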

Interviewer: Great. So what if we don't have any idea of user expectations? What strategy should we use?

Candidate: You can just ask the users what performance they are expecting.

Interviewer: (Laughing...) No, no. I meant to say: suppose we are building a product and we don't have any users yet because we have not launched it. How would we define the performance criteria then?

Candidate: In that case you should follow benchmarking or baselining.

Interviewer: What are benchmarking and baselining?

Candidate: Benchmarking is the process of comparing the performance of your system against industry standards set by other organizations. One example of benchmarking is seeing how your competitors' applications perform. Another is reading research papers from top performance engineers to see what they propose as ideal benchmarks for your domain.

Baselining, on the other hand, is a comparison with your own previous releases. You can set one particular release as your baseline, and all future releases' performance will be compared against it. If the results of any release are much degraded from the baseline, something is wrong with the performance. If a release performs better, you can set it as your new baseline.
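Baseline comparison like this is easy to automate: store the baseline release's numbers and flag any metric that degrades beyond a tolerance. A Python sketch, where the 10 percent tolerance and the metric values are assumed examples:

```python
def degraded_metrics(baseline, current, tolerance=0.10):
    """Return the metrics where the current release is worse than the
    baseline by more than the tolerance (higher values = worse here)."""
    return [
        name
        for name, base_value in baseline.items()
        if current[name] > base_value * (1 + tolerance)
    ]

# Hypothetical response-time metrics for two releases
baseline = {"login_response_s": 1.2, "search_response_s": 2.0}
release_2 = {"login_response_s": 1.25, "search_response_s": 2.9}
```

Here search response time has degraded by 45 percent and gets flagged, while login stays within the tolerance.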

Interviewer: Good Answer. So what is the next step after identifying performance acceptance criteria?

Candidate: The next step is to design the tests. When designing tests, we should identify key usage scenarios, determine appropriate variability across users, identify and generate test data, and specify the metrics to be collected.

When designing tests, our goal should be to create real-world simulations so that the results help stakeholders make informed business decisions. We should consider
the most common usage scenarios,
business-critical usage scenarios,
performance-intensive business scenarios,
and high-visibility usage scenarios.

It is useful to identify the metrics related to the performance acceptance criteria during test design, so that the method of collecting those metrics can be integrated into the tests when the test design is implemented.

Interviewer: What other considerations should you follow while designing tests?

Candidate: When we design realistic test scenarios, we should incorporate realistic simulations of user delays and think times, which are crucial to the accuracy of the test. Secondly, we should not allow tool capabilities to influence our test design decisions; better tests almost always result from designing them on the assumption that they can be executed, and only afterwards seeing what the tool can do. Thirdly, we should involve the developers and network administrators in determining which metrics are likely to add value and which method best integrates capturing those metrics into the test. For example, if we want to know CPU utilization, we should ask the network administrators how we would capture it in our test.
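On the first point, think times are usually drawn from a random range around a mean rather than fixed, so the simulated users do not all act in lockstep. A small illustrative Python sketch (the 5-second mean and 50 percent variation are assumed example values):

```python
import random

def think_time(mean_seconds=5.0, variation=0.5):
    """Draw a randomized user delay around a mean, e.g. 5 s +/- 50%."""
    low = mean_seconds * (1 - variation)
    high = mean_seconds * (1 + variation)
    return random.uniform(low, high)

# A virtual user would sleep for think_time() seconds between actions,
# instead of firing requests back-to-back.
```

Load-testing tools typically offer such randomized pacing built in; the point is that a uniform, zero-delay user stream does not resemble real traffic.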

Interviewer: Good. So far we have discussed 3 of the steps for starting performance testing. Up to this 3rd step, what do you feel the biggest challenges are?

Candidate: I think the biggest challenge is to correctly identify the performance acceptance criteria. I cannot stress enough how important this step is, and how important it is for every stakeholder to be involved in it. Developers, testers, product managers, network engineers, and performance engineers should all be part of this decision.

The second biggest challenge is getting our first relatively realistic test implemented, with users simulated in such a way that the application under test cannot legitimately tell the difference between simulated users and real users. This takes significantly more time, and again, input from all stakeholders is necessary for this step.

Interviewer: Right. I agree. So what is the next step?

Candidate: Two steps remain: executing the tests and analyzing the results.

To be continued.....

Monday, April 11, 2016

An "Ideal" Interview with the Tester for Performance Engineering Position Part 1

Interviewer: Hello Mr. Tester. How are you?

Candidate: Hello Sir, Thank you for asking. I am really good. How are you?

Interviewer: I am fine, thanks. Did you find the office easily?

Candidate: Yes sir, the address was explained very clearly and the map helped a lot.

Interviewer: Great. What would you like to drink? Coffee, Tea?

Candidate: Sir a glass of water will be fine.

Interviewer: Sure. (Rings the bell and asks the peon to bring a glass of water. The peon delivers it and the candidate drinks it.)

Interviewer: So Mr. Tester, let me introduce myself. I am Mr. QA Manager. I have been working here for the last 5 years. I have a team of 6 manual testers and 2 automation engineers. We are now looking to expand our team, and lately we have been trying to hire a performance engineer. Your resume showed you are well versed in performance testing, and you listed the different tools you use in your current organization. We will come to that later, but first I really want to hear your introduction from you. Can you please tell me briefly about yourself?

Candidate: Sure sir (nervously folding his hands and leaning forward a little). My name is Mr. Tester and I have been working in the software quality assurance field for the last 3 years. I graduated in 2012. I was fortunate to be given a chance at performance testing very early in my career. I have done performance testing on around 10 web applications, with user loads varying from 25 users to 1 million users. I got the chance to work with different performance testing tools in my tenure. Now I am looking for a better opportunity and to expand my learning and growth.

Interviewer: That's great. You mentioned user load in your answer; that is interesting. I always get confused between performance testing and load testing. Can you help me a little and highlight the difference between performance and load testing?

Candidate: (With a confident voice) That is very easy, sir. In performance testing, we test non-functional aspects like speed and, like... scalability and... stability. We test... mmm... response times, throughput, and resource utilization levels, which should meet the performance objectives of our application. You can say that performance testing is the superset of all other subcategories of performance-related testing.

Load testing is a subcategory of performance testing. In load testing, we validate how the application behaves in terms of performance when it is subjected to the expected user load.

Interviewer: And what is stress testing?

Candidate: Stress testing is also a subcategory of performance testing. In stress testing, we test the application under a user load that is beyond our normal and peak load expectations.

Interviewer: Can you tell the difference between load testing and stress testing by giving an example?

Candidate: Sure sir. For example, suppose we want to test the performance of an e-commerce application. We expect that when we launch the e-commerce store, a maximum of around 500 users will access the website in 1 hour. We designed the website with that maximum number in mind.

500 users is the peak load condition for our website, and 100 to 300 users is the normal load condition. So we should test the application for at least what we are expecting: we will use a tool to simulate 100 users, then 200, then 300, up to 500 virtual users, and validate that the response times of the web pages and the CPU and memory utilization of the server stay within acceptable limits at each user level. In simple words, we verify the behavior of our application under normal and peak load conditions. This is load testing.

When we want to stress test our application, we increase the user load beyond 500 users and see how the application reacts. For stress, it is not necessary to increase just the user load; we can limit the server's memory or make the disk space insufficient just to see how the application reacts. Does it crash? Does it crash gracefully or not? What will the response times of the web pages be under these stressful conditions? And so on.

Interviewer: Good example and nice answer. It seems that your concepts are clear in these terminologies. Let me ask you about some more terminologies of performance testing. Tell me about endurance testing and spike testing.

Candidate: Endurance testing is a subset of load testing. When we put our application under normal and peak conditions over an extended period of time, it becomes endurance testing.

Spike testing is a subset of stress testing. When we repeatedly put stress on our application for short periods of time, it becomes spike testing.

Interviewer: Great. What about capacity testing?

Candidate: Capacity testing is done in conjunction with capacity planning. With capacity testing, we determine the ultimate breaking point of the application; with capacity planning, we work out how many additional resources (such as memory or processing power) are necessary to support a given load. Capacity testing helps us determine a scaling strategy: whether we should scale up or scale out.

Interviewer: What is the difference between scale-up and scale-out?

Candidate: Scaling up is also known as vertical scaling, in which we add more resources to a single server, such as more RAM and more CPU power. When we scale out, we add another server and make our environment distributed, so the load is spread across 2 machines. Scaling out is also called horizontal scaling.

..to be continued......
For Part 2, Click here

Note: Please give your feedback and mention other interview questions you would like to see answered here.

Monday, April 4, 2016

A Tester's Letter to The Developer

Hi Mr. Developer,
I hope you are doing fine.
You may know me very well. I am Mr. Tester.

I know you don't like me much and I can understand that. You create something and I normally point out defects in your creation. It is natural to not feel good about that.

I am not here to convince you to like me, and I am not rehashing the Tester vs. Developer debate here. It has been discussed time and again, and I know you get the point: you now treat me as a fellow and partner, not an enemy. And... thank you for that. I treat you as my partner as well.

I am writing this letter for something else.

I know you are very knowledgeable, and you have probably heard what I am about to tell you many times. So I probably will not increase your knowledge.

There are 2 types of testing. 

White-box testing and Black-box testing. (I know you know that)

When I was a fresh graduate, I was told Black Box testing was done by Testers and White Box was done by ....ahem ahem ...Developers.

Let me rephrase testing types again for you.

There are two types of testing. 

Testing from the code side and Testing from the user side.

We, as testers, take care of the user side of testing. There are many stakeholders in the project, and each one, to some extent, tests the application from the user side. No other stakeholder will test from the code side except one: you, the developer. Everybody assumes (and rightly assumes) that unit testing will be done by you.

Not every developer ignores Testing from the Code Side.

I found out that this testing is routinely done by other developers in big organizations like Google, Microsoft and Facebook.  

You can object that these are big organizations and their developers can afford to do it. But even developers who build open source software are doing it.

I was reading a tutorial on the Django framework, and this paragraph caught my attention:

“You might have created a brilliant piece of software, but you will find that many other developers will simply refuse to look at it because it lacks tests; without tests, they won’t trust it.”

Jacob Kaplan-Moss, one of Django’s original developers, says “Code without tests is broken by design."

So why are you not doing it?

Maybe you are unaware of the benefits. (I am making good assumptions about you)

So let me state some benefits of Unit Testing for you.

Tests will save you time.

Your first objection to doing unit testing may be a lack of time. You have so many features to develop, tasks to do, and bugs to fix; adding the extra weight of writing unit tests may seem time-consuming. But in reality, tests will save you time.

You develop sophisticated applications. (Yeah.) You might have dozens of complex interactions between components, and a change in any of those components could have unexpected consequences (read: bugs) on the application's behavior. When a problem occurs, you will spend hours manually trying to identify the cause.

If you have written unit tests, these tests could execute in seconds and quickly point out which piece of code is the real culprit. (Before release)

Sometimes it may seem boring to tear yourself away from productive, creative programming work to face the unglamorous and unexciting business of writing tests, particularly when you know your code is working properly. But once you accept the benefit, it will save you a whole lot of time in debugging. (And also a lot of time for the testers, who otherwise have to test the defective build and find for you the bug that could have been caught by unit tests.)

Tests will not just identify problems, they will prevent them.

The presence of unit tests ensures that new changes do not introduce regressions into existing code. If a test case fails, you know before release where the actual problem lies, and bugs found earlier in the process are a lot easier to fix. Unit tests also give you confidence that the major functionality still works after new changes. When you develop a feature and write unit tests for it, you will feel a lot more confident about it, because you know it will not break in the future due to other code changes.

Tests will make your code more maintainable.

When you begin to write tests, you will find that much of your code is not testable at the unit level. This will force you to break existing functions into smaller functions that are more modular and general in nature, which automatically makes your code more maintainable.

Tests help teams code together seamlessly.

The previous points are written from the point of view of a single developer maintaining an application. Complex applications will be maintained by teams. Tests guarantee that colleagues don’t inadvertently break your code (and that you don’t break theirs without knowing). 

By now you must be convinced that you should write unit tests. But I will not stop there. I would like to show you some strategies to start doing unit testing right now. After all, I am your friend, right?


1. Sometimes it’s difficult to figure out where to get started with writing tests. If you have already written several thousand lines of code, choosing something to test might not be easy. In such a case, it’s fruitful to write your first test the next time you make a change, either when you add a new feature or fix a bug.

2. Some programmers follow a discipline called “test-driven development”; they actually write their tests before they write their code. But if you are not comfortable with that, you can code first and then write tests for it later.

3. It might seem that your tests will grow out of control. At this rate there will soon be more code in your tests than in your application.
It doesn’t matter.
Let them grow. For the most part, you can write a test once and then forget about it. It will continue performing its useful function as you continue to develop your program. At worst, as you continue developing, you might find that you have some tests that are now redundant. Even that’s not a problem; in testing redundancy is a good thing.

4. As long as your tests are sensibly arranged, they won’t become unmanageable. Good rules of thumb include having:
  • A separate TestClass for each module.
  • A separate test method for each set of conditions you want to test.
  • Test method names that describe their function.
5. Sometimes tests will need to be updated. In that case, many of your existing tests will fail, telling you exactly which tests need to be corrected to bring them up to date; to that extent, tests help look after themselves.
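To make these rules of thumb concrete, here is a small Python unittest sketch: one test class for one function, one test method per condition, and descriptive method names. (The function under test, apply_discount, is a hypothetical example of mine, not from any particular codebase.)

```python
import unittest

# A hypothetical function we want to test (illustrative example only).
def apply_discount(price, percent):
    """Return price reduced by percent; reject invalid input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# One TestCase per module; one test method per condition; descriptive names.
class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount_reduces_price(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_raises_value_error(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

# Run with: python -m unittest <this module>
```

When a test fails, its method name alone tells you exactly which condition broke.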

Mr. Developer, I respect your work. The above is a friendly suggestion, one that will not only benefit your organization but also make you a better developer. After all, quality is not just a list of features; it is an attitude.

I hope you will not take this letter as an offense against your development practices. Take it as a brotherly suggestion, given out of love for quality.

Thanks for reading this letter.
Feel free to write me a reply (and tell me that you have started writing unit tests).

With Love and respect,
From your friend and partner, 
Mr. Tester. 

Credit: This letter would not have been possible if I hadn't read the excellent tutorial on unit testing on the Django website. Full credit to their documentation team.

If this letter convinces even one developer to start unit testing, its purpose will be fulfilled.

Sunday, March 27, 2016

Test Automation on Android using Appium, C# and MSTest Part 2

This is the second part of the series "Test automation on Android using Appium, C# and MSTest". If you missed the first part, Click here.

Coding Time

Now it is time to code.
Open Visual Studio Ultimate (2010 or above); I am using VS2012.
Create a Unit Test project.
Install "Appium Web Driver" and "Selenium WebDriver" using the NuGet package manager (the easiest way).
If you don't want to use the NuGet package manager, you can manually download the Appium .NET driver and Selenium WebDriver C# libraries and add them to your solution.
Your References section should look like this.

Now open your UnitTest1.cs file and add the following namespaces to your using section:
using System;
using System.Threading;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;
using OpenQA.Selenium.Appium;
using OpenQA.Selenium.Appium.Android;
using OpenQA.Selenium.Remote;
In your [TestInitialize] section, write this code (I am assuming the Appium server is running locally on its default address):

    [TestInitialize]
    public void BeforeAll()
    {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.SetCapability("device", "Android");
        capabilities.SetCapability(CapabilityType.Platform, "Windows");
        capabilities.SetCapability("deviceName", "H30-U10");
        capabilities.SetCapability("platformName", "Android");
        capabilities.SetCapability("platformVersion", "4.3");
        capabilities.SetCapability("appPackage", "com.android.calculator2");
        capabilities.SetCapability("appActivity", "com.android.calculator2.Calculator");

        // Default address of a locally running Appium server
        driver = new AndroidDriver(new Uri("http://127.0.0.1:4723/wd/hub"), capabilities, TimeSpan.FromSeconds(180));
    }

In your [TestMethod], write this code; each button is tapped with Click() so the calculation actually runs:

    [TestMethod]
    public void TestCalculator()
    {
        var two = driver.FindElement(By.Name("2"));
        two.Click();
        var plus = driver.FindElement(By.Name("+"));
        plus.Click();
        var four = driver.FindElement(By.Name("4"));
        four.Click();
        var equalTo = driver.FindElement(By.Name("="));
        equalTo.Click();

        var results = driver.FindElement(By.ClassName("android.widget.EditText"));

        Assert.AreEqual("6", results.Text);
    }
The complete code should look like this.

    using System;
    using System.Threading;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Appium;
    using OpenQA.Selenium.Appium.Android;
    using OpenQA.Selenium.Remote;

    namespace AppiumSample
    {
        [TestClass]
        public class UnitTest1
        {
            public AndroidDriver driver;

            [TestInitialize]
            public void BeforeAll()
            {
                DesiredCapabilities capabilities = new DesiredCapabilities();
                capabilities.SetCapability("device", "Android");
                capabilities.SetCapability(CapabilityType.Platform, "Windows");
                capabilities.SetCapability("deviceName", "H30-U10");
                capabilities.SetCapability("platformName", "Android");
                capabilities.SetCapability("platformVersion", "4.3");
                capabilities.SetCapability("appPackage", "com.android.calculator2");
                capabilities.SetCapability("appActivity", "com.android.calculator2.Calculator");

                // Default address of a locally running Appium server
                driver = new AndroidDriver(new Uri("http://127.0.0.1:4723/wd/hub"), capabilities, TimeSpan.FromSeconds(180));
            }

            [TestCleanup]
            public void AfterAll()
            {
                // End the Appium session
                driver.Quit();
            }

            [TestMethod]
            public void TestCalculator()
            {
                var two = driver.FindElement(By.Name("2"));
                two.Click();
                var plus = driver.FindElement(By.Name("+"));
                plus.Click();
                var four = driver.FindElement(By.Name("4"));
                four.Click();
                var equalTo = driver.FindElement(By.Name("="));
                equalTo.Click();

                var results = driver.FindElement(By.ClassName("android.widget.EditText"));

                Assert.AreEqual("6", results.Text);
            }
        }
    }
To run this test case, click Test Menu > Windows > Test Explorer. A window will open on your left listing the test cases. Select your desired test case and click Run.

Now watch your Android phone: a calculator window will open, button 2 will be tapped, then button +, then button 4, and then button =.
After that, the result will show 6 and your test case should pass. Make sure your Appium server is running; the Appium window will show you all the logs.
Congratulations, you have just written your first automated test case in Appium, and it is running on a real Android device.

Some Explanation

Most of the code is self-explanatory, but I want to draw your attention to these two lines:

    capabilities.SetCapability("appPackage", "com.android.calculator2");
    capabilities.SetCapability("appActivity", "com.android.calculator2.Calculator");

For every application you want to test, you must know its package name and app activity name. To find these attributes, you can download a small Android app called apkInfo; it will show you the package name and activity name of any Android app installed on your phone. Pass these parameters here and the automation code will launch that app on your phone.


If you are familiar with Selenium WebDriver, coding in Appium is not very different; the only real hurdle is the configuration. If you have set everything up correctly, you should be able to start writing scripts for your Android device. I have tried to explain this as simply as possible; if you still run into problems setting up the environment, you can ask questions in the comments.

Test Automation on Android using Appium, C# and MSTest

This is a two-part series on test automation on Android using Appium, C# and MSTest.
For part 2, click here.

Prerequisites for this tutorial:

  1. Visual Studio Ultimate (2010 or above), because MSTest is included in it
  2. Android SDK for Windows (Download Link) (Website Link)
  3. Appium for Windows (Download Link) (Website Link)
  4. A real Android device running Android 4.2 or above (I am using a phone running Android 4.3)
  5. A USB cable to attach your Android phone to your PC
  6. ADB interface drivers for your phone (Link on how to get that) (very important step)

Some Configurations for Android SDK and Appium:

  • When you have installed the Android SDK, go to My Computer, right-click, click Properties, click Advanced System Settings, then click Environment Variables.
  • Create a new user variable named "ANDROID_HOME" and give the path to your SDK folder as the value. The default path is C:\Program Files (x86)\Android\android-sdk

  • Edit the PATH variable in the "System Variables" section. Append the paths to your tools folder and platform-tools folder, separated with ";".
The paths are
C:\Program Files (x86)\Android\android-sdk\tools
C:\Program Files (x86)\Android\android-sdk\platform-tools
See the image below.

  • Connect your Android phone with the USB cable. To make sure your Android phone is connected to your PC, do the following:
Go to C:\Program Files (x86)\Android\android-sdk\tools and click "uiautomatorviewer.bat". A window will open. See the image.

You can use this window to inspect the elements of your app on Android. Open the calculator app on your Android device and click the "Device Screenshot" button on this screen. If you receive the error message "No Android devices were found by adb",

that means the ADB interface drivers are not installed on your system; read point number 6 in the Prerequisites section again. If the Android device is connected successfully, you should see a snapshot along with the object map in this window, like this.

  • Unzip AppiumForWindows.zip into a folder and open Appium.exe; you should see a window like this.

Click the Android icon at the top left of this window. You will see a panel in which you can configure the platform and version of the Android device you want to test on. I have filled in the following configuration there.

Now click the Play button at the top right corner of the window. The Appium server will start with the configuration you have provided.

  • Developer options should be enabled on your Android phone, with these two settings:
USB debugging should be enabled.
Stay awake should be enabled.
If you have reached this point, congratulations: you have successfully configured all the required prerequisites.
In the second part, I will show how to code and run this script on your real Android device.