Monday, April 18, 2016
An "Ideal" Interview with the Tester for Performance Engineering Position Part 2
Interviewer: What do you do when you are asked to start performance testing on a web application?
Candidate: First of all, I try to understand the application: its main functionalities, its architecture and the technologies used.
Of course, much of this information will come from the developers. We will also need their help in the later stages of performance testing.
After that, I normally follow 5 steps to do performance testing.
The first step is to identify the performance test environment.
A good rule of thumb is that the test environment should match the production environment exactly. In many organizations, however, that is not feasible due to cost, so we try to create an environment that is as close to production as possible. The test environment could be a virtual machine whose RAM and processing power can be increased or decreased as needed.
Some critical factors to consider are:
Network limitations, hardware configurations, the load generation tool, the logging mechanism, licensing constraints and so on. We also need the help of the Network/IT team in designing the test environment.
The second and most important step is to identify the performance acceptance criteria. These will be provided by the business analysts and product owners who understand the business side of the application. If the acceptance criteria are not clearly defined, the whole performance testing activity becomes haphazard and inconclusive.
Interviewer: So can you give some examples of performance acceptance criteria?
Candidate: Performance criteria depend heavily on the context of the web application, but I can give some examples. Typical performance characteristics include:
Response Time:
For example: response times for all business operations during normal and peak load should not exceed 6 seconds.
Throughput:
For example: the system must support 25 book orders per second.
Resource Utilization:
For example: no server should have sustained processor utilization above 80 percent under any anticipated load, or no single requested report is permitted to lock more than 20 MB of RAM and 15 percent processor utilization on the Data Cube Server.
There can be any number of performance acceptance criteria, but they should all be quantifiable and correlate to user satisfaction.
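To make "quantifiable" concrete, here is a minimal sketch (my own illustration; the measured values are made up and the limits mirror the examples above) of how such criteria could be written down and checked automatically in Python:

```python
# A minimal sketch: acceptance criteria as quantifiable thresholds.
# Limits mirror the examples above; the "measured" values are made up.

acceptance_criteria = {
    "max_response_time_s": 6,   # all business operations, normal and peak load
    "min_throughput_ops": 25,   # book orders per second
    "max_cpu_percent": 80,      # sustained processor utilization on any server
}

measured = {
    "max_response_time_s": 4.8,
    "min_throughput_ops": 31,
    "max_cpu_percent": 72,
}

def evaluate(criteria, results):
    """Report pass/fail for each criterion."""
    for name, limit in criteria.items():
        value = results[name]
        # "min_" criteria must be met or exceeded; "max_" must not be exceeded.
        ok = value >= limit if name.startswith("min_") else value <= limit
        print(f"{name}: measured {value}, limit {limit} -> {'PASS' if ok else 'FAIL'}")

evaluate(acceptance_criteria, measured)
```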
Interviewer: Good. You mentioned throughput; can you define what it is?
Candidate: It is the number of units of work the server can handle per unit of time. You can measure throughput in terms of
requests per second or
reports per year or
hits per second or
calls per day or any other number per unit time.
The higher the throughput, the better the server's performance.
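As a quick worked example with hypothetical numbers: a server that completes 9,000 requests in a 5-minute window has a throughput of 9000 / 300 = 30 requests per second.

```python
# Hypothetical numbers: 9,000 completed requests over a 5-minute window.
completed_requests = 9000
window_seconds = 5 * 60  # 300 seconds

throughput = completed_requests / window_seconds
print(f"Throughput: {throughput:.1f} requests/second")  # -> 30.0
```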
Interviewer: Great. So what if we have no idea about the users' expectations? What strategy should we use?
Candidate: You can just ask the users what performance they are expecting.
Interviewer: (Laughing...) No, no. I meant to say: suppose we are building a product that so far has no users, because we have not launched it yet. How would we define the performance criteria for it?
Candidate: In that case, you should use benchmarking or baselining.
Interviewer: What are benchmarking and baselining?
Candidate: Benchmarking is the process of comparing the performance of your system against industry standards set by other organizations. One example of benchmarking is looking at how your competitors' applications perform. Another is reading research papers from top performance engineers to see what they propose as ideal benchmarks for your domain.
A baseline, on the other hand, is a comparison with your own previous releases. You set one particular release as your baseline, and the performance of all future releases is compared against it. If the results of a release are much worse than the baseline, something is wrong with its performance; if a release performs better, you can make it the new baseline.
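As an illustration of baselining (the release names, the 95th-percentile response-time metric, and the 10% degradation threshold are all my own hypothetical choices, not from the interview):

```python
# A sketch of baselining: compare each release's 95th-percentile response
# time against the chosen baseline release. All numbers are illustrative.

baseline = {"release": "2.0", "p95_response_ms": 1200}

releases = {
    "2.1": 1250,
    "2.2": 1480,  # noticeably slower than the baseline
    "2.3": 1100,  # better than the baseline: a candidate new baseline
}

DEGRADATION_THRESHOLD = 0.10  # flag anything more than 10% slower

for release, p95 in releases.items():
    change = (p95 - baseline["p95_response_ms"]) / baseline["p95_response_ms"]
    status = "DEGRADED" if change > DEGRADATION_THRESHOLD else "OK"
    print(f"Release {release}: p95 {p95} ms ({change:+.0%} vs baseline) -> {status}")
```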
Interviewer: Good answer. So what is the next step after identifying the performance acceptance criteria?
Candidate: The next step is to design the tests. When designing tests, we should identify the key usage scenarios, determine appropriate variability across users, identify and generate test data, and specify the metrics to be collected.
When designing tests, our goal should be to create real-world simulations so that the results help stakeholders make informed business decisions. We should consider:
Most common usage scenarios,
Business-critical usage scenarios,
Performance-intensive business scenarios,
And high-visibility usage scenarios.
It is useful to identify the metrics related to the performance acceptance criteria during test design, so that the method of collecting those metrics can be integrated into the tests when the test design is implemented.
Interviewer: What other considerations should you keep in mind while designing tests?
Candidate: When we design realistic test scenarios, we should incorporate realistic simulations of user delays and think times, which are crucial to the accuracy of the test. Secondly, we should not allow the tool's capabilities to influence our test design decisions: better tests almost always result from designing on the assumption that they can be executed, and only afterwards checking what the tool can do. Thirdly, we should involve the developers and network administrators in determining which metrics are likely to add value and which method best integrates the capture of those metrics into the test. For example, if we want to know the CPU utilization, we should ask the network administrators how to capture CPU utilization in our test.
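The interview does not prescribe a tool, but as one concrete illustration of modeling think time, here is a short sketch using Locust, a Python load-testing tool; the URL paths, task weights and the 3-to-7-second delay range are hypothetical:

```python
# One way to model user think time, using Locust (https://locust.io) as an
# example tool. Paths and the 3-7 second think-time range are hypothetical.
from locust import HttpUser, task, between

class ShopUser(HttpUser):
    # Each simulated user pauses 3-7 seconds between tasks,
    # approximating real-world think time.
    wait_time = between(3, 7)

    @task(3)  # weight: browsing is the most common scenario
    def browse_catalog(self):
        self.client.get("/books")

    @task(1)  # weight: ordering happens less often
    def place_order(self):
        self.client.post("/orders", json={"book_id": 42, "qty": 1})
```

A file like this could then be run with something like locust -f loadtest.py --host https://your-test-environment.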
Interviewer: Good. So far we have discussed 3 of the steps for starting performance testing. Through these first 3 steps, what do you feel the biggest challenges are?
Candidate: I think the biggest challenge is to correctly identify the performance acceptance criteria. I cannot stress enough how important this step is, and how important it is for every stakeholder to be involved in it. Developers, testers, product managers, network engineers and performance engineers should all be part of this decision.
The second biggest challenge is getting the first relatively realistic test implemented, with users simulated in such a way that the application under test cannot legitimately tell the difference between simulated users and real ones. This takes significantly longer, and again input from all stakeholders is essential at this step.
Interviewer: Right. I agree. So what is the next step?
Candidate: Two steps remain: execution of the tests and analysis of the results.
To be continued.....
Monday, April 11, 2016
An "Ideal" Interview with the Tester for Performance Engineering Position Part 1
Interviewer: Hello Mr. Tester. How are you?
Candidate: Hello Sir, Thank you for asking. I am really good. How are you?
Interviewer: I am fine. Thanks. Did you find the office easily?
Candidate: Yes sir, the address was explained very clearly and the map helped a lot.
Interviewer: Great. What would you like to drink? Coffee, Tea?
Candidate: Sir a glass of water will be fine.
Interviewer: Sure. (Rang the bell and asked the peon to bring a glass of water. The peon delivered it and the candidate drank it.)
Interviewer: So Mr. Tester, let me introduce myself. I am Mr. QA Manager. I have been working here for the last 5 years. I have a team of 6 manual testers and 2 automation engineers. We are now looking to expand our team, and lately we have been trying to hire a performance engineer. Your resume shows you are well versed in performance testing, and you have listed the different tools you use in your current organization. We will come to that later, but first I really want to hear your introduction from you. Can you please tell me briefly about yourself?
Candidate: Sure sir (nervously folding his hands and leaning forward a little). My name is Mr. Tester and I have been working in the software quality assurance field for the last 3 years. I graduated in 2012. I was fortunate to be given the chance to do performance testing very early in my career. I have done performance testing on around 10 web applications, with user loads varying from 25 users to 1 million users. I got the chance to work with different performance testing tools during my tenure. Now I am looking for a better opportunity and to expand my learning and growth.
Interviewer: That's great. You mentioned user load in your answer; that is interesting. I always get confused between performance testing and load testing. Can you help me a little and highlight the difference between performance testing and load testing?
Candidate: (With a confident voice) That is very easy, sir. In performance testing, we test non-functional aspects like speed and... scalability and... stability. We test... hmm... response times, throughput and resource utilization levels against the performance objectives of our application. You can say that performance testing is the superset of all other subcategories of performance-related testing.
Load testing is a subcategory of performance testing. In load testing, we validate how the application behaves in terms of performance when it is subjected to the expected user load.
Interviewer: And what is stress testing?
Candidate: Stress testing is also a subcategory of performance testing. In stress testing, we test the application under a user load that is beyond our normal and peak load expectations.
Interviewer: Can you explain the difference between load testing and stress testing with an example?
Candidate: Sure sir. Suppose we want to test the performance of an e-commerce application. We expect that when we launch the store, a maximum of around 500 users will access the website in 1 hour, and we designed the website with that maximum number in mind.
So 500 users is the peak load condition for our website, and 100 to 300 users is the normal load condition. We should test the application for at least what we are expecting, so we will use a tool to simulate 100 users, then 200, then 300, and so on up to 500 virtual users, and we will validate that the response times of the web pages and the CPU and memory utilization of the server stay within acceptable limits at each user level. In simple words, we verify the behavior of our application under normal and peak load conditions. This is load testing.
When we want to stress test our application, we will increase the user load beyond 500 users and see how the application reacts. For stress, it is not necessary to increase only the user load: we can limit the server's memory, or make its disk space insufficient, just to see how the application reacts. Does it crash? If so, does it crash gracefully? What are the page response times under these stressful conditions? And so on.
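To illustrate the stepped user loads described here, a bare-bones, standard-library-only sketch follows; the target URL is a placeholder, and a real test would use a proper load-generation tool and also monitor server CPU and memory:

```python
# A bare-bones sketch of stepping the virtual-user count up through the
# normal range, the 500-user peak, and beyond it into stress territory.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://test-env.example.com/"  # hypothetical test-environment URL

def one_request(_):
    """Time a single request; return None on failure (counted under stress)."""
    start = time.perf_counter()
    try:
        urllib.request.urlopen(TARGET, timeout=30).read()
        return time.perf_counter() - start
    except Exception:
        return None

for users in (100, 200, 300, 500, 750, 1000):  # 500 = peak; beyond = stress
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(one_request, range(users)))
    ok = [r for r in results if r is not None]
    errors = len(results) - len(ok)
    avg = sum(ok) / len(ok) if ok else float("nan")
    print(f"{users} users: avg response {avg:.2f}s, {errors} errors")
```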
Interviewer: Good example and a nice answer. Your concepts about these terms seem clear. Let me ask you about a few more performance testing terms. Tell me about endurance testing and spike testing.
Candidate: Endurance testing is a subset of load testing: when we put our application under normal and peak load conditions over an extended period of time, it becomes endurance testing.
Spike testing is a subset of stress testing: when we repeatedly put stress on our application for short periods of time, it becomes spike testing.
Interviewer: Great. What about capacity testing?
Candidate: Capacity testing is done in conjunction with capacity planning. With capacity testing, we determine the ultimate breaking point of the application; with capacity planning, we work out how many additional resources (such as memory or processing power) are necessary to support a given load. Capacity testing helps us choose a scaling strategy: whether to scale up or scale out.
Interviewer: What is the difference between scale-up and scale-out?
Candidate: Scaling up is also known as vertical scaling: we add more resources, such as RAM and CPU power, to a single server. When we scale out, we add another server and make our environment distributed, so the load is spread across 2 machines. Scaling out is also called horizontal scaling.
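A back-of-the-envelope example of the scale-out arithmetic (all numbers hypothetical): if one server sustains 30 requests per second and the anticipated peak is 100 requests per second, we need ceil(100 / 30) = 4 servers.

```python
# Back-of-the-envelope scale-out calculation (all numbers hypothetical).
import math

per_server_throughput = 30  # requests/second one server can sustain
anticipated_peak = 100      # requests/second expected at peak

servers_needed = math.ceil(anticipated_peak / per_server_throughput)
print(f"Scale out to {servers_needed} servers")  # -> 4 servers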
To be continued.....
For Part 2, Click here
Note: Please give your feedback and other interview questions for which you wish to see the answers here.
Monday, April 4, 2016
A Tester's Letter to The Developer
Hi Mr. Developer,
I hope you are doing fine.
You may know me very well. I am Mr. Tester.
I know you don't like me much and I can understand that. You create something and I normally point out defects in your creation. It is natural to not feel good about that.
I am not here to convince you to like me, nor am I going to rehash the Tester vs. Developer debate here. It has been discussed time and again, and I know you get the point: you now treat me as a fellow and a partner, not an enemy. And... thank you for that. I treat you as my partner as well.
I am writing this letter for something else.
I know you are very knowledgeable, and you have probably heard what I am about to tell you many times already. So I probably will not add to your knowledge.
There are 2 types of testing.
White-box testing and Black-box testing. (I know you know that)
When I was a fresh graduate, I was told that black-box testing was done by testers and white-box testing was done by... ahem ahem... developers.
Let me rephrase testing types again for you.
There are two types of testing.
Testing from the code side and Testing from the user side.
We, as testers, take care of the user side of the testing. There are many stakeholders in a project, and each one, to some extent, tests the application from the user side. No stakeholder will test from the code side except one: you, the developer. Everybody assumes (and rightly assumes) that unit testing will be done by you.
Not every developer ignores testing from the code side.
This kind of testing is done routinely by developers in big organizations like Google, Microsoft and Facebook.
You may object that those are big organizations whose developers can afford to do it. But even developers who build open source software are doing it.
I was reading a tutorial on the Django framework, and this paragraph got my attention:
“You might have created a brilliant piece of software, but you will find that many other developers will simply refuse to look at it because it lacks tests; without tests, they won’t trust it.”
Jacob Kaplan-Moss, one of Django’s original developers, says “Code without tests is broken by design."
So why are you not doing it?
Maybe you are unaware of the benefits. (I am assuming the best about you.)
So let me state some benefits of Unit Testing for you.
Tests will save you time.
Your first objection to unit testing may be lack of time. You have so many features to develop, tasks to do and bugs to fix; adding the extra weight of writing unit tests may seem time-consuming. But in reality, tests will save you time.
You develop sophisticated applications. (Yeah.) You might have dozens of complex interactions between components, and a change in any of those components could have unexpected consequences (read: bugs) on the application's behavior. If a problem occurs, you will spend hours manually trying to identify the cause.
If you have written unit tests, those tests can execute in seconds and quickly point out which piece of code is the real culprit. (Before release.)
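As a tiny illustration (my own example, not from the letter) of a test that pinpoints the culprit in seconds:

```python
# The test names the exact function a "harmless" change broke,
# in seconds rather than hours of manual debugging.
import unittest

def order_total(price, quantity, discount=0.0):
    """Total for an order line; discount is a fraction, e.g. 0.10 for 10%."""
    return price * quantity * (1 - discount)

class OrderTotalTests(unittest.TestCase):
    def test_no_discount(self):
        self.assertAlmostEqual(order_total(10.0, 3), 30.0)

    def test_ten_percent_discount(self):
        # If someone later changes the discount handling, this fails
        # immediately and names order_total as the culprit.
        self.assertAlmostEqual(order_total(10.0, 3, discount=0.10), 27.0)

if __name__ == "__main__":
    unittest.main()
```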
Sometimes it may seem boring to tear yourself away from your productive and creative programming work to face the unglamorous and unexciting business of writing tests, particularly when you know your code is working properly. But once you accept the benefit, testing will save you a whole lot of time in debugging. (And also a lot of the testers' time: otherwise we have to test the defective build and then find for you the bug that could have been caught in unit tests.)
Tests will not just identify problems, they will prevent them.
The presence of unit tests ensures that no new change introduces a regression into existing code. If a test case fails, you know before release where the actual problem lies, and bugs found earlier in the process are a lot easier to fix. Tests also give you confidence that the major functionalities still work after new changes. When you develop a feature and write unit tests for it, you will feel a lot more confident about that feature, because you know it will not break in the future due to other code changes.
Tests will make your code more maintainable.
When you begin to write tests, you will find that much of your code is not testable at a unit level. This will force you to break existing functions into smaller functions that are more modular and generalized in nature, which automatically makes your code more maintainable.
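A small before-and-after sketch (my own example) of what this refactoring looks like: the pure calculation is pulled out of the I/O-heavy function so it can be unit tested in isolation.

```python
# Before: calculation buried in I/O -- hard to test at a unit level.
def report_total_before(path):
    with open(path) as f:
        prices = [float(line) for line in f]
    total = sum(prices) * 1.17  # "magic" tax factor mixed into I/O code
    print(f"Total: {total:.2f}")

# After: the pure function is trivially testable; I/O stays at the edge.
TAX_FACTOR = 1.17  # hypothetical tax factor, now named and overridable

def total_with_tax(prices, tax_factor=TAX_FACTOR):
    """Pure calculation: easy to unit test with plain lists of numbers."""
    return sum(prices) * tax_factor

def report_total(path):
    with open(path) as f:
        prices = [float(line) for line in f]
    print(f"Total: {total_with_tax(prices):.2f}")
```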
Tests help teams code together seamlessly.
The previous points are written from the point of view of a single developer maintaining an application, but complex applications are maintained by teams. Tests guarantee that colleagues don't inadvertently break your code (and that you don't break theirs without knowing).
By now you must be convinced that you should write unit tests. But I will not stop there; I would like to show you some strategies for starting unit testing right now. After all, I am your friend, right?
Strategies:
1. Sometimes it’s difficult to figure out where to get started with writing tests. If you have already written several thousand lines of code, choosing something to test might not be easy. In such a case, it’s fruitful to write your first test the next time you make a change, either when you add a new feature or fix a bug.
2. Some programmers follow a discipline called “test-driven development”; they actually write their tests before they write their code. But if you are not comfortable with that, you can code first and then write tests for it later.
3. It might seem that your tests will grow out of control, and that at this rate there will soon be more code in your tests than in your application.
It doesn’t matter.
Let them grow. For the most part, you can write a test once and then forget about it. It will continue performing its useful function as you continue to develop your program. At worst, as you continue developing, you might find that you have some tests that are now redundant. Even that's not a problem; in testing, redundancy is a good thing.
4. As long as your tests are sensibly arranged, they won't become unmanageable. Good rules of thumb include having:
- A separate TestClass for each module.
- A separate test method for each set of conditions you want to test.
- Test method names that describe their function.
(A minimal sketch of this arrangement follows this list.)
5. Sometimes tests will need to be updated. In that case, many of your existing tests will fail, telling you exactly which tests need to be corrected to bring them up to date; to that extent, tests help look after themselves.
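Here is that sketch, using a hypothetical "shipping" module: one test class per module, one test method per condition, and method names that describe what they check.

```python
# A sketch of the rules of thumb above, for a hypothetical shipping module.
import unittest

def shipping_cost(weight_kg):
    """Flat 5.00 up to 2 kg, then 2.50 per extra kg (hypothetical rules)."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.00 if weight_kg <= 2 else 5.00 + 2.50 * (weight_kg - 2)

class ShippingModuleTests(unittest.TestCase):       # one TestClass per module
    def test_light_parcel_gets_flat_rate(self):     # one method per condition
        self.assertEqual(shipping_cost(1.5), 5.00)

    def test_heavy_parcel_adds_per_kg_surcharge(self):
        self.assertEqual(shipping_cost(4), 10.00)

    def test_non_positive_weight_is_rejected(self):
        with self.assertRaises(ValueError):
            shipping_cost(0)

if __name__ == "__main__":
    unittest.main()
```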
Mr. Developer, I respect your work. The above is a friendly suggestion, one that will not only benefit your organization but will also make you a better developer. After all, quality is not just a list of features; it is an attitude.
I hope you will not take this letter as an offense against your development practices. Take it as a brotherly suggestion, given out of love for quality.
Thanks for reading this letter.
Feel free to write me a reply (and tell me that you have started writing unit tests).
With Love and respect,
From your friend and partner,
Mr. Tester.
Credit: This letter would not have been possible if I hadn't read the excellent tutorial on unit testing on the Django website. Full credit to their documentation team:
https://docs.djangoproject.com/en/1.9/intro/tutorial05/
Note:
If this letter convinces even one developer to start unit testing, its purpose will be fulfilled.