Thursday, April 12, 2018

BlazeMeter and its Significance


BlazeMeter is one of the best cloud-based performance testing platforms, where we can execute performance tests from JMeter scripts.
Scaling JMeter scripts with automatically provisioned cloud load generators makes BlazeMeter very flexible for heavy-load tests.

The cloud model amplifies elasticity in the application platform: the actual resources used by the application may grow or shrink based on the application load.

Problem Statement:

With the open-source JMeter tool alone, distributed execution of performance test scripts is difficult, and it is not easy to put heavy concurrent-user load on an application using only local or remote machines as load generators.
SOLUTION:
1. BlazeMeter executes JMeter scripts easily in a cloud-based environment, even for applications with massive user bases.
2. Load can be pumped into the application automatically by selecting load generators (LGs) in the cloud itself.
3. BlazeMeter can expand and shrink its capacity for handling users without issues, so applying load from the cloud is flexible.
4. Both horizontal and vertical scaling in terms of users are possible in BlazeMeter.
5. There are no conflicts over booking controller machines, as BlazeMeter takes care of slot allocation for each individual execution, isolating runs to produce accurate results.
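As an illustration, BlazeMeter's open-source Taurus runner can push an existing JMeter script to the cloud with a YAML config along these lines (the script name and load figures here are hypothetical):

```yaml
# Hypothetical Taurus config: runs an existing JMeter test plan on
# BlazeMeter-provisioned cloud load generators instead of local machines.
execution:
- scenario: checkout-flow
  concurrency: 500      # illustrative number of virtual users
  ramp-up: 5m
  hold-for: 30m

scenarios:
  checkout-flow:
    script: checkout.jmx  # assumed existing JMeter script

provisioning: cloud       # hand the load generation to the BlazeMeter cloud
```

Running this with the `bzt` command requires a BlazeMeter API key configured in the Taurus settings; with local provisioning the same file runs the script on your own machine.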
BUSINESS BENEFITS
1) Large numbers of users can be handled easily from a single repository by using the cloud
2) Flexible handling of users for applications by using BlazeMeter
3) The licensing cost for virtual users in the cloud is very low compared to on-premises load generators



Thursday, March 22, 2018

Mobile Performance Testing Using JMeter


Mobile applications are developed very rapidly, but only a few provide first-class service to end users. Any app that can be readily deployed onto a mobile device can be termed a mobile app, targeting a supported operating system such as iOS or Android.

Problem: 
Before launching a new mobile app to market, it is mandatory for an organization to ensure that the app's performance is consistent across different network speeds, since end users will access it over a variety of networks.

Pre-requisite: The machine running JMeter and the mobile device should be connected to the same network (Wi-Fi or mobile data) so that, while the app is navigated on the device, JMeter records its traffic through a common proxy.

SOLUTION:
This can be done in JMeter. To achieve it, add the setup below to the jmeter.properties file, covering the different network speeds available across mobile network operators.
Note: cps stands for characters per second.
httpclient.socket.http.cps=0
httpclient.socket.https.cps=0
A value of 0 means unlimited bandwidth. You can use these properties to limit the bandwidth as follows: in the bin folder of your JMeter installation, open the jmeter.properties file, set these two lines and restart JMeter. For example, to simulate GPRS:
httpclient.socket.http.cps=21888
httpclient.socket.https.cps=21888
Different bandwidths can be simulated by using the cps values in the table below.

Bandwidth            cps value
GPRS                 21888
3G                   2688000
4G                   19200000
WiFi 802.11a/g       6912000
ADSL                 1024000
100 Mb LAN           12800000
Gigabit LAN          128000000
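The cps figures above are simply the link bandwidth converted to bytes per second (cps = kbit/s × 1024 ÷ 8). A quick sanity check of the table, assuming nominal link speeds of 171 kbit/s for GPRS, 21 Mbit/s for 3G (HSPA+), and so on:

```python
def cps(kbit_per_s):
    """Convert a bandwidth in kbit/s to JMeter's characters-per-second value."""
    return kbit_per_s * 1024 // 8

# Nominal link speeds (kbit/s) -> the cps values in the table above
assert cps(171) == 21888              # GPRS
assert cps(21_000) == 2_688_000       # 3G (HSPA+)
assert cps(150_000) == 19_200_000     # 4G
assert cps(54_000) == 6_912_000       # WiFi 802.11a/g
assert cps(8_000) == 1_024_000        # ADSL
assert cps(100_000) == 12_800_000     # 100 Mb LAN
assert cps(1_000_000) == 128_000_000  # Gigabit LAN
print("all cps values check out")
```

The same formula lets you derive a cps value for any other network profile you need to simulate.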
Benefits:
1) Mobile app performance can be determined for the estimated number of users using JMeter before release to market.
2) Problems in a mobile app on any network are easy to fix once the issue is identified.
3) The result is highly valuable, trusted products in terms of performance.



Sunday, March 11, 2018

Sonar Tool and its Significance

In the performance engineering process, every project needs to tune its code so that application performance is optimized according to industry standards.
       Among the tools for promoting optimal coding techniques, SonarQube and SonarLint are very good choices.
SonarQube is an open platform to manage code quality. It covers seven axes of code quality:
1) Better architecture and design of the application
2) Reduced complexity in the application code
3) Fewer potential bugs
4) Reduced code duplication
5) Coding standards and rules that make the application more standardized
6) Better unit test results, as best coding techniques are followed
7) Comments in the application code that provide clearer, standardized information
SonarQube is a web-based application; rules, alerts, thresholds and other important settings are configured in its database and can be shared across teams and locations, which makes it very flexible to use.
SonarQube not only combines metrics but also mixes them with historical measures, making it easy to reuse techniques and problem-solving procedures in later releases of the same project, especially in Agile methodologies.
SonarLint gives developers immediate feedback on new bugs and quality issues injected into Java, JavaScript and PHP code while they are developing, helping them avoid wrong assumptions in their coding.
Features of SonarLint:
1) Developers can accurately identify potential Sonar issues straight from the SonarQube server at implementation time.
2) There is less load on Sonar servers, since analysis runs locally in SonarLint.
3) Users can view issue descriptions in a user-friendly manner, understandable even to a layman.
4) SonarLint supports multiple IDEs such as Eclipse, Visual Studio and IntelliJ.
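For reference, a SonarQube analysis is usually driven by a small sonar-project.properties file at the project root; a minimal sketch might look like the following (the project key, source path and server URL are placeholders):

```properties
# Hypothetical minimal SonarQube scanner configuration
sonar.projectKey=my-app                # placeholder project key
sonar.projectName=My App
sonar.projectVersion=1.0
sonar.sources=src                      # directory containing the code to analyze
sonar.sourceEncoding=UTF-8
sonar.host.url=http://localhost:9000   # URL of the SonarQube server
```

With this file in place, running the SonarQube scanner from the project root uploads the analysis to the configured server, where the quality axes above are reported.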

Tuesday, March 6, 2018

Data Base Tuning - Performance Tuning


Generally, a database is the repository where we store data in different forms such as tables, stored procedures and triggers.
Most of the time, the database performance problems we come across are:
1) Deadlocks (using SQL profiling we can identify and fix deadlock issues)
2) Queries consuming high response times (using an AWR report we can identify query bottlenecks by CPU time and wait time and fix the issues)
Most of these problems can be fixed using the following SQL tuning techniques:
  • Indexes should be applied to tables on the columns involved in joins and WHERE clauses.
  • Avoid row-by-row loops and cursors where a single set-based statement can handle the requirement.
  • The execution plan should be optimized for any query.
  • Do not use an INSTEAD OF UPDATE trigger where it leads to performance degradation.
  • Select only the columns you need instead of using the wildcard (*) to return all columns.
  • Replace IN and NOT IN with EXISTS and NOT EXISTS, which improves query performance.
  • Use the NOLOCK hint on tables where dirty reads are acceptable, which helps support concurrent users.
  • Avoid applying functions to columns in WHERE predicates; wrapping an indexed column in a function prevents index use.
  • Avoid correlated sub-queries; use joins and ordinary sub-queries instead.
  • Drop unnecessary temp tables at the end of each procedure; otherwise temp space can fill up and cause performance issues.
  • Limit the number of SELECT statements; it is best practice to get the required data in a single SELECT instead of multiple SELECTs.
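As a small illustration of the indexing advice above (using Python's built-in sqlite3; the table and column names are made up), adding an index on a column used in a WHERE clause turns a full table scan into an index search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
cur.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(10_000)],
)

query = "SELECT * FROM orders WHERE customer_id = 42"

# Without an index, SQLite scans the whole table to find matching rows.
before = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][3]

# Create an index on the column used in the WHERE clause.
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][3]

print(before)  # a SCAN of the orders table
print(after)   # a SEARCH using idx_orders_customer
```

The exact plan wording varies between SQLite versions, but the change from SCAN to SEARCH USING INDEX is the behavior the tuning rule is after.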
Index:
An index is used to speed up the performance of queries. It does this by reducing the number of database data pages that have to be visited/scanned.
Clustered Index:
 A clustered index is a special type of index that reorders the way records in the table are physically stored. Therefore a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.
  • By default it is created on the primary key of the table.
  • It follows a B-tree structure.
  • The leaf nodes contain the key and the actual data.
Non-Clustered Index:
A non-clustered index is a special type of index in which the logical order of the index does not match the physical storage order of the rows on disk. The leaf nodes of a non-clustered index do not contain the data pages.
  • It also follows a B-tree structure.
  • The leaf nodes contain the key and a reference to the data.
Defragmentation generally restores an index to proper working state:
  • Rebuild index: recreates the existing index by dropping it, to make sure the index works properly for the corresponding columns.
  • Reorganize index: refreshes the existing index without dropping it; this also helps keep the index working properly.
During a load test we may observe that a query written by a developer consumes a high response time; the indexing and defragmentation techniques above help bring that down.
How does the database manage the execution process for different queries and stored procedures?
  • Data reads and writes served from memory in the SGA (System Global Area) are known as logical reads and logical writes.
  • Data reads and writes that go to disk are known as physical reads and physical writes.
  • For an optimally maintained database server, we need to maintain a high ratio of logical reads and writes to physical ones.
Parsing:
Parsing is one stage in the processing of a SQL statement. When an application issues a SQL statement, the application makes a parse call to the database.
During the parse call, the database:
1.     Checks the statement for syntactic and semantic validity.
2.     Determines whether the process issuing the statement has privileges to run it.
3.     Allocates a private SQL area for the statement.
Tuning can also be done using the following tools, commands and options:
  • sp_who2 – identifies any process or stored procedure that is halting performance at a particular point in time; using the process id of the corresponding operation, we can kill the blocking process.
  • Profiler – identifies performance issues or deadlock-type problems in particular tables or stored procedures.
  • Tuning Advisor – automatically suggests where to add indexes to give a query optimized performance.
  • Activity Monitor – monitors server activity at the database level while a load test is in progress, for viewing database-related issues.
  • Execution plan – a key component for ensuring the plan is optimized; by examining the cost of each operation, the query can be rewritten in a better way.
  • Dynamic management views – properly using dynamic management views adds performance insight for any database.

Monday, February 26, 2018

Dynatrace Application Performance Monitoring Tool (APM)

Introduction:
As the information technology industry develops rapidly, applications with optimized performance need to be delivered to market using agile methodologies. To meet end users' expectations, there are many application performance monitoring tools on the market.
Dynatrace is one of the best industry-standard application monitoring tools.

Dynatrace Architecture:
Dynatrace has a client-server architecture that covers monitoring of client-side metrics and application-server metrics using Dynatrace agents, and automatically monitors database-server metrics.


In Dynatrace, all metric collection is agent-based, so the data is highly accurate for any customer whose application must scale in a competitive market.
The Dynatrace architecture has four important components:
· Dynatrace Client
· Dynatrace Collector
· Dynatrace Server
· Dynatrace Agents

The Dynatrace Client presents all the metrics of server health and application performance.
· It is the front-end component that shows the experience of using the application from the end user's point of view.

The Dynatrace Collector gathers all the information from the application server, web server and database server and forwards the traffic-related information.

The Dynatrace Server in turn receives all the information from the Collector and sends the server and application health information to the Dynatrace Client.

The Dynatrace Agents are the most important components; they monitor the application server, web server and database server to produce application-level metrics on both the client and server side, including code-level application issues.

Working Mechanism of Dynatrace

Working with the Dynatrace tool centres on two key concepts:

PureStacks

PurePaths

PureStacks: PureStacks follow a bottom-to-top approach to find the root cause of a problem in the application.

The bottom-to-top approach identifies application issues starting from the server level, which is mainly the hardware level: memory, CPU and disk problems, before going into code-level application problems.

An important thing we check in the PureStack view for the application server is:

(a) Host health: shows the health and hardware details of the application server.

1) CPU utilization: CPU utilization should always be under the threshold limit (80%). If CPU utilization is greater than or equal to 80%, we analyze a thread dump to find the reason for the high CPU utilization.

If a specific period shows high CPU while all the remaining resources look within acceptable limits, and no issues are found in the thread dump, we need to go for CPU sampling.

In CPU sampling we analyze which particular threads in that specific period are causing CPU waits or blocks, making the application consume high CPU.

2) Memory: memory consumed out of the total available memory. The acceptable level is less than 80% of total memory.

3) Network utilization: network utilization should not show spikes during the load test. It includes network parameters such as latency and bandwidth, to keep the channel at maximum bandwidth.

4) Disk: how disk reads and writes are progressing can be identified here, for properly maintaining file-related operations.

5) In the start center, go to Analyze Performance and analyze the load test results to see how the load test or corresponding performance test performed. We get metrics of response time and load vs. response time to analyze the performance of the test.
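The host-health checks above boil down to comparing sampled metrics against thresholds. A toy sketch of that logic (the metric names and 80% limits follow the text; the sample values are invented):

```python
# Thresholds from the host-health discussion above (percent of capacity).
THRESHOLDS = {"cpu_pct": 80.0, "memory_pct": 80.0}

def host_health_alerts(samples):
    """Return the metrics whose average sample meets or exceeds its threshold."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        values = samples.get(metric, [])
        if values and sum(values) / len(values) >= limit:
            alerts.append(metric)
    return alerts

# Invented samples: CPU is breaching its 80% limit, memory is fine.
samples = {"cpu_pct": [85.0, 91.0, 88.0], "memory_pct": [60.0, 62.0, 65.0]}
print(host_health_alerts(samples))  # ['cpu_pct']
```

A real monitoring tool does considerably more (baselining, trend analysis), but the threshold comparison is the first gate that triggers deeper analysis such as thread dumps or CPU sampling.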


(b) Application health:

In the application health view of each application server's JVM overview, we can see heap memory as measured throughout the performance test. We also get GC suspension time details and the thread count.

PurePaths follow a top-to-bottom approach to find the root cause of a problem in the application.

The top-to-bottom approach identifies application issues from the code level: which particular methods in the application take more response time, and which method hotspots cause the application to take longer to load pages.

PurePath:

This is multidimensional data for every transaction, giving clear visibility across the browser, web apps, web server, app server and database server.

It gives complete information about transactions, regardless of where they originated (browser or server), end to end: covering all web requests, calls, methods, database calls, etc.

User Action PurePath:

This is a dashlet that captures all PurePaths generated in the browser, i.e. those generated by user actions (e.g. clicking a button, selecting a link).

A user action generated in the browser can consist of many web requests; these can be traced end to end using the User Action PurePath option.