Technology


A software requirement is a detailed description of the system under implementation. It outlines the practical usage of a product or service, and the conditions or capabilities to which a system must conform. Requirements can range from high-level abstract statements of services or system constraints to detailed mathematical functional specifications. Here we will discuss requirement analysis and its considerations with respect to QA.

Software development life cycle (SDLC) models describe the different phases of the software cycle and the order in which those phases are executed: requirements gathering and analysis, design, implementation or coding, testing, deployment, and maintenance.

What is Requirement Analysis: It is the process of determining user expectations for a system under consideration. These expectations should be quantifiable and detailed.

Requirement Analysis:

  • Serves as the foundation for test plans and the project plan
  • Serves as an agreement between the developer and the customer
  • Makes stated and unstated requirements clear
  • Validates requirements for completeness, unambiguity and feasibility

 

The picture below depicts the consequences of poor requirement analysis and their impact on the software development life cycle.

[Image: Requirements Analysis]

Here we can clearly see that if requirement analysis is not done in the early phases of the SDLC, fixing the resulting issues in later phases has a huge impact. Consequences of poor requirement analysis include incorrect feature delivery, poor product quality, a large number of change controls to fix system flaws, extension of project deadlines, and so on. The longer we delay analysing the requirements, the more it costs, and that impacts project delivery and quality.

 

Challenges in the requirement analysis phase from the QA perspective:

  • In the early stages of the SDLC, the scope is not clearly defined.
  • Many times there is an ambiguous understanding of processes.
  • Communication between the project team and stakeholders plays an important role.
  • Insufficient input from the customer leads to assumptions, and those are often not accepted in UAT.
  • Inconsistency within a single process across multiple users.
  • Conflicting customer views.
  • Frequent new requirements.

The tools and techniques used for analyzing requirements are:

  1. Use cases: a methodology used in requirement analysis to identify, clarify, and organize the requirements. A use case is a set of possible sequences of interactions between the system and its users in a particular environment, related to a particular goal.
  2. Requirement Understanding Document (RUD): a document covering the details of the requirement understanding, including:
    • Assumptions
    • System details
    • Logical system requirements
    • System entities
    • Hardware
    • Acceptance criteria
  3. Prioritize each requirement.
  4. Discuss with the team and identify the testing scope.
  5. Break down requirements into tasks and user stories.

How to Analyse Requirements?

  • Find out what the software has to do.
  • Identify requirements by asking questions like why, what, who and how.
  • Find out how complex the application would be and its impact on testing.
  • Determine which things would need to be tested.

Requirement validation: Validate requirements against the points below so that, at the end of the requirement analysis phase, all required information is available.

  1. Correctness: find incorrect statements/requirements.
  2. Completeness: find missing requirements.
  3. Feasibility: find which features are possible to test and which are beyond the scope.
  4. Testability: determine which types of testing are applicable.
  5. Ambiguity: ensure a single interpretation of each requirement (a statement is unclear when it has multiple meanings).
  6. Consistency: check that requirements do not contradict each other and that each points to a single requirement.

After validating the whole requirement set, go ahead and categorize it into three types: functional, non-functional and special requirements. This categorization will help in creating detailed test cases for the different testing types.

 

QA Role:

QA is involved in the requirement analysis activity to ensure that the requirements identified by the BA and accepted by the customer are measurable. This activity also provides inputs to various stages of the SDLC, helping to identify resource availability, scheduling and test preparation. QA needs to perform the activities below.

  • Analyze each and every requirement from the specification document and use cases.
  • List down high-level scenarios.
  • Clarify queries and functionality with stakeholders.
  • Offer suggestions for implementing the features and raise any logical issues.
  • Raise defects or clarifications against the specification document.
  • Track the defects or clarifications raised against the specification document.
  • Create high-level test scenarios.
  • Create a traceability matrix.

Outcome of the requirement analysis phase:

  • Requirement Understanding Document.
  • High-level scenarios.
  • High-level test strategy and testing applicability.

  

With all the above-mentioned techniques and the requirement analysis checklist in hand, the tester is ready to sign off from the requirement analysis phase.


Jarvis: [while Tony is wearing the Mark II Armor] Test complete. Preparing to power down and begin diagnostics... 

Tony Stark: Uh, yeah, tell you what. Do a weather and ATC check, start listening in on ground control. 

Jarvis: Sir, there are still terabytes of calculations required before an actual flight is... 

Tony Stark: Jarvis... sometimes you gotta run before you can walk. 

If you are an ardent fan of Mr. Tony Stark (like I am), then you would have guessed that this is from Iron Man (2008).

Well, this might not be fiction or fantasy anymore; it is already happening! Bots like Jarvis are taking over the world. Siri, Cortana and Google Assistant are all living (are they really living beings?? Anyway, moving on!) examples of one form of such bots. Now various organizations are deploying bots as their first point of interaction with consumers/customers. Does this really make sense, or is it just another technology fad? Let's find out.

A chatbot, or simply a bot, is actually a way of using (or reusing) an existing channel to interact with users. We already use conversation-based interfaces in many ways in our day-to-day life, be it discussing requirements with a customer over Skype or resolving a query about your phone bill through chat or IVR. So bot-ifying your business or app does make a whole lot of sense. Some clear and visible benefits are:

1) Always-connected customer experience: Bots can ensure that your customers are always attended to and always responded to, and hence provide a certain level of assured customer experience.

2) Reaching out to customers - really: Given that bots can operate on various channels like Skype, Slack, FB Messenger, SMS and email - you name it - you are also able to cover a wide variety of customers. This is particularly relevant for B2C kinds of businesses.

3) Better use of human capital: For a business that needs a good support infrastructure to handle customer queries and complaints, a bot-based solution creates the opportunity to move personnel to more meaningful tasks. For instance, the initial conversation with the customer can be handled by the bot, and after qualifying the need to talk to an expert, a human expert gets involved. This can also save costs and may improve efficiency.

4) 24x7 availability at low cost: Bots don't need holidays or breaks :) Once you build them, train them and have the right technology set up to keep them available at all times, you are all set. You will never lose a customer or a lead.

Following are some business scenarios where bots are already being deployed/used or can be used:

1) You log into your bank's app on your smartphone and are greeted by a bot that can answer all your basic queries: account balance, your last three transactions, the credit card amount that is due, and so on.

2) You visit an online shopping site and start talking to a bot. It suggests the right products for you to buy based on your chat history or shopping history. More than that, it completes the transaction end-to-end, right through payment.

3) [This is developer special] You want to check out the latest changes that have gone in the build that was deployed just now. You can talk to a Slack bot and get all these details.

4) [This could be too futuristic] You can talk to your home to keep itself ready to welcome you with the AC temperature rightly set, correct lighting and maybe hot food in the oven too (thanks to IoT).

5) And many more..

Ok; so you get the point. You've got to build bots. Here is how.

A typical bot-based architecture looks like the following:

1) Channels: These are essentially the apps that users are already familiar with or are already using. It could be Skype, Slack, Facebook Messenger or WhatsApp; you can even have email or a voice recognition system as a channel. The key aspect to keep in mind here is the discovery of the bot through channels: you need to publish your bot through these channels so that users can add it to their IMs.

2) Bot framework: This is the key component that makes a bot a bot - it is basically the brain of the bot. You have plenty of options here, including Microsoft Bot Framework, api.ai, wit.ai and so on. Most of these are based on NLP (natural language processing), and you will have to train them so that they "understand" your business well. This is a crucial and yet tricky part.

3) Peripheral services: You will typically need peripheral services and integrations to take care of things like authentication (using Active Directory or Google/Facebook sign-on services), data management (databases), analytics (insights about usage), scheduling (calendar/email integration) etc.

Based on your business requirements, time-to-market, cost of operation and the maturity of bot frameworks, you can pick the right bot framework. Most frameworks support most of the channels.
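
To make the channels-plus-brain split concrete, here is a minimal, hypothetical sketch in Java. It is not based on any particular bot framework: a bare-bones HTTP webhook (the kind of endpoint a channel would POST user messages to) hands the text to a trivial keyword-based "intent" resolver that stands in for a real NLP framework. All names and replies are illustrative.

import com.sun.net.httpserver.HttpServer;

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class MiniBot {

    // Stand-in for a real NLP bot framework: a trivial keyword-based intent resolver.
    static String resolveIntent(String text) {
        String t = text.toLowerCase();
        if (t.contains("balance")) return "Your account balance is ...";
        if (t.contains("hi") || t.contains("hello")) return "Hello! How can I help you today?";
        return "Sorry, I did not get that. Let me connect you to a human expert.";
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // A channel (Slack, FB Messenger, ...) would POST the user's message to this endpoint.
        server.createContext("/webhook", exchange -> {
            String message = new String(exchange.getRequestBody().readAllBytes(), // Java 9+
                    StandardCharsets.UTF_8);
            byte[] reply = resolveIntent(message).getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(reply);
            }
        });
        server.start();
        System.out.println("Bot webhook listening on http://localhost:8080/webhook");
    }
}

In a real deployment, the resolveIntent step is where a trained NLP framework would sit, and the webhook would verify the channel's signature before trusting the payload.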

Too good to be true? Well, there could be the following challenges with bot-based solutions:

1) Data privacy and security: This could be critical for bots dealing with financial data, healthcare information etc. You don't want your bots to be hacked or compromised, so you need to ensure that your bots have the right security mechanisms in place.

2) Maturity of underlying technologies: I am referring mainly to the NLP aspect of bot frameworks here. While there is a lot of euphoria around bots, the frameworks still need to be tested well for scalability and reliability, especially the robustness of their NLP.

3) The "human" angle: Well, a machine can never be human. So while we have bots conversing with humans, there will be instances where human intervention is needed, whether to pacify a really angry customer or to handle a complex scenario. While designing bot-based solutions, we need to be cognizant of this.

Needless to say, the idea of having a bot do the talking (literally) has merit. It's up to us humans to make the right use of them :)


While NoSQL and NewSQL systems are maturing as high-performance data store options and are being adopted increasingly, relational databases are based on a proven and solid model. Many scalable products still use them, and as their databases grow into large clusters they need sharding, caching and routing to protect their investment without significant re-engineering and to operate 24×7.

 

Lack of adequate caching is one of the most common problems that performance engineers come across when investigating performance issues. Many performance problems can be solved by the effective application of caching: reducing the frequency of expensive operations like database accesses or web page fetches, or reducing the execution count of expensive functions by memoization. Caching can be leveraged at all layers, from processors and disks to CDNs for web applications, web servers, databases, filesystems and so on. Caches can be as simple as the dictionaries/hash tables provided by programming languages or as complex as distributed hash tables (DHTs) or enterprise grids. However, we should use caching when evidence of a bottleneck demands it, not as a golden hammer or a band-aid. Among the various caching solutions, relational caches or caching database middleware are not too uncommon (see the 'transparent sharding middleware' section in this paper on NewSQL systems).
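
To make the memoization point concrete, here is a minimal sketch in Java (the names and the slowSquare example are ours, purely illustrative): a generic wrapper that caches the result of an expensive function per argument, so repeated calls with the same input hit the cache instead of recomputing.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class Memoizer {

    // Wraps a function so that each distinct argument is computed only once.
    public static <K, V> Function<K, V> memoize(Function<K, V> f) {
        Map<K, V> cache = new ConcurrentHashMap<>();
        return key -> cache.computeIfAbsent(key, f);
    }

    public static void main(String[] args) {
        Function<Integer, Integer> slowSquare = n -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) { } // simulate an expensive call
            return n * n;
        };
        Function<Integer, Integer> fastSquare = memoize(slowSquare);
        System.out.println(fastSquare.apply(12)); // slow: computed once
        System.out.println(fastSquare.apply(12)); // fast: served from the cache
    }
}

The same idea, scaled out across processes and backed by invalidation logic, is what a caching database middleware provides transparently.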

 

ScaleArc is database load balancing middleware with a long roster of customers and impressive features, including zero downtime and real-time monitoring. Of all these features, GS Lab's performance engineering team thought ScaleArc's transparent caching could be particularly helpful for improving the performance of products that use relational databases, helping them meet high scalability goals when combined with the other features. GS Lab engineering came up with a reproducible benchmark, as the proof of the pudding, to assess its promise.

 

Tools in the SysBench benchmark suite are widely used to measure the performance of various subsystems. In a recent case investigating suboptimal I/O performance, GS Lab used the SysBench I/O benchmark to find that an improper RAID level was being used for a relational database running on expensive hardware. Switching to the correct RAID level led to a big speedup without the need for end-to-end performance testing. SysBench is widely available and suitable as an independently reproducible benchmark. Therefore, we used the SysBench OLTP benchmark in this study to measure the performance of a MySQL NDB Cluster fronted by ScaleArc's ACID-compliant cache.

[Image: Test environment]

ScaleArc has published a similar benchmarking exercise by Percona on Percona's variant of MySQL. We used NDB Cluster since it is used by one of our customers and, as far as we know, no such study exists for NDB Cluster. Also, we evaluated caching only for the subset of the SysBench OLTP workload consisting of read-only queries (and skipped the read-write workload), to find an upper limit on the performance gains achievable through caching.
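
For reference, a typical invocation of the read-only OLTP workload looks roughly like this (sysbench 1.x syntax; the host, credentials, table counts and duration below are illustrative placeholders, not our actual test configuration, which is documented in the repository linked at the end):

# Prepare the test tables, then run the read-only OLTP workload against the database.
sysbench oltp_read_only --mysql-host=DB_HOST --mysql-user=USER --mysql-password=PASS \
        --mysql-db=sbtest --tables=8 --table-size=100000 prepare
sysbench oltp_read_only --mysql-host=DB_HOST --mysql-user=USER --mysql-password=PASS \
        --mysql-db=sbtest --tables=8 --table-size=100000 --threads=64 --time=300 run

Pointing --mysql-host at the ScaleArc endpoint instead of the database directly is what lets the cache sit transparently in the query path.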

[Chart: Average response time (ms)]

The results show a big improvement (up to 9x) in throughput of cached read-only queries and a great reduction in response times.

[Chart: Requests per second]

Though the speedup will not be as spectacular for typical OLTP workloads consisting of a mix of reads and writes (compared to analytical workloads with a high percentage of reads), the results are highly promising, given that systems can get a big performance boost with zero changes to the application code or database.

 

We are publishing the results of this study as a white paper. All the artifacts required to reproduce the exercise, including the test environment configuration, load generator code, supporting scripts, raw results and summary data, are available in this GitHub repository.



Technology evolution encompasses advances in sensor technologies, connectivity, analytics and cloud environments that will expand the impact of data on enterprise performance management and pose system integration challenges for most companies.

As industries transition from analog to digitalized PLCs and SCADA, they will have to leverage sensor-based data to optimize the control and design of their assets and processes - both in real time and over time - for faster decision making, as well as embed software in traditional industrial equipment.

Developing and deploying these systems securely and reliably represents one of the biggest challenges.

Going far beyond the current definition of networks, the most complicated and powerful network yet is now being built. In it, devices embedded in power lines, water lines, assembly lines, household appliances, industrial equipment and vehicles will increasingly communicate with one another without the need for any human involvement.

The reach of these integration capabilities will go far beyond infrastructure and manufacturing. Today, for example, clinicians diagnose health conditions through a lengthy assessment. But simply matching historical pathological and lifestyle patterns against live diagnostic collection systems provides a more accurate diagnostic approach to serious ailments, or an early-warning signal. To make the most of such opportunities, health-care companies must figure out how to integrate systems far beyond the hospital. Much like in-memory big data analysis, this presents the problem of collecting data closer to its source.

You may wonder: collecting and transmitting data from industrial machines and devices is not a new concept. Since the early 80s, data from industrial assets has been captured, stored, monitored and analysed to help improve key business outcomes. In this era of digitization, as industrial sensors and devices create hybrid data environments, systems integration will propagate more data from more locations, in more formats and from more systems than ever before. Data management and governance challenges that have pervaded operations for decades will now become a pressing reality. Strategies to manage the volume and variety of data need to be put in place now to harness the opportunity that IoT and Big Data promise.

Despite the above-stated challenges, some strategies incorporated into core operations can help increase the odds of success:

  • Multiple Protocols

As the number of sensors and devices grows, the increasing number of data acquisition 'protocols' creates a greater need for new 'interfaces' for device networking and integration within existing data ecosystems.

  • Data Variety

As devices and sensors are deployed to fill existing information gaps and operationalize assets outside traditional enterprise boundaries, centralized data management systems must be able to integrate disparate data types in order to create a unified view of operations and align it with business objectives.

  • New Data Silos

Systems built for a single purpose produce data silos that create barriers to using data for multiple purposes and by multiple stakeholders. Without foresight, connected-device solutions become the new silo, undermining the intent to construct architectures that incorporate connected devices into broader, interactive data ecosystems.

As discussed above, for more than 30 years industries across the globe have been leveraging sensor-based data to gain visibility into operations, support continuous improvement and optimize overall enterprise performance. As advances in technology make it cost-effective to deploy connected solutions, industries will need to develop a strategic approach to integrating sensor data with pre-existing data environments. These advancements will move us toward a seamless, extensible data ecosystem, one that requires cooperation between multiple vendors, partners and system integrators.



In testing, the test summary report is an important deliverable. It represents the quality of a product. As automation testing is mostly carried out in the absence of a human, I recommend that test results be presented in a good way.

An automation test report should be useful to people at all levels: automation experts, manual testers who are not familiar with the code, and high-level management.

 

In an ideal case, a test automation report should comprise the following:

  • Statistical data, such as the number of test cases passed, failed and skipped
  • The cause of each test failure
  • Evidence (like screenshots indicating success/failure conditions)

In addition to the above, if our test report includes the following, it will be impressive and useful:

  • Pass and fail percentages of tests
  • Test execution time for each test case and for the test suite
  • Test environment details
  • Representation of statistical data in the form of charts
  • Grouping of test cases by type, like functional, regression etc.

TestNG and JUnit do not provide good reporting capabilities out of the box, and the default TestNG reports are not attractive, so we have to develop customized reports.

I suggest using ExtentReport for automation test reporting; it is very effective and allows us to accomplish all of the above.

About ExtentReport:

It is an open-source test automation reporting API for Java and .NET developers. The report is generated in HTML form.

Following are some features of ExtentReport:

  • Easy to use
  • Results are displayed in the form of pie charts
  • Provides the percentage of passed test cases
  • Displays test execution time
  • Environment details can be added in an easy way
  • Screenshots can be attached to the report
  • Test reports can be filtered based on the test results (pass/fail/skip etc.)
  • Stepwise results like info/pass/fail can be filtered as well
  • Categorized reports for regression/functional etc. testing
  • Test step logs can be added
  • Can be used with JUnit/TestNG
  • Can be used as a listener for TestNG
  • Parallel runs are supported, so a single report can be created for the parallel runs
  • Configuration can be added to the report
  • Results from multiple runs can be combined into a single report

Downloading and installation:

Download the ExtentReport jar from http://extentreports.relevantcodes.com/index.html and add it as a dependency to your Java project.
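
Alternatively, if your project uses Maven, the library is also published on Maven Central under the com.relevantcodes group; a dependency along these lines should work (the version shown is illustrative - pick the latest 2.x release):

<dependency>
    <groupId>com.relevantcodes</groupId>
    <artifactId>extentreports</artifactId>
    <version>2.41.2</version> <!-- illustrative; use the latest 2.x release -->
</dependency>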

 

ExtentX:

ExtentX is a report server and project-wise test analysis dashboard for ExtentReports.

 

How ExtentReport works:

To see exactly how ExtentReport works, here is a simple example: one test case will pass and the other will fail.

 

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;
import com.relevantcodes.extentreports.ExtentReports;
import com.relevantcodes.extentreports.ExtentTest;
import com.relevantcodes.extentreports.LogStatus;

public class ExtentReportTest {

    private WebDriver driver;
    ExtentReports extent;
    ExtentTest test;
    StringBuffer verificationErrors = new StringBuffer();

    @BeforeTest
    public void testSetUp() {
        driver = new FirefoxDriver();
        extent = new ExtentReports(".\\TestAutomationReport.html", true); // initialize the report (true = replace any existing report)
        extent.addSystemInfo("Product Version", "3.0.0") // system or environment info shown on the report
              .addSystemInfo("Author", "Sachin Kadam");
    }

    @Test
    public void TC1() {
        test = extent
                .startTest("Test case 1", "Check the google home page title") // start test case
                .assignAuthor("Sachin Kadam")
                .assignCategory("Regression", "Functional");
        String appURL = "http://google.com";
        driver.get(appURL);
        test.log(LogStatus.INFO, "Navigating to URL : " + appURL); // log info
        customVerify(driver.getTitle(), "Google");
        extent.endTest(test); // end test case
        checkForErrors();
    }

    @Test
    public void TC2() {
        test = extent
                .startTest("Test case 2", "Check the wikipedia home page title") // start test case
                .assignCategory("Functional")
                .assignAuthor("Sachin Kadam");
        String appURL = "https://www.wikipedia.org";
        driver.get(appURL);
        test.log(LogStatus.INFO, "Navigating to URL : " + appURL); // log info
        customVerify(driver.getTitle(), "Google"); // incorrect expected title, to fail the test case
        extent.endTest(test); // end test case
        checkForErrors();
    }

    // Custom assertion method for string comparison: logs the result instead of aborting the test immediately
    public void customVerify(String actual, String expected) {
        try {
            Assert.assertEquals(actual, expected);
            test.log(LogStatus.PASS, "Expected title:" + expected + " :: Current title:" + actual); // log pass result
        } catch (AssertionError e) {
            // log fail result along with the error
            test.log(LogStatus.FAIL, "Expected title:" + expected + " :: Current title:" + actual + " :: " + e.toString());
            verificationErrors.append(e);
        }
    }

    @AfterTest
    public void tearDown() {
        driver.quit();
        extent.flush(); // write the accumulated results to the HTML report
    }

    // Reports the collected verification errors to TestNG so the test is marked as failed
    public void checkForErrors() {
        if (!"".equals(verificationErrors.toString())) {
            String errors = verificationErrors.toString();
            verificationErrors = new StringBuffer(); // reset before failing so the next test starts clean
            Assert.fail(errors);
        }
    }
}
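
A note on the design: customVerify deliberately catches the AssertionError instead of letting it propagate, so a failed comparison is logged to the ExtentReport without immediately aborting the test; checkForErrors then fails the test in TestNG at the end. This soft-assertion pattern lets a single test record several verifications in the report while still showing up as failed in the TestNG results.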

 

The generated HTML report finally looks like this:

[Screenshot: ExtentReport01]

[Screenshot: ExtentReport02]

[Screenshot: ExtentReport03]

I hope you will find ExtentReport very useful, easy to use, impressive and productive.

For more reference: http://extentreports.relevantcodes.com/index.html

 

- Sachin Kadam

 
