Posted by on in Technology

Technology evolution encompasses advances in sensor technologies, connectivity, analytics and cloud environments. These advances will expand the impact of data on enterprise performance management and pose systems-integration challenges for most companies.

As industries transition from analog systems to digital PLCs and SCADA, they will have to leverage sensor-based data to optimize the control and design of their assets and processes, both in real time and over time, for faster decision making, as well as embed software in traditional industrial equipment.

Developing and deploying these systems securely and reliably represents one of the biggest challenges.

Going far beyond the current definition of networks, the most complicated and powerful network yet is now being built. In it, devices embedded in power lines, water lines, assembly lines, household appliances, industrial equipment and vehicles will increasingly communicate with one another without the need for any human involvement.

The reach of these integration capabilities will go far beyond infrastructure and manufacturing. Today, for example, clinicians diagnose health conditions through a lengthy assessment. But matching historical pathological and lifestyle patterns against live diagnostic data can provide a more accurate approach to diagnosing serious ailments, or an early-warning signal. To make the most of such opportunities, health-care companies must figure out how to integrate systems far beyond the hospital. Much like in-memory big-data analysis, this presents the problem of collecting data closer to its source.

You may point out that collecting and transmitting data from industrial machines and devices is not a new concept. Since the early 1980s, data from industrial assets has been captured, stored, monitored and analysed to help improve key business outcomes. In this era of digitization, as industrial sensors and devices create hybrid data environments, systems integration will propagate more data from more locations, in more formats and from more systems than ever before. Data management and governance challenges that have pervaded operations for decades will now become a pressing reality. Strategies to manage the volume and variety of data need to be put in place now to harness the opportunity that IoT and Big Data promise.

Despite the challenges stated above, addressing the following areas within core operations can help increase the odds of success:

  • Multiple Protocols

As the number of sensors and devices grows, the proliferation of data acquisition protocols creates a need for new interfaces for device networking and integration within existing data ecosystems.

  • Data Variety

As devices and sensors are deployed to fill existing information gaps and operationalize assets outside traditional enterprise boundaries, centralized data management systems must be able to integrate disparate data types in order to create a unified view of operations and align it with business objectives.
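
In practice, integrating disparate data types often comes down to normalizing each source into one common record before analysis. The sketch below is a minimal, hypothetical Java example; the record fields, payload formats and device names are invented for illustration and are not taken from any specific product:

```java
import java.util.Map;

public class UnifiedReadings {
    // A single normalized record type for readings from disparate sources.
    record Reading(String deviceId, String metric, double value, long timestampMillis) {}

    // Adapter for a hypothetical CSV-style payload: "deviceId,metric,value,timestamp"
    static Reading fromCsv(String line) {
        String[] parts = line.split(",");
        return new Reading(parts[0], parts[1],
                Double.parseDouble(parts[2]), Long.parseLong(parts[3]));
    }

    // Adapter for a hypothetical key-value payload, e.g. parsed from JSON
    static Reading fromMap(Map<String, String> fields) {
        return new Reading(fields.get("id"), fields.get("type"),
                Double.parseDouble(fields.get("val")), Long.parseLong(fields.get("ts")));
    }

    public static void main(String[] args) {
        Reading a = fromCsv("pump-7,temperature,71.5,1700000000000");
        Reading b = fromMap(Map.of("id", "pump-7", "type", "pressure",
                                   "val", "3.2", "ts", "1700000000500"));
        // Both sources now share one schema and can feed the same analytics.
        System.out.println(a);
        System.out.println(b);
    }
}
```

Each new protocol or format then needs only a small adapter into the shared schema, rather than its own end-to-end pipeline.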

  • New Data Silos

Systems built for a single purpose produce data silos that create barriers to using data for multiple purposes, by multiple stakeholders. Without foresight, connected-device solutions become the new silo, undermining the intent to construct architectures that incorporate connected devices into broader, interactive data ecosystems.

As discussed above, for more than 30 years industries across the globe have leveraged sensor-based data to gain visibility into operations, support continuous improvement and optimize overall enterprise performance. As advances in technology make it cost-effective to deploy connected solutions, industries will need a strategic approach for integrating sensor data with pre-existing data environments. These advances point toward a seamless, extensible data ecosystem, one that will require cooperation between multiple vendors, partners and system integrators.

Last modified on
Hits: 250
Rate this blog entry:

In testing, the test summary report is an important deliverable. It represents the quality of a product. As automation testing is mostly carried out in the absence of humans, I recommend that test results be presented well.

An automation test report should be useful to people at all levels: automation experts, manual testers who are not familiar with the code, and high-level management.


Ideally, a test automation report should comprise the following:

  • Statistical data like number of test cases passed, failed, skipped
  • Cause of test failure
  • Evidence (like screenshots indicating success/failure conditions)

In addition to the above, a test report that includes the following will be more impressive and useful:

  • Pass and fail percentage of tests
  • Test execution time for individual test case and a test suite
  • Test environment details
  • Representation of statistical data in the form of charts
  • Grouping of test cases as per the type like Functional, Regression etc.
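
To make the statistics above concrete, here is a minimal Java sketch of deriving summary figures from raw counts. The counts are invented, and the choice to exclude skipped tests from the pass-rate denominator is an illustrative design decision, not a standard:

```java
public class ReportStats {
    // Pass percentage from raw counts; skipped tests are excluded from
    // the denominator here (a design choice, not a standard).
    static double passPercentage(int passed, int failed) {
        int executed = passed + failed;
        return executed == 0 ? 0.0 : 100.0 * passed / executed;
    }

    public static void main(String[] args) {
        int passed = 45, failed = 5, skipped = 10;   // hypothetical run
        System.out.printf("Total: %d, Passed: %d, Failed: %d, Skipped: %d%n",
                passed + failed + skipped, passed, failed, skipped);
        System.out.printf("Pass rate: %.1f%%%n", passPercentage(passed, failed)); // Pass rate: 90.0%
    }
}
```

A reporting library computes figures like these for you; the point is only that they are cheap to derive, so there is no excuse for a report that omits them.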

TestNG and JUnit do not provide good reporting capabilities out of the box; the default TestNG reports are not attractive. So we have to develop customized reports.

I suggest using ExtentReport for automation test reporting, as it is more effective. This library allows us to accomplish all of the above.

About ExtentReport:

It is an open-source test automation reporting API for Java and .NET developers. The report is generated in HTML form.

Following are some features of ExtentReport:

  • Easy to use
  • Results are displayed in the form of pie charts
  • Provides the passed test case percentage
  • Displays test execution time
  • Environment details can be added easily
  • Screenshots can be attached to the report
  • Test reports can be filtered based on test results (pass/fail/skip etc.)
  • Step-wise results (info/pass/fail etc.) can be filtered as well
  • Reports can be categorized by test type (regression, functional etc.)
  • Test step logs can be added
  • Can be used with JUnit and TestNG
  • Can be used as a listener for TestNG
  • Supports parallel runs, so a single report can be created for a parallel run
  • Configuration details can be added to the report
  • Results from multiple runs can be combined into a single report

Downloading and installation:

Download the ExtentReport jar and add it as a dependency to your Java project.



ExtentX is a report server and project-wise test analysis dashboard for ExtentReports.


How ExtentReport works:

To see exactly how ExtentReport works, here is a simple example: one test case will pass and another will fail.


import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;

import com.relevantcodes.extentreports.ExtentReports;
import com.relevantcodes.extentreports.ExtentTest;
import com.relevantcodes.extentreports.LogStatus;

public class ExtentReportTest {

    private WebDriver driver;
    ExtentReports extent;
    ExtentTest test;
    StringBuffer verificationErrors = new StringBuffer();

    @BeforeTest
    public void testSetUp() {
        driver = new FirefoxDriver();
        extent = new ExtentReports(".\\TestAutomationReport.html", true);   // Report initializing
        extent.addSystemInfo("Product Version", "3.0.0")                    // System or environment info
              .addSystemInfo("Author", "Sachin Kadam");
    }

    @Test
    public void TC1() {
        test = extent
                .startTest("Test case 1", "Check the google home page title")   // Start test case
                .assignAuthor("Sachin Kadam")
                .assignCategory("Regression", "Functional");
        String appURL = "https://www.google.com";
        test.log(LogStatus.INFO, "Navigating to URL : " + appURL);   // Log info
        driver.get(appURL);
        customVerify(driver.getTitle(), "Google");
        extent.endTest(test);   // End test case
    }

    @Test
    public void TC2() {
        test = extent
                .startTest("Test case 2", "Check the wikipedia home page title")   // Start test case
                .assignAuthor("Sachin Kadam");
        String appURL = "https://www.wikipedia.org";
        test.log(LogStatus.INFO, "Navigating to URL : " + appURL);   // Log info
        driver.get(appURL);
        customVerify(driver.getTitle(), "Google");   // Incorrect expected title to fail the test case
        extent.endTest(test);   // End test case
    }

    // Custom assertion method for string comparison
    public void customVerify(String actual, String expected) {
        try {
            Assert.assertEquals(actual, expected);
            // Log pass result
            test.log(LogStatus.PASS, "Expected title:" + expected + " :: Current title:" + actual);
        } catch (Error e) {
            // Log fail result along with the error
            test.log(LogStatus.FAIL, "Expected title:" + expected + " :: Current title:" + actual + " :: " + e.toString());
            verificationErrors.append(e.toString());
        }
    }

    @AfterTest
    public void tearDown() {
        checkForErrors();
        extent.flush();   // Write the results to the HTML report
        driver.quit();
    }

    // Method for logging correct results to TestNG report in case of failure
    public void checkForErrors() {
        if (verificationErrors.length() > 0) {
            Assert.fail(verificationErrors.toString());
        }
        verificationErrors = new StringBuffer();
    }
}

Finally, the generated HTML report looks like this:







I hope you will find ExtentReport very useful, easy to use, impressive and productive.


- Sachin Kadam



The recent massive distributed denial of service (DDoS) attack on 21st October 2016 affected numerous cloud service providers (Amazon, Twitter, GitHub, Netflix, etc.). It is interesting to note that this attack leveraged hundreds of thousands of internet connected consumer devices (aka IOT devices) which were infected with malware called Mirai. Who would have suspected that the attackers involved were essentially consumer devices such as cameras and DVRs?

A Chinese electronics component manufacturer (Hangzhou Xiongmai Technology) admitted that its hacked products were behind the attack (reference: ComputerWorld). Our observation is that security vulnerabilities involving weak default passwords in the vendor’s products were partly to blame. These vulnerable devices were first infected with the Mirai malware, and the infected devices then launched an assault to disrupt access to popular websites by flooding Dyn, a DNS service provider, with an overwhelming amount of internet traffic. The Mirai botnet is capable of launching multiple types of DDoS attacks, including TCP SYN flooding, UDP flooding, DNS attacks, etc. Dyn mentioned in a statement, “we observed 10s of millions of discrete IP addresses associated with the Mirai botnet that were part of the attack”; such was the sheer volume of an attack that leveraged millions of existing IOT devices.

Subsequently, Xiongmai shared that it had already patched the flaws in its products in September 2015 by ensuring that customers must change the default username and password on first use. However, products running older versions of the firmware are still vulnerable.

This attack reveals several fundamental problems with IOT devices in the way things stand today:

  • Default username and passwords
  • Easily hackable customer-chosen easy-to-remember (read as “weak”) passwords
  • Challenges with over-the-air (OTA) updates etc.

The first two problems are age-old issues, and it is surprising to see them come up with newer technologies involving IOT devices as well. Vendors have still not moved away from the traditional technique of default usernames and passwords, nor have customers adopted strong passwords. It is probably time we simply accept that the latter will not happen and remove the onus on the customer to set strong passwords (it is just not going to happen!).

One-time passwords (OTP) can be quite helpful here. A one-time password, as the name suggests, is valid for only one login session. It is a system-generated password and is essentially not vulnerable to replay attacks. There are two relevant standards for OTP: HOTP (HMAC-based One-Time Password) and TOTP (Time-based One-Time Password). Both standards require a shared secret between the device and the authentication system, along with a moving factor, which is either counter-based (HOTP) or time-based (TOTP).
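
To make the mechanics concrete, here is a minimal Java sketch of the HOTP computation from RFC 4226, with TOTP (RFC 6238) reduced to HOTP over a time-derived counter. This is purely illustrative, not GS Lab's implementation; the class and method names are invented:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class Hotp {
    // HOTP (RFC 4226): HMAC-SHA1 over an 8-byte big-endian counter,
    // followed by dynamic truncation to a short decimal code.
    public static int hotp(byte[] secret, long counter, int digits) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());
        int offset = hash[hash.length - 1] & 0x0F;          // dynamic truncation offset
        int binary = ((hash[offset] & 0x7F) << 24)
                   | ((hash[offset + 1] & 0xFF) << 16)
                   | ((hash[offset + 2] & 0xFF) << 8)
                   |  (hash[offset + 3] & 0xFF);
        return binary % (int) Math.pow(10, digits);         // keep the last `digits` digits
    }

    // TOTP (RFC 6238) is HOTP with the counter derived from the current time.
    public static int totp(byte[] secret, long unixSeconds, int stepSeconds, int digits) throws Exception {
        return hotp(secret, unixSeconds / stepSeconds, digits);
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "12345678901234567890".getBytes(StandardCharsets.US_ASCII); // RFC 4226 test secret
        System.out.println(hotp(secret, 0, 6)); // RFC 4226 Appendix D test vector: 755224
    }
}
```

Because both sides derive the same code from the shared secret and the moving factor, the device never transmits a static password, and a captured code is useless for replay once the counter or time window moves on.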

GS Lab’s OTP-based device authentication system presents a novel approach that helps address the challenges faced by IOT device manufacturers today. It provides an unstructured device registry, flexible enough to include information on various types of devices, and an authentication sub-system that authenticates the IOT devices tracked in the device registry via OTP. The authentication sub-system is built on top of the existing OTP standards (HOTP and TOTP) and helps alleviate the need for static (presumably weak) passwords in IOT devices. It supports the MQTT and REST protocols, which are quite prevalent in the IOT space; support for additional protocols (such as CoAP) is planned and in the works. The OTP-based device authentication system is built on top of our open source OTP Manager library.

Here are some of the advantages of using GS Lab’s OTP-based device authentication system:

  • Strong passwords – system generated based on shared secret key
  • Not vulnerable to replay attacks – passwords are for one-time use only
  • Freedom from static user-defined passwords
  • Standards based solution – HOTP and TOTP standards
  • Relevant for resource-constrained devices – the crypto algorithms used by the HOTP and TOTP standards work even on devices with limited CPU and memory
  • Ability to identify malicious devices – rogue devices can be identified using HOTP counter value
  • Provides device registry for simplified management





The customer provides a complete suite of event and video management solutions using a cloud server. This server enables client devices (mobile, web) to configure, control and view media from the enabled cloud cameras. The server hosts a web application, which functions as the intermediary for communication and authentication between the client and the camera.


The GS Lab engagement involved feature development, QA, DevOps and test automation development. The test automation team developed functional and performance test suites to test the product.

Field Requirement

  • The customer wanted a test framework that could simulate the event/video surveillance scenarios of different end customers.

  • The test framework should verify the timely delivery of audio/video events to the event-tracking web portal/mobile app.

  • The test framework should benchmark the various internal cloud servers in the audio/video surveillance solution/product.

Solution Provided by GS Lab




GS Lab has developed an audio/video surveillance test framework (tool) using Python and Selenium. Following are the major features provided by this framework:

  • The test framework can test 500 live video and audio streams across 500 cameras (one audio/video stream per camera); this is a customer product limitation for live streaming.

  • The test framework can test the video surveillance controlling app (Android as well as iOS) and web portal (across Chrome, Firefox, Internet Explorer and Safari).

  • The test framework can start and stop the live camera video/audio stream on the fly.

  • The framework can test the operation-specific notifications and logs across the different servers in the surveillance solution, certifying successful completion of the operation.

  • The test framework supports:

    1. Complete functional, regression & performance testing of all the event/video management scenarios

    2. On the fly addition/deletion of audio/video stream in the surveillance solution

    3. Testing of 24/7 recording of live streams stored on Amazon S3 cloud storage

    4. Testing the notifications for any (audio / video) camera event

    5. Checking the timely delivery of the events to the portal or mobile apps (Android & iOS)


Value Addition

Following are the major benefits of the test framework for an audio video surveillance product:

  • The real-world scenarios of end customers can be simulated using this framework.

  • It supports Continuous Integration (CI) with all available open source tools.

  • The framework can save 60% of the QA team's bandwidth in every production release.

  • After regression testing is complete, the framework can readily be reused for performance testing with minimal changes.

  • It can benchmark the different servers involved in the video surveillance solution.

  • The framework helped the development team identify performance issues related to crucial parameters (CPU, memory, etc.) of the backend servers in the surveillance solution.



On a mundane February afternoon, as I headed for lunch, I remember getting a phone call from within my company, and with it an opportunity to participate in an IoT training program! Little did I know that the training sessions were to be online, live and interactive, but early in the morning. I'm not a morning person and was a little hesitant, but somehow 'curious me' prevailed over 'hesitant me' and I subscribed. Having heard quite a bit about the Internet of Things (IoT), I wanted to get a taste of it, and this training program presented that opportunity. It was not only about learning, but also about getting our hands dirty to build something!

Right after the introductory session, it was clear that we could reap the benefits much better if we participated as a team. So we formed a team of developers with experience in different areas such as UI, server side, native applications and hardware devices. From then on, we embarked on a journey to learn what it means, and what it takes, to build an IoT project using an IoT platform. What follows is an account of our experiences.

Learning an IoT platform
This was as good as it could get. We got to learn an IoT platform, the Atomiton domain language (TQL, that is), and ways to integrate with hardware devices, sensors and actuators. A well-organized set of sessions took us on a tour of the platform and how to use it. The course covered advanced features like clustering and macros, which made it even more 'pragmatic'.

Hands-on is the key, and you get to do plenty of it
One of the best parts of this program is that you get plenty of hands-on work. In fact, you are kind of forced to get your hands dirty; it's not without reason that the philosophy of 'learning by doing' exists! We played a lot with the Raspberry Pi, the Arduino Uno, sensors, actuators and, of course, the TQL system itself. This rendezvous did present us with its fair share of issues, but it was all worth it.

Technically enriching discussions
One of my reasons for subscribing to this training program was to hear about the IoT platform directly from its creators. That is a big deal! It was evident from the interactions we and the community had during as well as after the sessions, e.g. why a particular feature is implemented in a certain way, why certain things are restricted on the platform, etc. This helped participants, especially developers and architects, learn what goes into the making of an IoT platform.

Vibrant support forum
When you open the Slack web app for the TQL team, you are greeted with a random but nice message. One of the Slack messages that struck a chord with me instantly was: "We're all in this together." This message sums up the kind of support the Atomiton folks are committed to providing. Questions are answered in depth and in minute detail, with the reasoning explained as well as available alternatives or workarounds.

Mutually rewarding community
As the participants are required to build projects, they naturally get to showcase them to the community. This helps everyone understand how the platform can be put to use to solve real-life problems, how others in the community are using it in innovative and creative ways, and, in a much larger context, what IoT is all about.

When you are doing something over and above your regular work, you need high levels of commitment, and you also need a great deal of motivation! There was enough of it, at the right times, to keep us going. And it rightly came with tips and suggestions for improvement.

Improvement areas: what could be done to make this even better?

Developer is king!
The developer is king, and he needs to be pampered. ;) The more developer-friendly features in the TQL studio, the better. Hover-for-help messages, auto-completion, and templates at one's fingertips (for queries, macros, usage of JavaScript, inline comments) are some of our suggestions to enhance the TQL studio experience.

Auto-generation of basic queries from models
This would save some work for the developer and also serve as a guide for writing custom or complex queries. I would go a step further and suggest auto-generation of code for the UI: to access data over WebSockets as well as over HTTP.

Highlight security aspects
Make this a must in the training program; let it be a differentiator. The following aspects are worth a thought:

    • Can h/w devices be given fingerprints (unique identities)?
    • If a web app is being served using app-attachment feature, then how to expose it over https?
    • How to invoke an external service over https?
    • Security in built-in protocol handlers

Hardware bottlenecks

One of the observations our team made after completing the final project was: working with 'things' is not the same as working with pure software! We then asked ourselves what would make working with 'things' easier. We realized that knowledge of setting the hardware up, and of integrating with it, would do so. Our suggestion is to make this child's play; crowd-sourcing could well be utilized here. Making it easy and simple would let participants focus more on the project and on utilizing the TQL system's features in full glory. Items to focus on here: Raspberry Pi network connectivity (mainly a list of FAQs covering the many different ways to set it up), and basic sensors and their connections to the Arduino Uno and/or Raspberry Pi.

Going a step further, it would be great to share notes comparing off-the-shelf hardware vs. specialized high-end hardware, e.g. Raspberry Pi vs. Libelium. Can the Raspberry Pi be used in a production environment?

Session prerequisites
It would help if the prerequisites were mentioned for each session, and the content for those prerequisites made available. For example, right from the first session, participants need an understanding of the Raspberry Pi and Arduino Uno. If they have already gone through that material, the first session becomes a hello-world purely for the TQL system, rather than a hello-world for all the hardware devices and then the TQL system.


Tagged in: IoT TQL