
Part IV: Utilizing a layered testing approach is key to building an IoT platform

July 13, 2016

This post is Part IV of the “IoT… are we all doing it wrong?” series. Check out Part I, Part II, and Part III, then come back next week for Part V, a post about appropriately scaling your IoT projects for success.

 

“Every large system that works started as a small system that worked.” – Anonymous

IoT product teams have a vast array of technologies at their disposal to create the “next big thing”. These systems are complex, with many layers to their architectures. The solutions are made up of edge devices composed of hardware and firmware. These edge devices connect to a cloud platform, either directly or via gateways. Often a mobile application is in the mix, connecting to the edge device directly over Bluetooth as well as through the cloud. The cloud architecture combines technologies for device and mobile connectivity, device management, authentication, web applications, and data analytics. As the diagram below shows, these systems can have numerous components and interaction models.

[Figure: IoT system architecture diagram — edge devices, gateways, mobile apps, and cloud services]

The expectation is that a small team of individuals with a broad set of skills will build, deploy, and maintain these globally deployed IoT systems over a period of years. Consumers’ expectations of systems that are both physical and digital are higher as well.

So how do we build and maintain a much more complex product in an environment that is much more demanding, with fewer resources?

Start with a test.

Yes, that is right.

Take a layered testing approach

It will seem like additional work that takes velocity away from actual code production, but you will thank me soon.

Start with test-driven development. Turn the requirements into actual tests that can be run via a test harness, then write the code that passes those tests. As the code evolves, keep the existing tests, write new tests, and then modify the code. Teams will find the code easier to maintain and validate as time goes by.
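As a minimal sketch of that red/green cycle, imagine a requirement that raw edge-device readings (in tenths of a degree Celsius) must be converted to degrees. The payload shape and function name here are hypothetical, chosen only for illustration; the test is written first, then the code that makes it pass:

```python
# Requirement turned into a test first (TDD): a raw value of 215
# must come back as 21.5 degrees Celsius.
def test_parse_temperature():
    assert parse_temperature({"temp_raw": 215}) == 21.5
    assert parse_temperature({"temp_raw": -40}) == -4.0


# The code written afterward, just enough to pass the test.
def parse_temperature(payload: dict) -> float:
    """Convert a raw edge-device reading (tenths of a degree C) to Celsius."""
    return payload["temp_raw"] / 10.0


test_parse_temperature()
```

When the requirement changes later, the existing test is kept, a new one is added, and only then is the function modified.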

Testing will take many forms. Create unit tests for your smallest unit of code. Things get interesting in the IoT world when teams take it to the next level and develop integration tests: code that validates how the logical systems or layers in your architecture interact with each other. Recalling the diagram above, how do teams develop integration tests for all of these software and hardware components? That is where things get a bit complex. Different strategies can be employed depending on the layer the team needs to test.

The edge device firmware can be tested via a hardware-in-the-loop simulation. Software device emulators can be developed so that cloud and mobile software teams can build their systems before edge devices are available. These emulators can also be utilized for integration tests for validation. 
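A software device emulator can be as simple as a class that emits readings in the same message shape the real firmware would publish. The JSON fields below are assumptions for illustration, not a real device protocol, but the idea is that cloud- and mobile-side code can be built and integration-tested against the emulator long before hardware exists:

```python
import json
import random
import time


class EmulatedSensor:
    """Software stand-in for an edge device. Emits telemetry in the
    same (hypothetical) JSON shape the real firmware would publish."""

    def __init__(self, device_id: str):
        self.device_id = device_id

    def read(self) -> str:
        # Fake a plausible reading; a real emulator might replay
        # recorded device traces instead of random values.
        return json.dumps({
            "device_id": self.device_id,
            "temp_raw": random.randint(180, 260),  # tenths of a degree C
            "ts": time.time(),
        })


# Cloud-side ingestion code can be exercised against the emulator
# exactly as it would be against real hardware.
emulator = EmulatedSensor("dev-001")
reading = json.loads(emulator.read())
```

The same emulator then does double duty in automated integration tests, where real hardware would be slow, scarce, or flaky.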

Most modern IoT cloud platforms allow teams to script out their complete environments. Integration tests can be built so that environments and states required to run the tests can be deployed at the time of the test and deconstructed after the tests are complete. 

Having a good foundation of unit tests and integration tests takes the burden off of system testing. This coverage will find issues before they make it into QA, reducing the bug-fix churn that can occur during the latter portions of a release cycle. Having these mechanisms in place, however, does not replace the need for good system testing, which can take several forms: ad-hoc testing, scripted UI testing, performance testing, regression testing, and other techniques.

So, when should teams test?  

Teams need to test often, or at least every time they want to commit code. The sooner a bug is found, the cheaper it is to fix: an issue a developer finds and fixes before a commit costs far less than one an end user finds in production. Unit tests and integration tests should be runnable by developers against uncommitted code.

Continuous build systems should also be used so that a team’s development branch can be validated at each commit. This automated system performs the builds, runs the unit and integration tests, and deploys the newly committed codebase to a common development environment.

How do you deal with multiple mobile platforms?

Mobile applications present new challenges to developers. Developers are often expected to target both the Android and iOS platforms, and to support multiple form factors, hardware revisions, and OS revisions. The testing matrix in these scenarios is nightmarish at best. Fortunately, there are options to address these issues. Xamarin’s Test Cloud allows development teams to automatically deploy their mobile apps to an almost endless combination of mobile platforms and run automated system tests, uncovering issues across a broad range of mobile configurations.

Why is it important to set up testing procedures from the very beginning of a project?

Developing testing approaches, processes, and automation is key to reliably developing and maintaining IoT solutions. Test coverage acts as a force multiplier and mitigates the growing risk of code and system complexity.

One can imagine a scenario where a system has been in production for over a year. The team is informed that a third-party API its cloud business logic relies on is changing. The developer gets the latest version of the code and is a bit uneasy about making the change; she hasn’t worked with the section of code that requires it. However, she looks at the code, the change seems simple enough, and she determines with analysis that it is probably a five-minute change.

Just before committing the change, the developer dutifully runs the unit and integration tests locally. The process runs for about two minutes. As the progress bar advances, each of the unit tests comes back green, then the process moves on to the integration tests. The first few are green, but suddenly a red indicator shows a failure. The developer jumps into action and discovers that the change has to be implemented slightly differently to ensure the system as a whole runs reliably. After a two-minute tweak, the code passes the unit tests, the integration tests, and a shakedown test, and is finally committed. Having the testing infrastructure in place just saved everyone a ton of effort and heartache.

 

Aaron Kamphuis

Principal – Data Analytics & IoT

Aaron Kamphuis has spent 20+ years in data analytics, application development, cloud architecture, and software testing, with a background leading development teams in the use of cutting-edge technologies to satisfy unique business and end-user requirements.

Aaron spent several years with Sagestone Consulting Inc., where he and his partners built a 65+ person, multi-million dollar application development services organization.

At OST, Aaron and his teammates have worked with clients to build global-scale IoT solutions and data analytics solutions for packaged software, for companies that range from large multi-national enterprises to small businesses.

