System integration testing

System integration testing (SIT) involves the overall testing of a complete system composed of many subsystem components or elements. The system under test may consist of hardware, software, hardware with embedded software, or hardware and software with human-in-the-loop testing.

SIT consists, initially, of the "process of assembling the constituent parts of a system in a logical, cost-effective way, comprehensively checking system execution (all nominal & exceptional paths), and including a full functional check-out."[1] Following integration, system test is a process of "verifying that the system meets its requirements, and validating that the system performs in accordance with the customer or user expectations."[1]

In technology product development, the beginning of system integration testing is often the first time that an entire system has been assembled such that it can be tested as a whole. To make system testing most productive, the many constituent assemblies and subsystems will typically have gone through subsystem testing that successfully verified each subsystem meets its requirements at the subsystem interface level.

In the context of software systems and software engineering, system integration testing is a testing process that exercises a software system's coexistence with others. With multiple integrated systems, assuming that each has already passed system testing,[2] SIT proceeds to test their required interactions. Following this, the deliverables are passed on to acceptance testing.

Software system integration testing

For software, SIT is part of the software testing life cycle for collaborative projects. Usually, a round of SIT precedes the user acceptance test (UAT) round. Software providers usually run a pre-SIT round of tests before consumers run their SIT test cases.

For example, if an integrator (company) is providing an enhancement to a customer's existing solution, then they integrate the new application layer and the new database layer with the customer's existing application and database layers. After the integration is complete, users use both the new part (extended part) and old part (pre-existing part) of the integrated application to update data. A process should exist to exchange data imports and exports between the two data layers. This data exchange process should keep both systems up-to-date. The purpose of system integration testing is to ensure all parts of these systems successfully co-exist and exchange data where necessary.

There may be more parties in the integration; for example, the primary customer (consumer) can have their own customers, and there may also be multiple providers.
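As a minimal sketch of such a data-exchange check, written in Python, one might compare the record keys held by the pre-existing data layer with those held by the newly added layer after the exchange process has run. The two fetch functions below are hypothetical stand-ins for whatever export and import interfaces the integrated solution actually exposes.

  # Hypothetical sketch: verify the data exchange keeps both data layers in step.
  def fetch_keys_from_existing_layer():
      # e.g. primary keys read from the customer's pre-existing database
      return {"CUST-001", "CUST-002", "CUST-003"}

  def fetch_keys_from_new_layer():
      # e.g. the same keys read from the new data layer after the exchange has run
      return {"CUST-001", "CUST-002"}

  existing = fetch_keys_from_existing_layer()
  new = fetch_keys_from_new_layer()

  print("Missing in the new layer:   ", existing - new)   # records the exchange dropped
  print("Unexpected in the new layer:", new - existing)   # records that should not be there yet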

Data-driven method

The data-driven method is a simple approach to SIT that can be performed with minimal use of software testing tools. Data imports and exports are exchanged, and then the behavior of each data field within each individual layer is investigated. Once the systems are integrated, the data can be examined in three main states, described below.

Data state within the integration layer

The integration layer can be middleware or a web service (or services) that acts as a medium for data imports and exports. The performance of data imports and exports can be checked with the following steps:[citation needed]

  1. Cross-check the data properties within the integration layer against the technical/business specification documents.
    • If a web service is involved in the integration layer, the WSDL and XSD can be checked against the web service requests (see the sketch below).
    • If middleware is involved in the integration layer, the data mappings can be checked against the middleware logs.
  2. Execute some unit tests. Cross-check the data mappings (data positions, declarations) and requests (character length, data types) against the technical specifications.
  3. Investigate the server/middleware logs for troubleshooting.

Reading knowledge of WSDL, XSD, DTD, XML, and EDI might be required for this.
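As an illustration of the web service cross-check in step 1, the following Python sketch validates a captured web service message against the XSD referenced by its WSDL, using the lxml library; the file names here are hypothetical.

  # Hypothetical sketch: validate a captured message against the schema from the specification.
  from lxml import etree

  schema = etree.XMLSchema(etree.parse("customer_order.xsd"))   # XSD referenced by the WSDL
  response = etree.parse("captured_response.xml")               # message captured during SIT

  if schema.validate(response):
      print("Message matches the declared schema.")
  else:
      # error_log lists each field whose type, length or structure disagrees
      for error in schema.error_log:
          print(f"Line {error.line}: {error.message}")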

Data state within the database layer

System integration testing of a database layer might proceed as follows:[citation needed]

  1. First, check whether all the data from the integration layer have been committed to the database layer (see the sketch after this list).
  2. Then check the data properties against the table and column properties defined in the technical/business specification documents.
  3. Check the data validations/constraints against the business specification documents.
  4. If any data is processed within the database layer, check the stored procedures against the relevant specifications.
  5. Investigate the server logs for troubleshooting.

Knowledge of SQL and reading knowledge of stored procedures might be required for this.[according to whom?]
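A minimal Python sketch of steps 1 and 3 might look like the following, using sqlite3 as a stand-in for whatever database driver the target system uses; the table and column names are hypothetical.

  # Hypothetical sketch: confirm committed records and one business rule in the database layer.
  import sqlite3

  expected_ids = {1001, 1002, 1003}   # keys taken from the integration-layer export

  conn = sqlite3.connect("target.db")
  cur = conn.cursor()

  # Step 1: every record handed over by the integration layer should have been committed.
  cur.execute("SELECT order_id FROM orders")
  committed_ids = {row[0] for row in cur.fetchall()}
  print("Missing records:", expected_ids - committed_ids or "none")

  # Step 3: a constraint from the business specification, e.g. every order carries a customer id.
  cur.execute("SELECT COUNT(*) FROM orders WHERE customer_id IS NULL")
  print("Orders violating the customer_id rule:", cur.fetchone()[0])

  conn.close()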

Data state within the application layer

There is relatively little to do within the application layer when performing system integration testing:[citation needed]

  1. Mark all the fields from the business requirement documents that should be visible in the UI.
  2. Create a data map from database fields to application fields and check whether the necessary fields are visible in the UI (see the sketch after this list).
  3. Check the data properties with positive and negative test cases.
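Steps 2 and 3 can be sketched in Python as a hand-written map from database columns to UI labels, checked against a captured snapshot of the rendered screen; both the field names and the ui_snapshot dictionary below are hypothetical.

  # Hypothetical sketch: check that mapped database fields surface in the UI with sane values.
  field_map = {
      "orders.order_id":    "Order number",
      "orders.order_date":  "Order date",
      "orders.customer_id": "Customer",
  }

  ui_snapshot = {                      # values copied from the rendered screen
      "Order number": "1001",
      "Order date":   "2016-03-15",
      "Customer":     "ACME-042",
  }

  # Step 2: every mapped database field should be visible in the UI.
  for column, label in field_map.items():
      assert label in ui_snapshot, f"{column} is not visible as '{label}'"

  # Step 3: one positive and one negative property check on the displayed data.
  assert ui_snapshot["Order number"].isdigit()   # positive: the order number renders as digits
  assert ui_snapshot["Customer"] != ""           # negative guard: the customer field is never blank
  print("All mapped fields are visible and the checked properties hold.")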

There are many possible combinations of data imports and exports that could be exercised, given the time period available for system integration testing. Testers must select the best combinations to perform within the limited time and repeat some of the above steps to test those combinations.

References

  1. ^ a b Houser, Pete (November 2011). "Best Practices for Systems Integration" (PDF). dtic.mil. Retrieved 15 March 2016.
  2. ^ What is System integration testing?
