VP Products & Center Head, Progress Software
Ramesh is a VP (Products) and Center Head of Progress Software in India.
In complex software, end solutions (projects) or software products, there is typically an elaborate sequence to be followed for writing tests, running tests, running pre-release regression suites and more. These are by and large top-down, with dedicated test machines that have all underlying dependent software configurations pre-set up: say, a specific version of the OS, DBMS servers, app servers and web servers. Then there are the tests, built on a test framework or harness. Tests live in a repository; the harness runs the chosen tests, collates results and generates the reports, often with the ability to drill down on failures by inspecting logs and other tracing and debugging information made available by the software being tested. In this setup there is typically little flexibility in what tests are run, especially when it comes to continuous integration and nightly/regression suites. Creating new plans on the fly is rarely doable; it will involve some plan setup files at the least, and modifying scripts and more if using not-so-well-engineered harnesses.
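The traditional flow above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any particular product's harness: tests sit in a repository, the harness runs a chosen subset, collates pass/fail results, and produces a summary report.

```python
# Minimal sketch of a traditional test harness. All names here
# (TEST_REPOSITORY, run_suite, generate_report) are illustrative.

def test_login_ok():
    assert 1 + 1 == 2          # stand-in for a real product test

def test_report_gen():
    assert "ok".upper() == "OK"

# The "repository" of tests the harness can choose from.
TEST_REPOSITORY = {
    "login": test_login_ok,
    "report": test_report_gen,
}

def run_suite(selected):
    """Run the chosen tests and collate pass/fail results."""
    results = {}
    for name in selected:
        try:
            TEST_REPOSITORY[name]()
            results[name] = "PASS"
        except AssertionError:
            results[name] = "FAIL"
    return results

def generate_report(results):
    """Summarize collated results into a one-line report."""
    passed = sum(1 for r in results.values() if r == "PASS")
    return f"{passed}/{len(results)} tests passed"

results = run_suite(["login", "report"])
print(generate_report(results))
```

Real harnesses add drill-down into logs and per-failure artifacts, but the run/collate/report skeleton is the same.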
Now enter cloud. As software engineering groups explore, the first step is often a tentative one: trying out cloud infrastructure using private clouds. Basically, the hardware and OS are virtualized. Instead of physical machines, dedicated virtual machines (VMs) are used. The good thing about this is that very little needs to change to get it working. The same test environment that ran on dedicated physical test servers can easily be set up on virtual servers on platforms like VMware or Citrix. Essentially: get a virtual machine, set up all needed platforms and configurations on it, then set up the test harness and enable access to the test repository. With this we should be all set to use the virtual machine just as we would a dedicated machine. The only difference is that this is not a physical machine, and from one server we may have carved out multiple such virtual machines, as needed for testing different parts of (or different platforms for) the software being tested.
Just using a virtual instance barely scratches the surface of the value possible from clouds. The more leveraged use case is where the whole setup happens dynamically: no dedicated virtual machines. Instead, keep a machine image of the complete setup: OS, patches, platforms, app/web/DB servers, harness, access to the repository, et al. Whenever needed, a machine is provisioned, the image is loaded and booted up, and the complete test setup is up and running dynamically. Get the needed build, run the needed tests, save the results/logs/reports and release the virtual machine. Now the virtualization (private cloud) infrastructure is much better utilized, as no VMs are set up, started, and left sitting idle. A VM is started just in time, when some tests need to be run. These could be sanity tests needed as part of continuous build integration, nightly runs, or any other ad hoc testing.
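The provision-run-release lifecycle described above can be sketched as follows. This is a hedged illustration: `CloudAPI` and its methods are hypothetical stand-ins for a private-cloud provisioning API, not a real SDK, and the test run is simulated.

```python
# Hypothetical sketch of just-in-time VM usage: boot a saved machine
# image, run the tests, save artifacts, and release the VM so the
# private cloud never holds idle machines.

class CloudAPI:
    """Stand-in for a private-cloud provisioning API (illustrative)."""
    def __init__(self):
        self.active = set()

    def provision(self, image_id):
        """Boot a VM from a complete machine image."""
        vm_id = f"vm-{len(self.active) + 1}"
        self.active.add(vm_id)
        return vm_id

    def release(self, vm_id):
        """Return the VM's capacity to the pool."""
        self.active.discard(vm_id)

def run_on_demand(cloud, image_id, build, tests):
    vm = cloud.provision(image_id)       # just-in-time VM from full image
    try:
        # Fetch the build and run the tests on the VM (simulated here);
        # results/logs/reports would be saved off-VM before release.
        return {t: "PASS" for t in tests}
    finally:
        cloud.release(vm)                # VM never sits idle afterwards

cloud = CloudAPI()
out = run_on_demand(cloud, "img-full-stack", "build-123", ["sanity1", "sanity2"])
print(out, "active VMs:", len(cloud.active))
```

The `try/finally` is the important part of the sketch: the machine is released even if a test run blows up, which is what keeps utilization high.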
While we have gained a step up in value by using on-demand provisioning of machines, the setup is still fairly rigid and static. For every platform/environment combination and configuration, a dedicated machine image is needed, and testing a new combination needs either manual setup or yet another image to be created and saved. Taking the flexibility and leverage a step further, then, would be on-demand testing, where even the configurations (OS, DB, app/web server et al) are "assembled" as needed. The test plan or suite to be run determines what is needed, and a cloud-aware super-harness starts from a plain vanilla OS image and then sets up all the additional platform configurations. This needs a new wrapper harness that does the setup before running the actual tests that the software's own harness enables. It gives a lot of flexibility in solutions that have many platforms and combinations to test under: a common requirement for product companies.
True virtualization, though, is a step further, where not only is the platform configuration (OS, app server, web server, DB et al) stitched together on the fly, but more importantly it can be dynamically determined by the tests being run. Each test or suite carries additional metadata describing everything the test needs, and which of those needs are open-ended (say, it needs a DB, but any relational DB can be picked). Now when a tester needs to run a set of tests, a cloud-aware super-harness takes the list of tests given, looks at the metadata, works out what machines and configurations are needed, provisions and sets them up, runs the tests, collates results across machines, and releases all the machines.
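One way the metadata resolution step could work is sketched below. The metadata schema (`needs` for fixed requirements, `any_of` for open-ended ones like "any relational DB") and all component names are assumptions made for illustration.

```python
# Sketch of metadata-driven configuration: each test declares fixed
# needs and open constraints, and the super-harness resolves them to
# concrete components before provisioning. Schema and names are
# hypothetical.

# Components the cloud can set up, grouped by capability.
AVAILABLE = {
    "relational-db": ["postgres-15", "mysql-8"],
    "app-server":    ["tomcat-9"],
}

# Per-test metadata: "needs" are fixed, "any_of" means any member of
# the category will do (e.g. any relational DB).
TEST_METADATA = {
    "orders_test": {"needs": ["app-server"], "any_of": ["relational-db"]},
    "report_test": {"needs": [],             "any_of": ["relational-db"]},
}

def resolve_config(test_name):
    """Turn a test's metadata into a concrete machine configuration."""
    meta = TEST_METADATA[test_name]
    config = []
    for need in meta["needs"]:
        config.append(AVAILABLE[need][0])      # fixed requirement
    for category in meta["any_of"]:
        config.append(AVAILABLE[category][0])  # harness is free to pick
    return config

print(resolve_config("orders_test"))
```

A smarter harness could exploit the `any_of` freedom, for instance picking whichever DB already has a warm image, or rotating picks to spread coverage across the category.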
Now this enables much more than just better utilizing virtual infrastructure: implicitly, it has enabled parallel testing. Since each test describes the configuration it needs, and the wrapper harness can provision, set up and run as needed, the runs can be parallelized, maybe even massively, with each test run on a virtual machine by itself. This very steeply reduces the time it takes to run a large test plan or suite. More on this in future articles.
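The implicit parallelism can be sketched as below. Since each test is self-describing, nothing forces the runs to share a machine; here a thread pool stands in for a fleet of per-test VMs, and the provision/run/release steps are simulated.

```python
# Sketch of one-VM-per-test parallelism. Threads simulate independent
# VMs; a real super-harness would provision actual machines and
# collate logs and reports across them.

from concurrent.futures import ThreadPoolExecutor

def run_test_on_own_vm(test_name):
    """Provision a VM, run a single test, release the VM (simulated)."""
    vm = f"vm-for-{test_name}"     # one just-in-time VM per test
    result = (test_name, "PASS")   # simulated test execution on that VM
    return result                  # VM would be released here

tests = ["t1", "t2", "t3", "t4"]
with ThreadPoolExecutor(max_workers=len(tests)) as pool:
    # Collate results across "machines" as they complete.
    results = dict(pool.map(run_test_on_own_vm, tests))
print(results)
```

With enough capacity, wall-clock time for the whole suite approaches the duration of its slowest test plus provisioning overhead, rather than the sum of all test durations.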
Experts on QA