Open Source Blog: Another Blog of Mine

It's finally started! It had been sitting at the back of my mind ever since I created RapidBlog, and now it is finally here – Open Source Blog. My recent entry OPEN SOURCE AND OUTSOURCING describes some points on open source and the outsourcing world, and how they are drawing closer together with every passing day of the internet.

Ah! Open Source.

UPDATE (June 05, 2004): This blog was later merged with the existing blog. Unfortunately, due to a sudden site crash, I was able to save only a few write-ups. You can find those in this blog under the ‘Open Source’ subject.

© Manoj Khanna 2003 – 2012.



Workflow Systems


Workflow systems can be described according to the type of process they are designed to deal with. Thus we define three types of workflow systems:

Image-based Workflow Systems are designed to automate the flow of paper through an organization by transferring the paper to digital “images”. These were the first workflow systems to gain wide acceptance. They are closely associated with “imaging” technology and emphasize the routing and processing of digitized images.

Workflow was initially closely associated with imaging, where workflow software helped to automate image routing. In a typical scenario, incoming mail (consisting of forms to be processed) is digitized and stored on optical discs. The workflow software manages queues of pending documents, automatically balancing the workloads of the individual workers who process the incoming forms.
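As a rough illustration (my own sketch, not from the original write-up), here is a minimal Python example of the load-balancing idea: each newly digitized document is assigned to whichever worker currently has the fewest pending items. The worker and document names are made up.

```python
import heapq

# Illustrative sketch only: balance newly digitized documents across the
# workers processing incoming forms by always picking the least-loaded queue.
workers = [(0, "worker-a"), (0, "worker-b"), (0, "worker-c")]  # (pending items, id)
heapq.heapify(workers)

incoming = [f"scan-{i:04d}.tif" for i in range(10)]  # digitized mail (made-up names)

assignments = {}
for doc in incoming:
    pending, worker = heapq.heappop(workers)          # least-loaded worker
    assignments.setdefault(worker, []).append(doc)
    heapq.heappush(workers, (pending + 1, worker))    # its queue grew by one

for worker, docs in sorted(assignments.items()):
    print(worker, docs)
```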

A case study: automating correspondence responses to US X Telecom’s customers.

X Telecom receives correspondence from 80,000 to 100,000 of its phone customers each month. Prior to implementing a workflow system, it took 20 to 30 days to respond to a letter. After the implementation of the system, the response time was cut to 5 days, with a 50% reduction in staff and productivity improvements of 70%. X Telecom not only digitized and routed the incoming correspondence, but also accessed networked databases and presented the relevant CRM information alongside the image. In some cases the workflow system can place the appropriate information in a form letter with no operator intervention.

Form-based Workflow Systems are designed to intelligently route forms throughout an organization. These forms, unlike images, are text-based and consist of editable fields. Forms are automatically routed according to the information entered on the form. In addition, these form-based systems can notify or remind people when action is due. This can provide a higher level of capability than image-based workflow systems.

Form-based workflow takes image-based workflow one step further. Rather than simply routing images to workers, forms are routed. Since forms contain data that is accessible to the workflow system, conditional decisions can be made automatically by the workflow system. Thus routine forms might have much of their data filled in automatically, while exceptions could have complex rules for their processing.
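For illustration only, here is a small Python sketch of such conditional routing; the field names, thresholds, and route names are hypothetical, not taken from any particular product.

```python
# Hypothetical routing rules for a form-based workflow system: the route is
# decided from the editable fields on the form, not from an opaque image.
def route(form: dict) -> str:
    if form.get("amount", 0) > 10_000:
        return "manager-approval"                       # exception: extra approval step
    if form.get("customer_type") == "existing":
        form.setdefault("billing_address", "on file")   # routine data pre-filled
        return "automatic-processing"
    return "clerk-review"                               # everything else goes to a person

print(route({"amount": 250, "customer_type": "existing"}))   # automatic-processing
print(route({"amount": 25_000, "customer_type": "new"}))     # manager-approval
```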

The following is an example of how this form processing is done, along with a comparison to imaging systems:

(Diogo Teixeira and Jeff Thompson, 1993)

Banks are a natural application area for image-based workflow systems, since they still process enormous amounts of paper. This article gives an overview of how banks are starting to use workflow software for the routing and control of documents in the form of images. The workflow systems described are concerned with the very high volume of clerical work in banks that is not routine enough to be processed automatically but falls into clearly definable categories, so that there is a controllable number of cases, outputs, and options. It is clear that banks are starting to see the advantage of transitioning to form-based workflow as well.

The authors point out that it is almost impossible to make use of imaged documents without implementing workflow software at the same time. Workflow software is viewed as primarily a means for tracking and controlling documents.

The discussion on workflow benefits from numerous references to how workflow is being applied in collection systems, mail tracking systems and credit card processing, among others.

Five benefits are given for adopting workflow software:

1) There is faster processing of work, since the total transaction time is generally much greater than the time needed to complete the actual work steps (most of the elapsed time is spent waiting or in transit).

2) Workflow systems are usually based on a client-server architecture, as opposed to mainframes.

3) The information processes of the bank (the “work flows”) are made explicit and are more easily changed.

4) Paper is eliminated.

5) Financial losses from misprocessed paper are eliminated.



Coordination-based Workflow Systems are designed to facilitate the completion of work by providing a framework for the coordination of action. The framework aims to address the domain of human concerns (business processes), rather than the optimization of information or material processes. Such systems have the potential to improve organizational productivity by addressing the issues necessary for customer satisfaction, rather than automating procedures that are not closely related to customer satisfaction.

Coordination-based workflow is grounded in the theory of communication and coordination developed by Fernando Flores and Terry Winograd beginning in the late 1970s (Flores 1979, Winograd and Flores 1987). Having proved successful in a series of case studies, this theory is starting to emerge as the basis of a new understanding of work.

Flores proposed that most human coordination occurs in the requesting, making, and fulfillment of commitments between people, and that the importance of the computer lies in facilitating this kind of coordination rather than simply in data processing. The basic cycle of coordination reappears at many levels of an organization, not just between individuals, and the organization itself can be seen as a network of recurring workflow loops. In an accumulating series of case studies, it has become clear that the workflow-loop map is the basis for measurable and significant improvements in productivity and in the satisfaction of customers and employees. Although the workflow notation was invented for a commercial business context, it is much more general: it can be used to map coordinative processes among humans in any domain.

The adjacent figure shows the generic structure of a coordination loop, called a workflow. The notation supports an interpretation of work as a closed-loop process in which a performer completes actions leading to the satisfaction of a customer’s or client’s request. During any phase, the participants may make requests of others, thus initiating secondary loops whose completion enables forward progress in the primary loop. This generates a network of connected loops: loop segments can be further refined, fractal-like, into more loops. A human coordinative process is a network of recurrent loops designed to carry out a specific function. An organization can be seen as a network of such processes that collectively carry out the organization’s mission.
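As one possible reading of this loop structure (my own sketch in Python, not the authors’ notation), the nesting of primary and secondary loops can be modeled directly; the phase names loosely follow the request → agreement → performance → satisfaction cycle commonly used to describe these loops.

```python
from dataclasses import dataclass, field
from typing import List

# My own sketch (not the authors' notation): a coordination loop between a
# customer and a performer, which may spawn secondary loops during any phase.
PHASES = ["request", "agreement", "performance", "satisfaction"]

@dataclass
class Loop:
    customer: str
    performer: str
    action: str
    phase: str = "request"
    secondary: List["Loop"] = field(default_factory=list)

    def complete(self) -> bool:
        # A loop is closed only when it reaches satisfaction *and* every
        # secondary loop it spawned has been closed as well.
        return self.phase == "satisfaction" and all(s.complete() for s in self.secondary)

procurement = Loop("requester", "purchasing", "procure equipment", phase="performance")
procurement.secondary.append(Loop("purchasing", "vendor", "deliver goods", phase="satisfaction"))
print(procurement.complete())   # False: the primary loop has not yet reached satisfaction
```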

The figure above, the map for a procurement process in an organization, illustrates expansion into secondary workflows; in this case, the primary performance phase is expanded into three sequential secondary loops. The figure also illustrates a new notation that shows how the client-server computing structure interacts with the business process and affects its performance. In a case study at George Mason University, we found that the process of student advising cannot be made to have a fast turnaround unless the transcripts of individual students are available at a moment’s notice to a faculty advisor during an advising session; to achieve the required response time, the database must be mounted on a local server (Denning and Medina-Mora 1994).

The power of this notation derives from two complementary aspects. First, it explicitly shows the actions leading to the satisfaction of an agreement between two parties. Second, it shows direct connections between incompletions of loops and breakdowns such as persistently dissatisfied customers, wasted effort in complaint loops, lack of trust, or poor market credibility. Figure 3 shows how a persistent incompletion in the primary loop of a procurement process can give rise to a new secondary loop for complaint resolution, which can further delay customer satisfaction and add unnecessary load to the supporting computing servers. Case studies show that organizations that persistently complete their loops on time have many fewer of these problems. Business process re-engineering can be significantly facilitated with a workflow-map notation that shows both the business process and the client-server computing systems (Denning and Medina-Mora, 1994).

The workflow map is explicitly concerned with the making and fulfilling of commitments, with determining who is responsible for carrying out the work and by when, and with the satisfaction of the person making a request. These concerns place organizational processes at a higher level of abstraction than the business, material, and information processes of an organization — the latter being the processes that move physical items and information items to the various places where they are manipulated and combined. The more general organizational processes drive material and information processes. For this reason, tools for observing, measuring, and modeling material and information processes — e.g., IDEF1 and Queueing Network Models — are not powerful enough for building workflow systems oriented towards all organizational processes.

© Manoj Khanna 2003 – 2012.






The Radio Spectrum & Tomorrow’s Communication (Part II)


Sitting at Starbucks, enjoying my Soy Latte, I also enjoy the HotSpot. Sometimes I spend a whole day sitting there, plugged into the rest of the world through the HotSpot. The “Wi-Fi” HotSpot. But as I sit there reading about the Wireless Local Loop (WLL), I feel better and sad at the same time. I’m waiting to experience what was supposed to come – but for now it has been missed.

People who want a high-speed internet connection either go for DSL or broadband cable. But are these really the only ways to get onto the internet? Is this what we call the Internet – sitting at home or at the office, plugged in through the cable or the telephone? The picture is equally weird – a 1950s telephone line carrying broadband – which isn’t cheap and profitable at the same time for the telephone companies, leaving the consumer in the abyss.

WLL, on the other hand, is free from all those drawbacks. Radio waves reach everyone’s homes and offices; there is no need to dig up streets or shoehorn data into a system designed for voice. The frequencies at which the radio waves travel are in certain cases unlicensed, which opens up the market to many players. On the technological front, WLL has improved a lot since its inception – it no longer requires a line of sight from the customer’s building to the service provider’s base station, or outdoor installations like satellite dishes.

Today companies have developed terminals that are as easy to use as a mobile telephone – plug-and-play devices. You buy one of these terminals at a store and plug it into your PC, and a self-guided screen takes you through billing and the selection of a voice and data package. The speed on these terminals is around 12 megabits per second – 10 times faster than broadband and 200 times faster than dial-up. In other cases, the data speeds are higher than imagined. The aim is not just to create the so-called HotSpots but to cover entire cities – true mobility in this internet age. Technologies such as third-generation (3G), Universal Mobile Telecommunications System (UMTS), Orthogonal Frequency Division Multiplexing (OFDM), and Voice over Internet Protocol (VoIP) [for carrying calls across the internet and other networks] are not just increasing competition; they are increasing choices for the consumer. Choose what works best for you. Eliminating the so-called BIG players from the picture would certainly give a lot of breathing room to today’s suffocating consumer through tomorrow’s choices.

So how far are we from being able to use this technology? It depends on where you live. Between regulations, delayed adoption, and “the economy”, it all depends on when the BIG players roll it out, even though all the long-distance carriers hold WLL spectrum licenses. But as of now, broadband and DSL are the main selling points, and wireless/mobility is the “little extra”.

© Manoj Khanna 2003 – 2012.



The Radio Spectrum & Tomorrow’s Communication (Part I)


A couple of weeks ago the country missed an opportunity for technological innovation in the internet and telecom spectrum – innovation that could have freed you from having to plug into telephone lines and cable, and given you a faster data connection than you could ever have imagined: Wireless Local Loop (WLL). Together with the ongoing efforts in wireless LANs, ultra-wideband transmission, and mesh networks, WLL has the capacity to deliver internet access ten times faster than the speediest broadband connection.

We have all seen, in the past year, the enormous growth of the 802.11b – “Wi-Fi” – standard and how it has created a revolution in home and office networks. And all of this comes at a time when the telecom sector was supposedly sliding into recession.

The technology itself has improved enormously too – the regulations that governed the radio spectrum 70 years ago no longer reflect the technical limitations of that time. Today, a Digital Signal Processor (DSP) chip – a radio burned into a chip – can reconfigure itself on the fly, hopping from channel to channel thousands of times per second. This dispels some of the myths about transport, traffic, and bandwidth limitations, making most of them irrelevant to today’s scenario.

Thus, the need of the hour is some very basic rethinking – understanding the technology as it stands today. The historic notions about radio frequencies and the spectrum have to change, and more education about them is required.

We have probably lost the opportunity to use this immense power, but for the rest of the world – Asia and Europe – the option is still open. We will need to learn the lessons from what that entails.

© Manoj Khanna 2003 – 2012.





Six Sigma and Software Engineering and Reliability

I recently finished reading the book “What is Six Sigma?” by Peter Pande and Larry Holpp. In terms of software engineering, Six Sigma is much more than a specific analysis of software reliability. It is a quality-improvement framework and a mindset focused on the measurement of process variation as the culprit behind a lack of quality. I want to point out that the term “six sigma”, when used in conjunction with software reliability, has little or nothing to do with statistics, with distributions, with their moments, etc. It is a buzzword and will remain a buzzword until such time as it is defined in statistically correct ways.

The Real Sense of Six Sigma



Six Sigma, as the name implies, stands for six standard deviations from the mean. Sigma is a statistical measure of variability around the average. The concept of Six Sigma comes from reliability engineering’s prediction of system or component failure probabilities. For example, the wear-out time of a component may be normally distributed – that is, described by a mean and a standard deviation. We want a component to have a very small probability of failure before its design life. If we set this at one sigma from the mean, we get ~80% reliability, 2 sigmas gives us ~95%, 3 sigmas ~99%, and so on. Six Sigma gives us ~99.9997% reliability – near perfect; or, put another way, 3.4 defects per million.
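As a rough cross-check on these figures (my own sketch, not from the book), the one-sided reliability at k standard deviations can be computed from the standard normal CDF; the exact values differ a little from the rounded percentages above, and the familiar 3.4 defects-per-million figure corresponds to six sigma with the conventional 1.5-sigma shift of the process mean.

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# One-sided reliability: probability that a normally distributed wear-out
# time exceeds a design life placed k sigmas below the mean.
for k in (1, 2, 3, 4.5, 6):
    reliability = norm_cdf(k)
    dpm = (1.0 - reliability) * 1_000_000     # defects per million
    print(f"{k:>4} sigma: reliability {reliability:.7f}, ~{dpm:,.2f} DPM")

# Six Sigma's 3.4 defects per million assumes a 1.5-sigma long-term shift
# of the process mean, i.e. an effective 4.5 sigma:
print(f"6 sigma with 1.5-sigma shift: {(1 - norm_cdf(6 - 1.5)) * 1_000_000:.1f} DPM")
```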

Six Sigma and Software Reliability



In terms of software engineering, however, it is not quite so clear-cut as it is for mechanical or electronic components. Also, in the case of software reliability, we don’t have very good predictive models, failure models, etc. As somebody suggested, one approach could be to predict the faults remaining as a function of the faults found in earlier phases. In general terms, for software reliability, Six Sigma would mean that the software process finds ~99.9997% of all faults before the software is put into service.
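To make that suggestion concrete, here is a hypothetical sketch (the injection count and per-phase removal efficiencies are invented for illustration) of estimating the faults remaining from the faults found in earlier phases; note how far a plausible phase-containment profile falls short of the ~99.9997% target.

```python
# Hypothetical phase-containment model: each phase is assumed to find a fixed
# fraction of the defects still present when it starts. All numbers are made up.
phases = [
    ("requirements review", 0.50),
    ("design review",       0.55),
    ("code inspection",     0.60),
    ("unit test",           0.45),
    ("system test",         0.50),
]

injected = 1000.0   # assumed defects injected during development
remaining = injected
for name, efficiency in phases:
    found = remaining * efficiency
    remaining -= found
    print(f"{name:<20} found {found:6.1f}, remaining {remaining:6.1f}")

containment = 1.0 - remaining / injected
print(f"overall: {containment:.4%} of injected defects found before release")
```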

What do we need to do?



We need to adjust the design life accordingly. In common terms, the design life of shrink-wrapped software is ten seconds before we open the package, and for custom software, ten seconds after the check clears.

In the language of a Motorola official release:



“Motorola wants to be free of errors and defects 99.9997% of the time in all that it does. That means no more than 3.4 defects per million units.”

– ‘Electronic Business’, October 16, 1989

Statistical Tools – Improved Software Quality

Some points to remember regarding the use of statistical tools to improve software quality:

  • Today, the complexity and size of software has grown substantially, along with the size and complexity of the silicon processors, perhaps exceeding Moore’s Law (a doubling of processing power every 18 months).
  • The business risk of developing very large software systems has spurred the development of a very large shrink-wrapped software industry, primarily because of the failure of many very large, complex systems.
  • Software factories, of which the primary case would be Microsoft, flourish by delivering very large, internally complex products, at prices consumers can afford to bear, exclusively by delivering extremely large volumes of like products. The only technique that has proven effective for quality assurance is using thousands of volunteer quality inspectors (beta testers) to report the errors prior to the final release of the product. Because the cost of manufacturing beta copies is so low, it is far outweighed by the economic benefit the company receives from this type of testing process.
  • Hence, can we ever assume that the software development industry will achieve one standardized, uniform measure of software quality, given that, to be relevant, the definition of a software standard must be agreed upon between the consumer of the software and the producer of the software? I would conjecture: probably not.
  • The reason for this lies in the nature of software. An algorithm may be provably correct but implemented in an inefficient manner (a possible defect). It might be physically damaged in the duplication of a disk (a manufacturing problem), which might manifest itself in the consumer being unable to install and use the product. The root cause of the problem may remain the inefficient implementation of the algorithm, but it manifests itself in so many potential ways that it will, in all likelihood, be impossible for the consumer to identify the defect, and unless a defect can be quantitatively measured it will be impossible to detect.
  • At the very core of the problem, the inefficient algorithm might be the work of one designer or developer who was unaware that more efficient mechanisms exist, or it may be the result of a specification error, or perhaps the algorithm subroutine was purchased from an outside supplier who provided poor instructions regarding its limitations.
  • Statistical tools can be used to analyze overall system quality, such as transaction failures (see the sketch after this list). These tools are severely limited in their applicability to an individual software developer, because the development task is typically to design and write single software modules, as opposed to large-scale software reuse.
  • We keep learning more and developing new insights, so things will change, most probably through the use of better software partitioning and packaging technology.
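As a concrete (and entirely made-up) example of the “overall system quality” point above, a standard p-chart from statistical process control can flag days whose transaction-failure proportion falls outside 3-sigma limits; the daily counts below are invented for illustration.

```python
from math import sqrt

# Made-up daily transaction counts and failure counts, used to illustrate a
# p-chart: a day is flagged when its failure proportion leaves the 3-sigma band.
daily_n = [5000, 5200, 4800, 5100, 4900, 5300, 5000]
daily_f = [  12,   15,    9,   14,   11,   35,   13]

p_bar = sum(daily_f) / sum(daily_n)               # average failure proportion

for n, f in zip(daily_n, daily_f):
    p = f / n
    se = sqrt(p_bar * (1 - p_bar) / n)            # binomial standard error for this day
    ucl, lcl = p_bar + 3 * se, max(0.0, p_bar - 3 * se)
    status = "OUT OF CONTROL" if not (lcl <= p <= ucl) else "ok"
    print(f"n={n:5d}  p={p:.4f}  limits=({lcl:.4f}, {ucl:.4f})  {status}")
```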

Conclusion


In the end, the people at large, the **users**, do not understand why a concept that is worthy and meaningful in the hardware and manufacturing domain ***does not*** apply to software. Consequently, the **users** might be misled and ill-served because they are led to believe that “six sigma” software is somehow comparable to “six sigma” hardware. Is it? Does it?

[I am convinced that others who have read the authoritative literature on six sigma and have attended the appropriate training could talk more intelligently about this technology.]

© Manoj Khanna 2003 – 2013.



