Quality in the age of quantum computing
I left India for America as a post-graduate student in the late 1980s. In those days, it was unheard of for employees of Indian firms to be issued H-1B visas to enter America to work without having first studied at an American university.
Most Indians went to America as students on what was called an F1 visa.
The line outside the US Consulate in Madras would start to form around 2am; no appointments were given, and visa interviews were on a first-come, first-served basis.
Many of these Indian students opted to stay back in the US; their US employers filed for H-1Bs on their behalf—employing them after they had finished Master’s or PhD courses from an American university.
The flood of H-1B visa holders who came in directly from India without first getting an American degree started in earnest only in the mid 1990s.
The American-schooled Indian F1 crowd sneered at these new arrivals, and derisively referred to them as “FOB” or “fresh off the boat” Indians.
The H-1B holders came up with their own unkind epithet about their American-schooled Indian brethren, calling them “coconuts”.
I trust that this insult will need no further explanation if I simply ask you to look at the different hues on the inside and the outside of a coconut.
In those years, the main impediment to “fresh off the boat” Indian programmers was that many Americans would complain about their Eastern, non-linear way of thinking, and bemoan the lack of quality in their work.
None doubted their individual competence and sheer brilliance at raw programming, but the quality of the finished product when stitched together was doubtful.
This was in contrast to America’s almost Germanic obsession with process, which, luckily for us coconuts, we had imbibed during our periods of study at American universities.
The Indian IT services industry quickly caught on to this, and set about transforming themselves with zeal.
They entered the cocoon of Carnegie-Mellon University’s ‘SEI’ or Software Engineering Institute’s ‘CMM’ or Capability Maturity Model as caterpillars and emerged with wings, stamped with a CMM Level 5 certification.
Any organization that received this certification could claim to be the best in the world when it came to the quality of their software engineering processes—and if my memory serves me right, when I returned to India in 2002, there were over 60 CMM Level 5 certified organizations in India, compared with a low single-digit number in the West.
The “low quality” objection from American companies simply disappeared.
Urban legend has it that Jack Welch, when speaking of General Electric’s large scale move into India and its use of Indian outsourcers, remarked, “We came for the cost, but stayed for the quality.”
This was all very well in the 1990s and the noughties, an age when computer engineering had emerged from the Neanderthal world of assembly language and machine-level coding into what came to be famously called the “waterfall” process of computer programming: a logical, sequential method for designing and developing software, more suited to the world of Homo sapiens sapiens.
The waterfall method begins with gathering the user’s requirements for the programme, after which the process moves into system architecture and design before it is handed over to the programmers.
The programmers then write the requisite computer code before handing it over to the testers, who test each unit of code, as well as the entire programme.
The finished programme is then tested by users for acceptance before a final “regression” test verifies that it works with, and has not broken, the computer programmes already in use.
Only after a programme has passed all stages of the cascades in this waterfall process is it finally released into an “always-on” environment.
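The cascade described above can be sketched as a toy pipeline. The stage names and the `run_waterfall` helper below are purely illustrative, not part of any standard; the point is only that each stage must finish before the next begins.

```python
# Illustrative sketch of the sequential "waterfall" cascade described above.
# Stage names are hypothetical labels chosen for this example.

WATERFALL_STAGES = [
    "requirements gathering",
    "architecture and design",
    "coding",
    "unit and system testing",
    "user acceptance testing",
    "regression testing",
    "release",
]

def run_waterfall(stages):
    """Run stages strictly in order; each must complete before the next starts."""
    completed = []
    for stage in stages:
        # In a real project each stage produces artefacts (specifications,
        # code, test reports) that gate entry into the next stage; here we
        # simply record that the stage ran in sequence.
        completed.append(stage)
    return completed

result = run_waterfall(WATERFALL_STAGES)
print(result[-1])  # release happens only after every prior stage completes
```

The design point is that nothing is released into an “always-on” environment until every earlier stage in the list has run, which is what made the process so amenable to stage-by-stage quality audits.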
The quantity of computer code was measured simply by how many thousands of lines of coding instructions a computer programme contained, and the elegance of the waterfall process allowed CMM to serve easily as the arbiter of quality over what was a logical sequence of events.
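Counting code in “thousands of lines” (often abbreviated KLOC) can be sketched with a few lines of Python. The filtering rules here are an assumption for illustration; real counters differ in what they treat as a line of coding instructions.

```python
# Hedged sketch: counting "thousands of lines of code" (KLOC).
# Treating blank lines and '#' comments as non-code is an illustrative
# convention; real-world counters apply more elaborate rules.

def count_kloc(source: str) -> float:
    """Count non-blank, non-comment lines and express the total in KLOC."""
    code_lines = [
        line for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    ]
    return len(code_lines) / 1000.0

sample = "x = 1\n# a comment\n\ny = x + 1\n"
print(count_kloc(sample))  # two code lines -> 0.002 KLOC
```

The crudeness of such a measure is part of why the industry later moved to “function points”, which size a programme by what it does rather than by how many lines it takes.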
There have been variations on the waterfall process in the past two decades. The mid 1990s saw a move to “object-oriented” programming, which organized code less around sequential logic and more around objects that bundle data together with the actions performed on it, and computer code came to be measured in “function points” rather than in thousands of lines of code.
Methods such as “Agile” and “DevOps” have been all the rage recently. I shall not delve into a detailed explanation of each of these so as not to bore you, and because I don’t understand them fully.
But, as computer programming moves from the world of Homo sapiens sapiens into the artificially intelligent world of sapient machines that can program themselves, new methods of checking the quality and integrity of computer code are the need of the hour. The added dimension of the cyber-security of the code and its associated data only magnifies this need.
Unsurprisingly, an organization is now trying to take on this mantle.
The Consortium for IT Software Quality, or CISQ, is a sponsored special interest group founded jointly by the SEI at Carnegie-Mellon University and the Object Management Group. CISQ is chartered to create international standards for measuring the size and structural quality of software after analysing the actual computer source code written for Machine Learning, Artificial Intelligence, and “bot” programmes, rather than the various processes used to build these programmes, and regardless of whether the code is generated by a human or a computer.
The executive director of CISQ, Bill Curtis, led the development of the CMM while at the SEI, and the organization is now trying to establish its credibility as an arbiter of software quality. Its heritage, and its pivot down to the source code level while ignoring the various programming processes, give it a high chance of success.
Siddharth Pai is a world-renowned technology consultant who has led over $20 billion in complex, first-of-a-kind outsourcing transactions.