Should Software Companies Be Held Liable for Security Flaws?

The Biden administration has called on software firms to take more responsibility for making sure their products can’t be hacked

Summary

  • The former U.S. National Cyber Director and the vice president of the Information Technology and Innovation Foundation face off

Does the software industry need to do more to keep its products safe from hackers?

The Biden administration thinks so. As part of its national cybersecurity strategy released in March, it called on software firms to take more responsibility for making sure their products can’t be hacked, and indicated it would support legislation to hold them liable if they don’t take reasonable steps to secure their products.

Software firms are financially motivated to get products to market quickly rather than to prioritize security, so market forces alone aren’t always enough to keep critical systems safe, the strategy’s supporters say. Most software flaws aren’t discovered until after products are on the market, they say, which leaves consumers holding the bag for poor cybersecurity.

Opponents counter that imposing liability on software makers for insecure products would do little to prevent cyberattacks, and likely do more harm than good. The industry would simply pass on those costs to customers, and slow down the pace of innovation to protect itself, they say.

Chris Inglis, who served as the U.S. National Cyber Director from June 2021 to February 2023, makes the case for shifting more responsibility for cybersecurity onto software firms. Daniel Castro, vice president of the Information Technology and Innovation Foundation and director of ITIF’s Center for Data Innovation, argues against it.

Yes: Follow the Transportation-Industry Model

By Chris Inglis

Imagine if buying and operating a car followed the model we currently employ to buy the computers and software that are essential to our daily lives. Responsibility for things like faulty vehicle air bags, seat belts and anti-lock brakes would fall to consumers rather than automakers, not to mention accountability for the design and operation of a safe and reliable highway transportation system.

This is, of course, an absurd notion, but it highlights the nonsensical approach that is the default model for today’s cyber landscape, where end users shoulder more of the burden for cybersecurity than the industry that develops and makes the products. With technology becoming ever more central to our daily lives, and cyberattacks a constant and growing threat, we can no longer afford to allow safety and reliability to be a hit-or-miss priority for those who build our digital foundations.

Following a model that’s been successful in delivering physical safety for transportation systems, the U.S. national cybersecurity strategy released in March aims to use a combination of incentives and assignment of liability to “rebalance responsibility” in cyberspace toward those best positioned and capable of shouldering it at scale—developers and manufacturers themselves.

Despite a long history of incidents, the U.S. software industry isn’t covered by any specific law guiding the safety and security of consumer technology. While some consumer-protection laws apply, they aren’t tailored to the unique challenges presented by software—nor are their enforcement arms focused on safety and security. This has led to instances where tech companies have been slow to respond to security vulnerabilities, have released products with major flaws or have failed to provide adequate warnings about the risks associated with their products.

Indeed, a business model that puts a priority on innovation and speed to market rather than safety and security has yielded a whack-a-mole system in which most of the effort to find and fix security flaws occurs after software has been shipped and customers have been put at risk. Imagine if automakers did little safety engineering but promised to fix any dangerous flaws users could discover.

Market forces remain the first, best route to agile and effective innovation in basic security mechanisms, but when they fail, the government needs to step in. One approach is to impose a formal “duty of care” obligation—similar to that in the auto industry—requiring software makers to adopt certain basic security-conscious practices when developing and updating their products.

Any software liability solution must take into account the unique characteristics of the technology sector, where innovation remains critical to improved performance. Liability therefore should focus on codifying and implementing best practices in secure code development, while avoiding one-size-fits-all requirements that would have varying effects across diverse product offerings.

Importantly, regulation and liability must be rigorously informed by the private sector that would bear it, and harmonized across various would-be regulators to ensure that it delivers expected benefits while imposing the lightest possible burden. Such a regime also would need to protect developers and manufacturers by ensuring that fulfilling a duty of care is a valid defense for breaches caused by human error or product misuse.

Some have argued that introducing any set of security requirements will increase development time and the costs of bringing new software to market, resulting in higher retail prices for everyone. This is shortsighted. Safer cars cost more until one accounts for the accidents and breakdowns they prevent. The same is true of safer software—in the end, its costs are more than outweighed by the savings from a reduced security burden for all.

Concerns that software liability would stifle innovation and create a chilling effect on companies’ willingness to create new products or features also are overblown. The industry has proved again and again that it is capable of accommodating regulation and building society-rocking innovation. What’s more, the government could grant exemptions for new innovations or small companies where innovation and experimentation are critical drivers of growth.

As software becomes more integrated into our lives, and diffused to more far-flung places like space and rural areas, the potential consequences of cyberattacks will expand in equal measure. Establishing a basic duty of care won’t prevent all flaws, but it will provide for a more dependable and accountable technology market.

Chris Inglis served as the first Senate-confirmed U.S. National Cyber Director from June 2021 to February 2023. He can be reached at reports@wsj.com.

No: Software Companies Are Just Scapegoats

By Daniel Castro

Whenever there is a data breach, ransomware attack or other cybersecurity incident, people want to find someone to blame. The obvious culprit is the attacker, often a cybercriminal or nation-state hacker. But since they often evade justice, it is easier to point the finger closer to home.

Software companies are one scapegoat. Making them liable for cybersecurity flaws has some obvious appeal. Imposing the costs of security failures on them presumably would increase their incentive to fix problems proactively. But this assumes that a lack of financial commitment is the reason for insecure software.

In fact, software companies already invest heavily in cybersecurity. Between 2015 and 2020, for example, Microsoft said it spent $1 billion a year on cybersecurity, and in 2021 it committed to quadrupling its spending to $20 billion over five years. In addition, major software companies have spent tens of millions of dollars on bug bounty programs that pay security researchers for pointing out software flaws hackers could potentially exploit.

Despite that spending, companies routinely discover and patch security vulnerabilities because modern software is incredibly complex. Not only do software developers write thousands of lines of code, but their applications interact with external software libraries and operating systems that involve millions of lines of code. These code bases constantly change as developers make updates, so even as companies fix old bugs, new ones emerge.

Critics argue that software companies should be treated like automakers, which are liable for defective vehicles. But the comparison isn’t perfect. Cars and trucks need working door locks, but auto companies aren’t liable if thieves find ways to break in and steal valuables. Moreover, automakers routinely issue recalls for software defects in their vehicles because, despite product-liability laws, it isn’t easy to produce error-free code. Indeed, the reason software security is so hard is that attackers need only find one vulnerability, while those securing the product must find them all.

Human errors cause most data breaches. People make mistakes, and more liability won’t change that. If it did, holding anyone who clicks on a phishing email or uses a weak password criminally responsible would solve many cybersecurity issues. Likewise, expecting programmers to produce perfect code is unreasonable and would discourage people from pursuing careers in cybersecurity.

That doesn’t mean we should absolve businesses of sloppy security. But there is a misconception that companies don’t face accountability for poor security. They do. Software companies risk enormous financial consequences for security failures. SolarWinds, for example, suffered reputational damage, hurting both its sales and stock price, after attackers exploited a vulnerability in its network monitoring software.

Regulators also can file lawsuits against companies for security failures. The Federal Trade Commission has brought cases against D-Link, TRENDnet and ASUS for insecure software in their internet-connected devices. More recently, the Justice Department launched its civil cyber-fraud initiative to obtain multimillion-dollar settlements from companies that sell the government insecure technology.

Imposing liability on software companies would likely do more harm than good. Companies would pass those costs on to customers, raising prices for everyone with no guarantee of better security. And faced with higher liability risks, companies would have an incentive to innovate less, such as by cutting features to shrink the code base and slowing down the software-development life cycle. That slowdown would extend to advanced security features, such as multifactor authentication and quantum-proof encryption, which can involve complex code. Companies would have less incentive to protect their customers and more to protect themselves.

There are no silver bullets, but there are many viable options for government and industry to work together to improve cybersecurity. More cybersecurity training for software developers, systems administrators and other technical workers is necessary to build secure software, configure it correctly and remediate vulnerabilities. Better software supply-chain security—such as including a software bill of materials that lists all the digital components in an application—will help security experts quickly track down vulnerabilities. And the government and industry should work together to audit and refine the security of widely used open-source software, as well as develop and test AI tools to identify security vulnerabilities in code.
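A minimal sketch of that idea in Python: the snippet below cross-references a simplified, CycloneDX-style software bill of materials against a small advisory list. The SBOM fragment, the advisory entries and the audit function are illustrative stand-ins rather than any particular vendor’s tooling; a production tool would parse a full SBOM document and query a live vulnerability database such as the NVD.

    import json

    # A simplified, CycloneDX-style SBOM fragment (illustrative only): each
    # entry names one third-party component that ships inside the application.
    SBOM_JSON = """
    {
      "bomFormat": "CycloneDX",
      "components": [
        {"type": "library", "name": "log4j-core", "version": "2.14.1"},
        {"type": "library", "name": "openssl", "version": "3.0.1"},
        {"type": "library", "name": "zlib", "version": "1.3.1"}
      ]
    }
    """

    # Toy advisory feed mapping (component, version) pairs to known flaws.
    # A real tool would query a vulnerability database instead of a dict.
    ADVISORIES = {
        ("log4j-core", "2.14.1"): "CVE-2021-44228 (Log4Shell)",
        ("openssl", "3.0.1"): "CVE-2022-0778",
    }

    def audit(sbom_json):
        # Flag every listed component that has a known advisory against it.
        sbom = json.loads(sbom_json)
        for component in sbom["components"]:
            key = (component["name"], component["version"])
            if key in ADVISORIES:
                print(f"VULNERABLE: {key[0]} {key[1]} -> {ADVISORIES[key]}")
            else:
                print(f"ok: {key[0]} {key[1]}")

    audit(SBOM_JSON)

The value is in the lookup itself: because the bill of materials enumerates every component an application ships with, a newly disclosed flaw in a single library can be matched to affected products in minutes rather than weeks.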

While making companies liable for insecure software may sound good on paper, it wouldn’t work in practice.
