If you don’t have a secure design, or a secure development model, don’t develop software.
Though obvious, I think it still needs to be said: There’s no software that provides any value to anybody if it’s not secure. It’s like a car without wheels.
And data breaches are expensive and embarrassing. The average cost of a data breach in the United States more than doubled between 2006 and 2019, reaching US$8.19 million.
There are two basic approaches to software development and security.
One focuses on risk-based, functional security towards the end of the development process.
The other — which is the one I favor and recommend — has security embedded throughout software development and operations, all the way through to product end-of-life.
In the first, you use iterative development to build your minimum viable product and create deliverable code, and only then, at the end of the development process, do you enter the testing cycle to figure out whether you actually have a secure product.
A lot of companies develop software this way and rely on software built using this approach.
In the alternative model, you secure every stage of a product’s life-cycle. Right from the start you are thinking about and planning for security. You create the strategy, define the system architecture, and then prioritize the product epics that you deliver throughout the product’s life. When you move security into each development and ops process, you consider security at every single step.
As I see it, there are many advantages to this second approach. First, you start with the customer’s pain points, both to shape the strategy behind a software solution and to understand why you’re creating the software at all.
You move from system architecture to software architecture, carrying the security requirements, needs and modules with you, and enhancing them as you go.
Only then do you start product development — from a secure foundation, but still keeping every step of the process secure. This applies to the teams designing, writing and delivering the actual code as well.
If you follow these principles, by the time you get to the release candidate, you have already considered security as a core value of the product and the requirements behind the product. You’ve considered security in your strategy, systems architecture, software architecture, and through each development stage.
In this model, when you get to the end of the software development cycle, you end up confirming that the new software product is secure, instead of testing to see if it is. If you’ve followed this process, there’s really not that much left for you to do. The product is inherently more secure.
Jumping without a parachute
In that first model, starting to test whether you can break the software is a bit like jumping off a ledge without a parachute. The software is effectively already broken, because it’s been built without security embedded along the way. Most likely, security problems will be exposed, and considerable changes to the software, including its architecture, will be needed.
I compare this to launching a boat with holes in it, plugging the holes while you are trying to stay afloat, and hoping for the best. It’s a model you simply don’t want to use. So why do teams find themselves using it?
It is because they are more interested in developing the product than in securing it. Here’s an example that fits the world we currently live in: a software company builds an app that measures people’s temperature so that COVID-19 cases can be identified. Speed to market is the most important thing. The software team and the organizations talking to them are excited about creating a solution that could help everyone. Then, right before release, when the product is finally tested, everyone realizes that anyone can hook into the app, see all of a user’s geolocation data, match it to the user’s ID, and track that person’s movements anywhere on Earth, because the data is passed in an unencrypted stream. Not the outcome anyone wants.
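One simple mitigation for a leak like this is to never transmit the raw user ID alongside location or health data in the first place. As a minimal sketch (the function and key names here are hypothetical, not from any real app), the server can derive a stable pseudonym with a keyed hash, so that records cannot be linked back to a real identity without the server-side secret:

```python
import hashlib
import hmac

def pseudonymize_user_id(user_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym via HMAC-SHA256 so that records
    can't be linked back to the real ID without the secret key."""
    return hmac.new(secret_key, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

# The key lives only on the server; it never ships inside the app.
key = b"example-server-side-secret"
record = {
    "user": pseudonymize_user_id("alice@example.com", key),
    "temp_c": 37.9,
}
print(record["user"])  # an opaque pseudonym, not the real ID
```

The same input and key always yield the same pseudonym, so per-user analytics still work, but anyone sniffing the stream sees only an opaque token. This of course complements, rather than replaces, encrypting the transport itself with TLS.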
Is this the right answer to the problem? No. What’s important is protecting people in a secure way, not simply protecting people. If I’m going to share my private information with a corporation or a government, I expect that information to be stored and used securely.
Security testing that is safe and ethical is also difficult: it takes years to master, and it is expensive. I can train some of my software developers in Unity, and within two or three months they can create AR software. It’s not the same with security: people who are good at it are hard to find.
Beware human nature
Software design methodology and the processes that support it have to be in sync, and they have to be secure. Data breaches don’t usually happen because hackers are smarter than the developers who write the software: they happen because security is often an afterthought.
The other reason that people choose to develop first and test second, when they should be securing first and building second, is that they don’t know what they don’t know. If your software architects tell you that they’re good, you may well believe them, right up until you end up in the newspapers because of a data breach. Too often, it’s only when someone gets hurt that you realize your software is not secure. Your mother can spend a lifetime telling you that if you carry on doing something you could break your leg; until you actually break it, you simply don’t believe her, or you think it won’t happen to you.
When planning a new piece of software, with security embedded throughout the development cycle, here’s my six-point approach:
● Establish and implement security processes from product design forward
● Hire the best people you can find, whose security credentials can be verified
● Plan for human error, and implement tools and processes to catch and correct it
● Make sure that you have budget allocated for testing, and that project teams and senior management understand that security testing is an essential part of creating code, even though testing itself doesn’t produce code
● Remain cautious, continuously improve your security controls and processes, and make sure that your security team is constantly training and improving their skills so they can stay a step ahead of the latest threats and vulnerabilities
● Remember that products mostly don’t get hacked through the back door: They get hacked because of people’s mistakes. Everyone in the software development group should receive security training which maps to their role within the organization. Everyone has a part to play in product security.
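To make the “plan for human error” point concrete, here is a deliberately naive sketch of the kind of automated guardrail a team might wire into a pre-commit hook or CI step: a scan of source text for obviously hardcoded credentials. The patterns and names below are illustrative assumptions; real secret scanners are far more thorough.

```python
import re

# Naive patterns for obviously hardcoded credentials.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*=\s*['\"][^'\"]{8,}['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def find_hardcoded_secrets(source: str) -> list[str]:
    """Return offending lines so a pre-commit hook can block the commit."""
    return [
        line for line in source.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]

sample = 'db_password = "hunter2-hunter2"\nprint("hello")'
print(find_hardcoded_secrets(sample))  # flags the first line only
```

The value of a check like this isn’t sophistication; it’s that it runs on every commit, catching the routine mistakes that training alone never fully eliminates.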
Organizations may have any number of reasons for developing software first and testing for security later.
I would argue that you should resist them.
Instead, create secure architectures and processes before you even start developing your software. Then practice sound cyber-security at every step of the development process. If you are the customer, insist that your software development vendor or partner does this, and verify their approach. Your vendor should be following a well-planned, secure, software development life cycle guide from the concept stage through to product end-of-life and they should be able to prove it to you.
Following these steps could save you millions and a lot of heartache, and keep you off the wrong side of the news.
About the Author
Igor Bergman is Lenovo’s Vice President of Cloud & Software. He has a crazy focus on resolving real customer problems, and his passion for AI, ML, IoT, SCRUM, and Kanban is set by a strategic, customer-focused context. Start a conversation with Igor on LinkedIn or Twitter.