Opinion: Maturity Models, DevOps, Security… and wrapping

I used to be young. Hard to believe, I know, but it’s true: I had hair and spots and I drank lager. Back in those days the British Computer Society used to run a programming competition for their Young Professionals’ Group. And I competed in it until I hit my 30s, at which point I began setting questions and judging it.

As time went by, the contestants became more and more incredulous: each team of up to five people had a single PC to work on, and we expected them to design their solutions with pencil and paper. They complained that such constraints were unrealistic, and that in real life they’d be much better resourced: we didn’t cave in to their demands.

Rather quaint, don’t you think? Designing a program before sitting in front of a computer and bashing away? Thinking properly about what it’s supposed to do before you start typing?

Wind forward 20 years and I find myself reading the enthralling 88-page novella that is the General Data Protection Regulation. Article 25 is entitled “Data protection by design and by default”, and it tells us that we need to ensure that the security of personal data is considered throughout the entire lifecycle of any project we carry out.

Security is a wrap?

I once worked with a chap who had an interesting view on how you do security when you’re developing a new system or product. First, you go ahead and implement the system to do what you want it to do. Then you put a “security wrap” (his words, not mine) around it prior to rolling it out. It was never quite clear what a “security wrap” actually was, and I never managed to ask him because I was too busy resisting the urge to stab him to death with a biro.

What the hell has happened to proper software development? Since when has it been acceptable to include only the core functionality in the design, and to lash everything else in at the last minute so that the ops people will take it under their wing and agree to support it through the rest of its life?

Let me give you an example. You’re implementing a new cloud-based application to run (say) your recruitment function. You put together the requirements, you check the password policy to ensure you know what complexity settings you’re obliged to use, you spend a few weeks testing it and then you release the finished thing to the unsuspecting ops team. To which they say: “Hell, why couldn’t you have integrated it into Active Directory rather than giving us another password database to manage?”. Closely followed by the users: “Oh great, another password to remember”.

And don’t moan that this is a far-fetched example: I’ve seen it more than once. Failing to integrate the system properly, or to consider all the requirements up front, leaves you with idiotic features such as someone whose normal login is “Jon.Smith” being forced to log in as “Jonathan.Smith” to this one new system. It’s unnecessary, it annoys the hell out of the ops team and the users, and it’s a security issue because Jon[athan] now has two passwords to keep track of, each of which will demand to be changed on a different day and so make the service desk phone ring all the more.
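By way of illustration, here’s a minimal sketch in Python of what “integrate it into Active Directory” means in practice: delegate the password check to the directory over LDAP rather than growing yet another local password table. The server name, the domain and the use of the ldap3 library are assumptions made for the sake of the example, not a recipe.

    # Minimal sketch: authenticate against Active Directory over LDAP instead
    # of keeping a local password table. Server name and domain are hypothetical.
    from ldap3 import Server, Connection

    AD_SERVER = "ldaps://ad.example.com"   # hypothetical domain controller
    AD_DOMAIN = "EXAMPLE"                  # hypothetical NetBIOS domain name

    def authenticate(username: str, password: str) -> bool:
        """Bind to AD as the user; a successful bind means the password is valid."""
        if not password:
            return False  # an empty password would fall through to an anonymous bind
        server = Server(AD_SERVER)
        try:
            # A simple bind as DOMAIN\username makes the directory check the
            # credentials: no local hash to store, no second password for Jon.
            conn = Connection(server, user=f"{AD_DOMAIN}\\{username}", password=password)
            return conn.bind()
        except Exception:
            return False

One login, one password, and the ops team are left with a single directory to manage rather than one password database per application.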

Thank goodness for the concept of DevOps – and in particular DevSecOps (a concept I subscribe to as a practising information security person). Involving the people who have to run this stuff from the beginning of the project is a godsend, because it means we actually stand half a chance of getting it right from the start. It’s a Good Thing to implore the people designing these systems to observe some kind of best practice, maturity model, call it what you will.

What sort of maturity are we talking about?

Do I care what maturity model you follow? No, not really: I just want developers to work with the poor sods who will be stuck with operating and supporting the stuff that gets thrown over the wall in a so-called “go live”. As a security person I’m quite a fan of the Cybersecurity Capability Maturity Model (C2M2), for example. It includes loads of really nice concepts: role-based access (“Access is granted to identities based on requirements”); a risk-based approach to identifying and addressing vulnerabilities (“Cybersecurity vulnerabilities are addressed according to the assigned priority”); establishing any critical dependencies on the supply chain or other critical resources (“Supplier dependencies are identified according to established criteria”); and the application of proper development practices from day one (“Software to be deployed on assets that are important to the delivery of the function is developed using secure software development practices”).
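To make the first of those concrete, here’s a toy sketch of role-based access in Python: permissions hang off roles, and a user gets a permission only if one of their roles actually requires it. The role and permission names are invented for the example.

    # Toy role-based access check: nobody gets a permission just by asking;
    # it has to be required by a role they hold. All names are invented.
    ROLE_PERMISSIONS = {
        "recruiter":      {"read_candidates", "update_candidates"},
        "hiring_manager": {"read_candidates"},
        "hr_admin":       {"read_candidates", "update_candidates", "delete_candidates"},
    }

    def is_allowed(user_roles: set[str], permission: str) -> bool:
        """Grant the permission only if one of the user's roles requires it."""
        return any(permission in ROLE_PERMISSIONS.get(role, set())
                   for role in user_roles)

    # A hiring manager can read candidate records but not delete them.
    assert is_allowed({"hiring_manager"}, "read_candidates")
    assert not is_allowed({"hiring_manager"}, "delete_candidates")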

If you think I’m getting overly hung up on security maturity in particular, you’re welcome to go for some general development maturity model concepts. Like the 1993 “Capability Maturity Model for Software” from CMU, which contains far-out concepts such as “The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process”. (Okay, I lied: not really all that far-out). The authors were clearly realists, though, as they also noted that: “Maturity Levels 4 and 5 [the highest levels of maturity] are relatively unknown territory for the software industry”.

Or there’s the Software Engineering Institute’s paper, whose 297 pages include gems such as “Understand Customer Needs and Expectations” (note that it says “understand”, not just “read and estimate”), and whose section 2.7 has a diagram that illustrates how the “engineering” function is fed by the “project” and the “organisation”. Sounds like DevOps to me (the ops team are as much “customers” as the end users, after all): the idea existed even though nobody had invented the word when that paper was written in 1995. Even back then the authors saw a need to “recognize the responsibility of the systems engineering function to address the entire concept of customer, which includes the user”, to adopt “user-centered [sic] development and maintenance processes”, and to make an effort “to identify any unique end-user needs and expectations and to obtain customer approval to include them”. Yes, that means working with the end users and, by implication, the poor sods in Operations who get to support the monster that the developers spawn.

I just don’t get why we continue to disregard the stuff we were taught in the old days: establishing requirements prior to design, working through development (preferably with the ops people on hand to advise), and then testing the result against a test plan derived from the requirements. (Incidentally, if you think the test plan should be based on anything other than the requirements, you’re welcome to try to convince me why.) And yes, requirements change as a project goes on, but that’s what change control is for: if you involve the ops people and the end users from the beginning, you’ll almost certainly end up with fewer changes along the way.
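And as a sketch of what “a test plan derived from the requirements” can look like mechanically, here’s a small Python example in which each test case carries the identifier of the requirement it verifies, so coverage of the requirements list can be checked. The requirement ID and the password policy are invented for the example.

    # Sketch: every test is tagged with the requirement it was derived from.
    import re
    import unittest

    def requirement(req_id: str):
        """Decorator tagging a test with the requirement it verifies."""
        def tag(test_func):
            test_func.requirement_id = req_id
            return test_func
        return tag

    def meets_password_policy(password: str) -> bool:
        """Hypothetical policy: at least 12 characters, with letters and digits."""
        return (len(password) >= 12
                and re.search(r"[A-Za-z]", password) is not None
                and re.search(r"\d", password) is not None)

    class RecruitmentAppTests(unittest.TestCase):
        @requirement("REQ-021: passwords meet the corporate complexity policy")
        def test_short_password_is_rejected(self):
            self.assertFalse(meets_password_policy("abc123"))

        @requirement("REQ-021: passwords meet the corporate complexity policy")
        def test_compliant_password_is_accepted(self):
            self.assertTrue(meets_password_policy("correct horse 42"))

    if __name__ == "__main__":
        unittest.main()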

How have we forgotten that software engineering is an end-to-end concept? It’s easy to think of “software engineering” as meaning “writing software”, but the IEEE defines it as “the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software”: there’s nothing there to say that consideration of the “operation” aspect has to wait until the “development” bit has finished.

If you look to a suitable maturity model, then, you stand a chance of doing a better job of software development than if you don’t.

But that’s only because maturity models have been around for far longer than many of us think, and because they simply tell us what we’ve known all along: don’t start designing until you’ve understood the requirements; don’t start developing until you’ve sorted out the design; and the way to understand the requirements is to get them from the users and the ops people.