A security expert once said to me, “It’s the fastest teams and the fastest systems that get used for new features, not the most appropriate ones, and this causes security breaches.” I never understood what he meant until I saw it with my own eyes.
Years ago, I was working for a large company with many systems and teams. Our team ran a massively scalable data-aggregation and interrogation platform, sitting behind a dedicated, separate Identification and Authentication service.
As you would expect, this ID service had gone through all the security processes, was hosted on-site, and met industry standards for handling PII (Personally Identifiable Information). It was rock solid, a bastion of security for the wider project, and pretty much a done deal in terms of data protection.
It was also extremely slow.
Slow in performance due to its architecture, and slow in adding new features due to the tricky nature of the project. Our system, on the other hand, was blisteringly fast and scalable, and we had a track record of delivering new features quickly.
One day, it was decided that the company needed to store users’ postcodes/zipcodes. This was PII (in some sparsely populated parts of the world, a single person’s address actually has its own code), was part of a user’s core identity, and so would naturally sit within the obligations of the ID system. Except the ID system wasn’t going to be able to scale to meet the demands of this particular use case, and besides, the team behind it was already behind on implementing other features.
So it was decided that our system would store these postcodes. The problem was, our system (and team) was security cleared only for ‘Amber’ level data: data that was anonymised, non-identifiable, and non-personal. This wasn’t just a case of a bit of extra vetting; the whole architecture of the system was based around this ‘secure but not PII-level secure’ concept, as were our internal processes and ways of working.
So we had to implement a whole new raft of security procedures, adopt new ways of working, construct highly detailed threat models, and go through a rigorous infosec grilling to ‘sign off’ on a level of security our system was never designed to achieve, all while under intense pressure to deliver the new postcode feature. Retroactively upgrading a system’s security level is never as effective as building that security in from the ground up, and while I think we secured it well, it was far from ideal.
So the expert’s words were borne out: the most security-appropriate system (the highly secure but slow ID system) was passed over in favour of our less secure but faster data-aggregation system. No breach that we know of resulted from this, but it was still a risky move, and one that placed a lot of strain on our security apparatus.
So look out for these situations when planning a multi-system platform: just because one system seems like the most natural fit for a piece of functionality, that doesn’t mean it’s the one that will eventually be used. This can apply to other sorts of concerns too, not just security.