Six Pillars Of Security, #6: Appropriate Escalation and Containment

 

  • In the event of a breach or an infringement of your company’s responsibilities, timely and appropriate escalation is required.
  • During one breach I witnessed at a company I used to work for, inappropriate and untimely escalation made the situation a lot worse; the dev team and their managers failed to escalate a serious issue (users’ credentials being logged in a log file) quickly and appropriately, and as a result the situation escalated.
    • Access to files is often logged. In the case of a breach, the fewer people who accessed the compromised resources, the smaller the aftermath (e.g. in the case of sensitive data being logged to a file, it’s easier to deal with five people who accessed the compromised file than thirty). Reducing initial propagation helps here.
  • It is important that an issue is only propagated (revealed to other teams/parties) after it has been fixed.
    • Premature propagation can lead to a lot of chaos and panic and ‘quick fixes’ which exacerbate the situation.
    • Escalation not propagation!
  • Share details after the incident has been contained and fixed.
  • Basic Action Points For A Team:
    • Identify the existing breach report escalation structure.
    • Question it if need be.
    • Communicate this to the team and ensure they understand it.

Six Pillars Of Security, #5: Controlling External Risk

Once data leaves your system, your ability to control it rapidly diminishes. However, there are steps you can take to mitigate risks:

  • Only giving clients the data they require
    • For example, with a centralized service application this would involve analyzing what each client needs, and applying logic so that they only receive that information (a minimal filtering sketch follows this list).
  • Actively engaging with client teams, asking them about security, guiding them. Even though the data has left your system, it is still your data and you need to ensure others are being careful with it.
  • You can’t allow other teams to rely on you for validation, especially clients providing data; relying on your validation effectively cripples their ability to validate new changes on their front end, which can lead to them not validating at all.
  • Sometimes, getting a client to understand what data is ‘toxic’ and what data isn’t is more effective than trying to validate everything.
  • Basic Action Points For A Team:
    • Identify which parts of your data are actually sensitive. This might be more than you initially thought.
    • Identify what parts of your data are ‘toxic’ (e.g. can’t be considered trustworthy), and make sure that clients understand that.
    • Investigate what data your clients actually need, especially with regards to sensitive data.
    • Talk to other teams and see how they are validating, etc.
    • Apply filtering if appropriate.
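
As a rough illustration of the ‘only give clients the data they require’ point, here is a minimal Python sketch of per-client field filtering. The client names and field lists are hypothetical, not taken from any real system; the point is simply that filtering happens before the data leaves your service.

```python
# Hypothetical sketch: per-client field filtering before data leaves the service.
# Client names and field lists below are illustrative only.

CLIENT_FIELD_ALLOWLISTS = {
    "mobile-app": {"id", "display_name", "avatar_url"},
    "reporting-service": {"id", "created_at", "country"},
}

def filter_for_client(client_id: str, record: dict) -> dict:
    """Return only the fields this client is entitled to receive."""
    allowed = CLIENT_FIELD_ALLOWLISTS.get(client_id, set())
    return {key: value for key, value in record.items() if key in allowed}

record = {
    "id": 42,
    "display_name": "Ada",
    "avatar_url": "https://example.org/ada.png",
    "email": "ada@example.org",   # sensitive: never sent to the mobile app
    "created_at": "2016-10-01",
    "country": "GB",
}

print(filter_for_client("mobile-app", record))
# {'id': 42, 'display_name': 'Ada', 'avatar_url': 'https://example.org/ada.png'}
```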

Six Pillars Of Security, #4: Not Helping The Bad Guys

  • Systems often accidentally reveal their workings and vulnerabilities:
  • Revealing the server/component type and version in their error response: this allows the attacker to search for known vulnerabilities against that version number.
  • Logging full stack traces, or worse, showing them in the error response: again, this tells the attacker what libraries the system uses, and if any of those libraries have known vulnerabilities, the attacker can exploit this (a sketch of a sanitised error handler follows this list).
  • Just because your client is internal and not ‘user facing’, don’t assume that your error responses won’t filter through to the outside; in a highly distributed system, you never know where your response will end up.
  • Same goes for logging; you don’t know who will end up reading your error logs.
  • Basic Action Points For A Team:
    • Never log full stack traces, unless absolutely necessary.
    • Confirm that none of your responses contain information about the server products you use or their versions.
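
To illustrate the action points above, here is a minimal, hypothetical Python sketch of an error handler that keeps the detail in internal logs and hands the caller only a generic message plus an opaque reference. The function name and log format are assumptions for the example, not a prescribed implementation.

```python
# Hypothetical sketch: generic error to the caller, detail kept in internal logs.
import logging
import uuid

logger = logging.getLogger("internal")

def safe_error_response(exc: Exception) -> dict:
    """Log the detail internally; give the caller only an opaque reference."""
    error_ref = str(uuid.uuid4())
    # Internal log: enough to debug, keyed by the reference we hand back.
    logger.error("error_ref=%s type=%s message=%s", error_ref, type(exc).__name__, exc)
    # External response: no stack trace, no library names, no server/version details.
    return {"error": "Internal error", "reference": error_ref}

try:
    raise ValueError("database column 'users.password' not found")  # contrived failure
except ValueError as exc:
    print(safe_error_response(exc))  # caller sees only the generic body and reference
```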

Six Pillars Of Security, #3: Detection Of Breaches


  • Good visualization and graphing can expose suspicious activity.
  • Tools like Datadog are invaluable when coupled with appropriate queries/visualization.
  • For example, if a user or client is generating a lot of backend usage from a comparatively small number of requests, this can be a sign of a breach. If the front-end to back-end activity ratio per user is plotted, you can see this happening in real time.
    • In this example, monitoring and graphing database read/out volume would be a quick and easy measure; simple data breaches normally show up as a ‘swell’ in outbound data. More sophisticated attackers will extract the data slowly over a period of time to avoid such detection, so more specialized graphing and alarms would be needed, e.g. database usage aggregated over time, against average usage for that user (a ratio-based sketch follows the action points below).
  • Likewise, response size is a good indication of a breach/extraction; if someone is trying to steal information, then the amount of data per response is likely to be higher.
  • Looking for large numbers of bad requests, 404s, ‘bad search parameters’ etc. is a good metric as well, as it’s a sign that someone is trying out different things with your API.
  • Basic Action Points For A Team:
    • Set up basic monitoring for unusual activity volumes/frequencies/sizes etc., especially relating to other metrics, e.g. the amount of database activity generated by a particular request or user (in the case of a database export). Tools like Datadog would be good here.
    • Set up monitoring for bad requests/404s/bad search parameters; these can be a sign of someone trying to guess a resource-id or probing how to access your system.
    • Investigate other options/possibilities for monitoring/visualization, specific to your project.
    • Understanding the threat model of your project, and what kind of attacks you are likely to encounter, is key here: the belief is that commercial attackers are our main threat; is this accurate?
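
As a rough sketch of the front-end to back-end ratio idea, the snippet below flags users whose database read volume is out of proportion to the number of requests they made. The user names, numbers and threshold are made up for illustration; in practice these figures would come from your metrics platform (e.g. a Datadog query) rather than an in-memory dict.

```python
# Hypothetical sketch: flag users whose backend read volume is out of proportion
# to their front-end request count. All values below are illustrative only.

usage_by_user = {
    # user_id: (frontend_requests, backend_rows_read)
    "alice": (120, 600),
    "bob": (15, 48000),    # suspicious: huge read volume from very few requests
    "carol": (300, 1200),
}

READS_PER_REQUEST_THRESHOLD = 50  # tune against average usage for your system

def suspicious_users(usage):
    """Return (user, ratio) pairs where reads-per-request exceeds the threshold."""
    flagged = []
    for user, (requests, rows_read) in usage.items():
        if requests == 0:
            continue
        ratio = rows_read / requests
        if ratio > READS_PER_REQUEST_THRESHOLD:
            flagged.append((user, round(ratio, 1)))
    return flagged

print(suspicious_users(usage_by_user))  # [('bob', 3200.0)]
```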

Six Pillars Of Security, #2: Configuration, Internal Processes And Human Error

 

  • Take care with configuration. Misconfiguration is one of the top five reasons behind companies getting hacked.
  • For sensitive data and functionality, consider incorporating a role-based permissions system to reduce risk and to help track what happened in the event of an attack (see the sketch after this list).
  • The principle of least privilege again helps secure a system; a leaked password isn’t any use if there is no way to invoke important processes with it.
  • The team must follow the internal processes for any key handling. Security developers should be completely familiar with these processes, and other team members should at least have read them.
  • Review any security vulnerabilities or concerns for third-party libraries. Some well-known libraries have massive flaws, e.g. XMLDecoder (which is core Java, but the point still stands) allows external XML to trigger system processes and execute Java code: http://blog.diniscruz.com/2013/08/using-xmldecoder-to-execute-server-side.html
  • Basic Action Points For A Team:
    • Look at existing configuration, can it be made more secure? Talk to those responsible for it.
    • Implement a policy of fine-grained roles/permissions, and least privilege if possible.
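
To make the role-based permissions and least-privilege points concrete, here is a minimal Python sketch with deny-by-default checks. The role names and actions are hypothetical; the only point is that nothing is permitted unless explicitly granted, and denials are visible after the fact.

```python
# Hypothetical sketch: coarse role-based permission checks with least privilege
# as the default (no role, or an unknown role, grants nothing).

ROLE_PERMISSIONS = {
    "viewer": {"read_report"},
    "operator": {"read_report", "restart_service"},
    "admin": {"read_report", "restart_service", "rotate_keys"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default; only explicitly granted actions are permitted."""
    return action in ROLE_PERMISSIONS.get(role, set())

def rotate_keys(user_role: str) -> None:
    if not is_allowed(user_role, "rotate_keys"):
        # Worth logging the denial too: it helps reconstruct events after an incident.
        raise PermissionError(f"role '{user_role}' may not rotate keys")
    print("keys rotated")

rotate_keys("admin")          # permitted: prints "keys rotated"
try:
    rotate_keys("operator")   # denied by default
except PermissionError as denied:
    print(denied)
```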

Six Pillars Of Security, #1: Secure Application Development


Infosec requirements should be determined during requirements gathering and, where relevant, integrated into JIRA tickets as infosec non-functional requirements.

  • Each system should have a comprehensive threat model documented, with all relevant attack vectors called out (key compromise, network compromise, DDoS).
    • This would include understanding the business model of those commercial attackers who would target us.
  • Security-focused developers should be familiar with the OWASP Top 10 vulnerabilities and the Common Weakness Enumeration (CWE) Top 25, and should validate all tickets with a security element against these.
  • Unvalidated/unstructured inputs into your system should be identified and the risks presented to product stakeholders. These are vectors for injection attacks (a minimal validation sketch follows the action points below).
  • Attacks may still exploit product weaknesses without directly targeting the product itself.
    • E.g. inserting an injection attack into one of your incoming requests, which then ends up being read by other systems, which then compromises a component in those systems, which then opens the door to further attacks, and so on.
    • Sounds far-fetched, but commercial attackers (who are more proficient than your usual Anonymous, hacktivist, ‘script kiddie’ DDoS-er) will know how to perform complex, multi-system attacks like this.
  • Action Points:
    • Infosec review process integrated into workflow.
    • Comprehensive threat model for project/product, including up and downstream components.
    • Product stakeholders need to do one of the following: a) accept the risks arising from the threat model, b) escalate, or c) add tickets to remove the risk.
    • Review of incoming data schema, seeing which parts are sensitive, which parts are toxic, and which parts are open to exploitation.
    • Security developers should be completely familiar with, and other team members trained in, the OWASP Top 10 and CWE Top 25 security flaws.
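
As a small illustration of treating unvalidated input as an injection vector, the sketch below validates the structure of an incoming field before it is used anywhere downstream. The field name and pattern are assumptions for the example; the principle is to accept only the expected structure and reject everything else outright.

```python
# Hypothetical sketch: structural validation of an untrusted input field before
# it goes anywhere near a query or a downstream system.
import re

ORDER_ID_PATTERN = re.compile(r"^[A-Z]{2}-\d{6}$")  # e.g. "GB-123456" (illustrative format)

def parse_order_id(raw: str) -> str:
    """Accept only the expected structure; reject everything else outright."""
    candidate = raw.strip()
    if not ORDER_ID_PATTERN.fullmatch(candidate):
        raise ValueError("order id does not match the expected format")
    return candidate

print(parse_order_id("GB-123456"))                        # passes validation
try:
    parse_order_id("GB-123456'; DROP TABLE orders")       # injection-style payload
except ValueError as rejected:
    print(rejected)                                       # rejected before use
```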

Thoughts On The IBM-Watson-Conversation Hackathon, London 2016

This October I had the privilege of participating in the IBM-Watson-Conversation Hackathon in London, as part of a five-person BBC team. Out of the fifteen teams that participated, we eventually won, with our combination of technology, ‘human interest’ and humour being noted.

The brief was simple: use IBM’s Watson-powered Conversation engine to create a chatbot, integrating with Watson’s other Artificial Intelligence based APIs (e.g. tone analysis, image recognition, context-based news, etc.).
Conversation is a Natural Language Processing (NLP) engine that allows the construction of non-linear, non-brittle dialogs. It’s integrated into a wider ecosystem of IBM and Watson-based products, using the IBM BlueMix cloud platform as its bedrock, so getting off the ground is as easy as pie. It also enables integration with select external services such as Foursquare and Twilio.
Our Team:
Our idea was a mobile-based dating assistant called wingApp! The idea was that you could enter things like ‘He comes from Italy’ and it would suggest news stories from Italy, or something like ‘she really likes me’ and it would suggest nearby venues to move on to. All this was done in a very tongue-in-cheek way, with our own little mascot picture; the app would even give you a ‘get out’ call on your phone if the date went south!
Each of us had our own role: one of us did UX design (including the mascot!), another integrated with Facebook Messenger, and so on. I managed to get the hang of the Conversation dialog and training engine very quickly, so I was charged mainly with turning the team’s ideas into plausible statements and responses.
Conversation In Detail:
The Conversation platform works by looking for Intents in the user’s input, matching any Entities that it recognises, and responding accordingly. So the input phrase ‘She works as a Doctor’ could match the Intent of ‘looking for something to talk about’ (as in the user intends to find something related to their date) and the Entity of ‘Doctor’.
The main trick here is that the Intents are not hardcoded phrases; rather, you give Conversation a list of example phrases for an Intent, like so:
  • She works as a Doctor
  • She’s a Doctor
  • I think she’s a scientist
  • He’s a Musician
Eventually Conversation learns the semantic meaning behind these phrases, and variations thereof, so if it encounters a new permutation of the phrase, it still reads the intention correctly. It has its own built-in dictionary of words and idioms to draw from too; for example, I trained it to tell a joke in response to “I want to make them laugh” or “I need a joke”, but it also recognised the intention from “I want to crack her up!” (a sketch of how such training examples might be grouped follows below).
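To give a feel for what the training data looked like, here is a rough Python sketch of how example phrases might be grouped under Intents. The intent names are my own labels and the dict is purely illustrative; it is not the actual Watson workspace or training format.

```python
# Rough sketch of grouping example phrases under Intents for training.
# Intent names and structure are illustrative, not the real Watson format.

training_intents = {
    "find_topic_of_conversation": [
        "She works as a Doctor",
        "She's a Doctor",
        "I think she's a scientist",
        "He's a Musician",
    ],
    "lighten_the_mood": [
        "I want to make them laugh",
        "I need a joke",
    ],
}

# Conversation trains on examples like these and then generalises, so a new
# permutation such as "I want to crack her up!" still resolves to
# "lighten_the_mood" even though it never appeared in the training set.
for intent, examples in training_intents.items():
    print(f"{intent}: {len(examples)} example phrases")
```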
My Thoughts:
This is conceptually simple to grasp, and works well. Instead of relying on a linear flow for the conversation, Conversation finds the Intent behind the message and uses that as its starting point. Further interactions and conversational branches can then be followed.
In my experience the engine worked well in this regard. Obviously we couldn’t explore its true potential in the 48 hours that we had to build the chatbot, but once the basics were established we made good, steady progress.
The actual interface was very much in ‘beta’, but the underlying system gave us plenty of scope for more advanced things; for example, there is a shared context object between Conversation and other parts of the Watson platform, so we could easily pass information between the various services. Integration with Facebook Messenger was obstacle-free as well.
One problem I did encounter was that the training cycle (which was kicked off every time you entered a new Intent or corrected a mismatched one) was long, about 20 minutes. This probably wasn’t helped by the fact that there were 15 other teams in the same building doing the same thing. I have used other chatbot platforms with similar concepts where there was no learning time, though I did feel that the quality of Conversation’s learning was better.
Summary:
So what did I think overall? If I were to build an NLP-based chatbot, Watson Conversation would be my first port of call, just for ease of integration with the other Watson services, the wider BlueMix workflow and toolset, and the ease of setup. Wit.ai (another chatbot platform) seems more integrated into Facebook, and is possibly quicker to get off the ground with, but it doesn’t have access to the wider IBM ecosystem, and doesn’t work as well in my (limited) experience.
The Human Factor:
One thing that is missing from my description above is that our chatbot was funny and engaged with the user ‘in a human way’; this was picked up on by the judges, and indeed is arguably the whole reason behind Conversation and NLP.
‘Information Technology’ is, at the end of the day, always about trying to solve a human problem, and both Developers and Business managers would do well to remember that.

My Presentation at OWASP London

I recently had the honour of presenting a talk at OWASP London, held at Bank in London. The talk was originally aimed at my company’s ground troops (developers, product managers), but it also clearly presents a way of organising a security team; this may sound trivial, but the way a security effort is organised has a big impact on how effective it is. My current project (about 120 people across seven teams) has approached this by nominating security champions in each team, who manage risks using their own separate, cross-team project (to avoid workflow issues), and by having a unified ‘Security Council’.


Watch the video here!

The presentation was warmly received, and a number of good questions were asked, so it’s worth viewing the Q&A!

AppSec: Beat The “It’ll Never Get Fixed” Blues!

We’ve all been there.

We’re busily going about our work when suddenly we notice something odd. Maybe it’s a badly thought-out permission policy, maybe it’s some unprotected URL configuration that could be used to get an EC2 instance to spill its guts, but whatever it is, it smells.

But you’re knee-deep in your own task, so you wearily go to JIRA, click ‘Create New’ and enter the most perfunctory ticket description possible. And off it goes, your new little ticket, to reside deep within the project backlog, collecting crust with the other non-functional-requirement tickets. Hey, business is business, and business needs features!

Or maybe you don’t do anything at all. ‘Cause why bother?

Either way, the problem never gets looked at, never gets evaluated and never gets fixed.

Not every security risk is an immediate threat, nor does every problem need work scheduled to fix it. Knowing the risk, and getting the appropriate sign off when that risk is acceptable, is just as good if not better than trying to fix the problem; no system can be one hundred percent secure, and it’s up to the business to decide what assets are valuable to them and what level of security they need.

The problem is getting a good workflow that allows you to have an overview of risks for a project, and to escalate and action them effectively. This is especially important for multi-team projects, where each team may have their own Scrum or Kanban flow, their own managers and reporting structure, and their own internal politics. Your usual Agile setup is also geared toward concrete bits of work, not considering/escalating/actioning abstract risks.

Therefore it’s useful to have a separate appsec project or workflow that’s outside of the day-to-day workflow, where risks can be logged and processed, with lead developers (or security focused developers) and managers invited. A typical workflow would look like this:

  • Risk ticket raised by one of the team’s developers; this ticket should be as much in the abstract as possible (at this point it’s wasteful to start looking at the concrete fix).
  • Ticket reviewed by the lead developer, e.g. as part of a weekly review.
  • The lead talks to the product owner or similar manager about the risk posed, and possible solutions.
  • The product owner decides whether it’s worth scheduling the work for the fix, or whether it’s an acceptable risk.
  • If it’s an acceptable risk, the owner closes the ticket.
  • If it’s something that should be fixed, then a new linked ticket is created in the team’s actual ‘day-to-day’ workflow project, with concrete specifications on how to fix it, along with a priority. This is a work ticket, separate from the original Risk ticket.
  • This new ticket is then fixed as part of the usual development process.

This may seem like overkill, but it really helps with larger projects especially when applied across teams, as it bypasses a lot of the usual siloing that you get in big projects and allows for cross team threats to be addressed.

More importantly, it empowers all developers to raise security concerns that they spot, safe in the knowledge that these concerns will be seen, evaluated and dealt with appropriately.