19
Aug 16

Developing our technology network and approach

This was originally posted on the Government technology blog


I’ve been with GDS since its very first days, leading its technology communities, shaping our approach to architecture and development, providing assurance and direction across a number of programmes, and building connections across government. I’m really excited to now be taking on a clearer role developing our approach to technology leadership, open standards and architecture, and I want to tell you a bit more about this work.

Technology leadership

Over the last few years new connections have formed across government as new people have joined, more people have got involved in digital transformation, long-time civil servants have connected with new ways of working, and our relationships with suppliers have become more open.  

My focus is to help these connections bring even deeper change, making sure the government has the right technology to meet new challenges.

Our technology community will work more closely with our Digital Leaders and Data Leaders groups, recognising that we focus on different dimensions of the same problems and that the solutions we come up with will invariably require expertise from all three groups.

We’ve still got a lot of older technology that’s expensive to change and we’ll work together to move away from that, but we’re also now in a position to start focusing more on where we want to be and what new technologies allow us to do.

Within that context we’ll work with the Technology Leaders Network to refine and update our strategy, policy and guidance on our core building blocks like hosting, networking and effective use of cloud tools. The goal is to accelerate the move to technologies that respond to users’ increased expectations.

Technology Leadership isn’t solely about what Chief Technology Officers (CTOs) decide and do. CTOs will need to make some big decisions about our future direction, where we’ll invest and what risks to take. To be effective, though, we need to empower technology leaders at all levels of government and that will be a growing emphasis in our work.

We’ve already run social events and unconferences to connect up architects and developers working in government and we’ll build on those to look at a variety of ways to connect up teams and practitioners to help respond to challenging topics.

Open Standards

After bringing in the right skills, careful use of Open Standards (standards for how to connect software together or how to structure data, adopted in line with our Open Standards Principles) is one of our most powerful tools for taking control of our technology destiny.

Too often we see organisations locked into specific products from specific vendors. The organisations accept the lock-in because the products offer the only way to integrate with other products’ bespoke or proprietary interfaces.

We don’t just get locked into one product or vendor; we get locked into an ecosystem, and the cost of change becomes exponentially greater.

Open Standards break those locks. Truly open standards emerge from processes where domain experts build their understanding of how to break a problem into the right pieces, and then decide the right parts for the solution.

They open up markets by allowing new entrants to develop their products without worrying about complex licensing arrangements, and by making it easier for organisations to adopt different pieces at different times. They allow us to really understand the architectural dependencies in our systems, manage the cost of change, and move faster.

Over the past few years we’ve used the Open Standards Principles to identify and adopt a number of standards in government. We’re now engaging a broader community in that work to help teams across government understand the role of standards, identify appropriate standards, engage in the development of standards in a well-coordinated way, and where necessary get agreement for standards across government.

Dan Appelquist has been helping lead on this work over the past few months, building communities around technical and data standards within government.

Taking a fresh look at problems

We often take incremental approaches to changing technologies. This means it’s tough to dedicate the time to considering problems in genuinely different ways.

Our work with the Technology Leaders Network will sometimes identify areas where a loose network of government teams won’t be able to make the progress we need and instead a small, focused team is needed to explore, prototype and find new approaches. This is not so much an innovation team approach, but more some space to take a fresh look at a problem.

Sometimes the focus will be giving a little extra support to unblock projects people are already doing, providing a hub for teams assembled across government, or helping with some commercial arrangements. It’ll depend on the particular problem. We’ve started small, supporting work on improving the security of service.gov.uk, but we’ll be looking more broadly soon.

Open source

Alongside Open Standards, the past few years have seen us transform our relationship with open source software. The Service Standard requires new code to be released under open licenses, and many contributions have been made to open source projects.

We now want to build on that work with a more concerted approach to open source. This approach will involve building collaboration and reuse internally and making higher impact contributions to the wider open source community. There’s an enthusiastic and committed group of developers across government ready to work on this and we’re currently recruiting someone to lead and facilitate that.

Government as a network

To accelerate our progress we need to find ways to work across organisational boundaries, since these boundaries almost always involve trade-offs. Thinking of ourselves more as a network and less as a hierarchy is vital to ensure we connect the right people and expertise, understand where the real problems lie, and move ahead together.

We’ll be writing to technology leaders and other members of our leadership networks over the next few weeks with more information on how to get involved with this approach. We’ll also blog regularly about the work as it develops in order to make sure that it’s as open and accessible as possible.


13
Jul 16

Why ‘security says no’ won’t cut it anymore

This was originally posted on the GDS Technology blog.

GDS poster displaying the words 'Trust. Users. Delivery.'

I spoke recently at the Business Reporter’s Data Security in the Cloud event about how security has changed to face the reality of the modern internet era. The old world of assurance and compliance and ‘security says no’ won’t cut it anymore. Security thinking has to be holistic and take into account users, culture, context and behaviour not just technology.

This post will summarise some of the areas I discussed in my talk, detailing these modern realities and how we manage the changing security landscape.

Thinking beyond the cloud

The GDS remit has always been about digital transformation, which our former colleague Tom Loosemore recently expressed as “applying the culture, practices, processes and technologies of the internet-era to respond to people’s raised expectations.” Note the lead-in on culture, practices and processes before we get to technologies.

All too often, when responding to changing security expectations, there’s a tendency to talk about the cloud and related IT approaches rather than considering the context of broader change that’s happening to organisations.

We need to think about what’s changing across the whole environment, rather than simply thinking of cloud security in isolation. For example, while adopting cloud technologies, we’ve also seen the ascendancy of continuous delivery practices, a shifting skills profile in our organisations and a move to being dependent on a range of small suppliers and contracts rather than large outsourcing contracts.

Securing while trusting teams

In the fast-paced internet era, we need to move at the pace that’s expected of us and that means devolving lots of responsibility into focused teams. Teams need to be as autonomous as possible and effective teams need context. That starts with understanding what everyone is trying to achieve and ensuring they have the right tools at their disposal to deliver.

Securing in this type of setting can’t involve blanket lock-downs. This just won’t work; if we block the tools people want to use, we will only get more Shadow IT (people tend to circumvent controls to get their jobs done more efficiently).

Instead security must be proactive in helping teams work at speed, while selecting and using the most intuitive and secure tools available.

Transparency is essential

Tom Read, who led the Cabinet Office technology transformation (now Group CTO at the Department for Business, Innovation and Skills), has talked about an experiment his team ran to measure the number of people using non-work devices. The team installed some Wi-Fi access points around government buildings and then kept track of how many people connected their personal devices. This let them identify people whose needs weren’t being met by their official IT. The team could then talk to those people about how new tools would help them.

Where there are trade-offs to be made between how people want to work and what makes for secure behaviour, we can explore those with the users and find the best design. In the old way you might have a secure system that gives you a degree of confidence, but the mass of shadow IT and users working around your security policies means poor visibility into the real security of your system. That’s a natural result of a blanket approach and we need to do better.

Apart from more personalised and targeted security policies, we need tighter auditing. We need to know, for example, who is spinning up virtual machines and whether someone has made changes to a server. If we know that, we have a better chance of determining whether a change is appropriate or whether it’s evidence of tampering. Previously a lot of the work we want to track was done by sysadmins, but now the majority of it can be managed through automated auditing systems. Configuration management and infrastructure automation tell you whether there is any deviation in your infrastructure that could indicate compromise. The use of these systems can also vastly reduce the number of people needing direct access to a system, which can be hard to track.
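
At its simplest, that kind of automated check can be very small. The sketch below is a rough illustration only (in Python, with hypothetical file paths and a local baseline file): it records hashes of a few configuration files and reports any deviation on later runs. Real configuration management tooling does this far more thoroughly.

```python
# Minimal sketch of configuration-drift detection: record a baseline of
# file hashes, then flag anything that has changed since. The watched
# paths and baseline location are illustrative, not a real setup.
import hashlib
import json
from pathlib import Path

WATCHED = [Path("/etc/ssh/sshd_config"), Path("/etc/sudoers")]  # example files
BASELINE = Path("baseline.json")

def snapshot():
    """Hash each watched file so later runs can spot unexpected changes."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in WATCHED if p.exists()
    }

def check():
    current = snapshot()
    if not BASELINE.exists():
        BASELINE.write_text(json.dumps(current, indent=2))
        print("Baseline recorded.")
        return
    baseline = json.loads(BASELINE.read_text())
    for path, digest in current.items():
        if baseline.get(path) != digest:
            print(f"DEVIATION: {path} has changed since the baseline was recorded")

if __name__ == "__main__":
    check()
```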

Auditing’s not just for managing infrastructure; it works at the software-as-a-service level as well. The best cloud productivity tools present us with opportunities to get logs of activity and an understanding of who has copied which documents, who has shared what with whom, and so on. We can get useful data about what’s happening in a way that’s not intrusive to our users and review logs to see unusual patterns.
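
As a hedged sketch of what that log review might look like, the snippet below assumes a hypothetical CSV export of sharing events (user, document, shared_with, date) and flags anyone sharing an unusually large number of documents outside the organisation in a single day. Real SaaS audit APIs and export formats vary, so treat the column names, domain and threshold as illustrative.

```python
# Sketch: scan a hypothetical CSV export of document-sharing events and
# flag users sharing unusually many documents externally in one day.
# Column names, domain and threshold are illustrative assumptions.
import csv
from collections import Counter

THRESHOLD = 20                      # external shares per user per day
INTERNAL_DOMAIN = "example.gov.uk"  # placeholder organisation domain

def review(log_path="sharing_events.csv"):
    counts = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            # Expected columns: user, document, shared_with, date (YYYY-MM-DD)
            if not row["shared_with"].endswith(INTERNAL_DOMAIN):
                counts[(row["user"], row["date"])] += 1
    for (user, date), n in sorted(counts.items()):
        if n > THRESHOLD:
            print(f"{user} shared {n} documents externally on {date} - worth a look")

if __name__ == "__main__":
    review()
```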

There’s also transparency needed from our providers. Now that teams are working across a global network rather than within carefully controlled business networks, we need to gain certain guarantees from cloud hosting providers and dig deep into their security policies. For instance, providers can supply us with a guarantee not to look inside our virtual machines or containers, and we can ask what data encryption mechanisms they have in place to avoid them seeing our data.

Finally, design for privacy. There are differing public attitudes on privacy and it’s not clear where public expectations will go. At the moment though, principles of good privacy design revolve around making things transparent, ensuring clarity of ownership of data, providing the subjects of data with control, and minimising the amount of duplication and sharing. These are also important tools for building secure systems. If there’s one area to watch in the next few years, it’s privacy engineering.

Assessing cloud providers

When we talk to a hosting provider we don’t want to do a complete security audit ourselves. We want to know where they’ve applied industry best practices and how they can assure us of their methodology.

This means establishing the right level of relationships with providers. When entering into security conversations as government, it’s all too common for us to be met with layers of the provider organisation: first public sector sales, then compliance, and so on. We should instead first be talking to the actual architects and engineers. We want to talk to them about what systems really do and how they’re composed. Then we can be sure we’re on the right path.

Our colleagues in CESG produced these really helpful principles on cloud security but we still need to take care in how we assess providers against them and apply them internally ourselves. The security practices for our primary hosting provider needn’t be the same as for our shared calendaring app. Think proportionality, think trust, think context – we’re still working on how we apply this thinking ourselves and we’ll blog more on this soon.

Apply the principles incrementally and proportionally. When you start, there should be just a basic sanity check. For example, if trialling some Software as a Service, you may look to see whether the provider has a clear privacy policy, whether it offers obvious points of contact, whether it requires good passwords, and whether it offers everything over strong HTTPS, but you wouldn’t want to go to the effort of understanding how every element of the system is tested or how incidents are handled. As you decide whether the tool is what you need for a given task, you’ll be able to understand whether there are areas you should probe more deeply.
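
Some of those first-pass checks can even be scripted. The sketch below is a rough illustration (in Python, with a placeholder provider hostname): it simply confirms that a provider serves its pages over HTTPS with a certificate that validates, and that plain HTTP redirects to HTTPS. It’s a sanity check, not an assessment.

```python
# First-pass sanity check: does the provider serve valid HTTPS, and does
# plain HTTP redirect to it? The hostname is a placeholder, not a real
# provider; a failed certificate check will raise an error from urlopen.
import urllib.request

PROVIDER = "example-saas-provider.com"  # hypothetical provider

def check_https(host):
    # Certificate validation happens by default in modern Python.
    with urllib.request.urlopen(f"https://{host}/", timeout=10) as resp:
        print(f"HTTPS OK, status {resp.status}")
    with urllib.request.urlopen(f"http://{host}/", timeout=10) as resp:
        final_url = resp.geturl()
        if final_url.startswith("https://"):
            print("HTTP redirects to HTTPS")
        else:
            print(f"WARNING: content served over plain HTTP at {final_url}")

if __name__ == "__main__":
    check_https(PROVIDER)
```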

Preparing for incidents

However much we prepare, there is always the possibility of an incident. In order to respond quickly, the team on the ground needs context to make decisions, clear leadership and an understanding of their communication channels.

Incident management is often perceived to involve managing at speed during chaos, but learning from emergency response teams in other realms, we have recognised that response teams that constantly practice, run drills and rehearse their roles are significantly more effective.

Teams should be running red team exercises and game days to rehearse incident management practices, and after each incident we recommend that a blameless post mortem is conducted to identify whether there are actions that could improve the team’s ability to respond.

All too often we think about incident management through the lens of dealing with the moment. It needs to follow through into action to address systemic issues and this needs to be done proportionally and calmly. That’s something that’s very much in scope of the new National Cyber Security Centre (NCSC).

I ended the talk reiterating that “cloud” is just one area of change impacting our organisations at the moment. When considering security, we need to think about the wider changes taking place to how we work and what we expect from our technology, etc. We’re continually developing our thinking in this area and we’d be interested in your feedback.

We also plan to update security sections in the Service Design Manual soon on areas such as cloud and information security. In the meantime, the following resources may help:

Risk management in digital projects

Government cloud security principles

Principles for building secure digital systems


05
Jul 16

Introducing GDS’ architecture approach and principles

This was originally posted on the Government Technology blog.


With the number of visitors we have coming through GDS we’re often asked to present various areas of our work, particularly architecture. We’ve usually kept the presentation pretty informal. People mean so many different things by architecture that it’s important to take some time to understand our audience before we dive into explaining our approach.

Recently though, we’ve noticed some common patterns emerging and have begun pulling our thoughts together as an introductory slide deck.

We are sharing the thinking behind our slide deck here. In sharing this, we want to be very clear that this is a snapshot of a conversation starter. It’s very easy for a piece of work like this to switch from “useful starting point” to “expected, fixed approach” but used carefully it will make it easier for us to reduce the learning curve for new members of the team and present our approach.

The context

Last year Dave Rogers of the Ministry of Justice wrote a great blog post talking about their approach to technical architecture. For me the main insight in that post was captured in the comment:

The emergence of a mature infrastructure-as-a-service and platform-as-a-service marketplace has transformed compute, storage and networks into utilities. With this the costs associated with major architectural changes has dropped, in some cases, to near-zero.

Technical architects who are able to take advantage of these changes are now working with a single medium: code. The physical infrastructure, the manual processes and their constraints have largely gone.

The world in which we’re operating allows more and more elements of our systems to be defined in software, and our profession has developed practices that make that software easier and cheaper to change.

At the same time, not only are more and more of our activities digital, but users have come to expect that great services will be regularly improved. Responsive and regularly improved services engender trust, without which government services fail.

That’s not to say that all architecture is software; at GDS we have architects working on a variety of more “infrastructural” projects: Crown Hosting, the Public Services Network, blueprints such as secure email for Common Technology Services, and so on. Over time, the interfaces between the hardware and software elements are becoming clearer and more standardised, with software defining more and more.

The approach

Within a more rapidly evolving environment, architecture has to be a team sport. That doesn’t just mean “architecture teams”, it means architects and architecture as part of a much broader team.

Teams build systems and services at pace using better tools than we’ve ever had before. They test the behaviour and performance of our systems every step of the way and iterate as we learn more about our users, their needs, and the interactions between systems. We’ve recognised that to provide great services we need collaborative, diverse and multi-functional teams.

So too, the economics of how we approach common components has shifted. It is easier than ever for teams to build minimum viable components of their own and then swap them out as more mature alternatives emerge.

Common capabilities can still speed up delivery and improve services, but the best components emerge rather than being prescribed in abstract organisational designs.

The GDS architecture community always starts with the same Design Principles as all other disciplines working on delivering digital services, and draws on other great work like the Cloud Security Principles, Open Standards Principles and the Technology Code of Practice, but we also have some working principles of our own:

Start small and self-contained

Focus on understanding the user needs your service needs to meet and how it will do that. Consider whether you can do this using existing tools to keep your technology simple.

The unit of delivery is the team

When we work in disciplinary silos we can easily reinforce our biases and cause friction with complex hand-offs. Make space for the team to do the discovery and participate in it fully.

Work with the grain of the internet and the web

The web is the most successful technology platform we have, building from simple protocols to support incredibly large-scale applications. It’s our starting point for the vast majority of what we do. That leads us to federated and distributed approaches, and to architectures that make use of resources across the network rather than tightly-integrated technology stacks.

Platforms, standards and re-use emerge

Design for concrete needs, not for abstract reuse, but look sideways to see opportunities.

No undifferentiated heavy lifting

Our effort should be put where we really add value. That’s why we have a Cloud First policy and focus on open source software.

Assume evolution

Software is never finished but different elements evolve at different paces. Allow for this in your planning and your management, providing clear interfaces between components and making sure each of them can be changed at the appropriate pace.

It’s not real until it has users

A project isn’t a success until its users think it is. The only good technology is the technology used by real users.

Government is rarely unusual

Very little of what we do operates at significant scale, and most of our challenges are common. That should inform how we work, and where we learn from, seeking out the best examples from across our profession.

Design for operability

Users need available and resilient services. This requires well-maintained technology.

Common architectures and mandated components

There are areas where GDS is developing common architectures and has controls in place to ensure the use of common components and platforms.

In the slide deck we state:

There are reasons to mandate use of certain components, but they’re never about conformance to a technical roadmap.

GOV.UK Verify is a good example of this. Government services that need the levels of assurance GOV.UK Verify offers are expected to use GOV.UK Verify.

That’s not because of any technical purism saying we should only have one; it’s about a collection of other factors, such as GOV.UK Verify’s architecture using a federated identity model to help drive industry innovation (people can choose from a number of identity providers to handle their initial registration when they sign in to government services) and to avoid creating a central database of personal data within a single supplier or within government. GOV.UK Verify has also been rigorously tested with users.

As time goes on and more common components emerge, we will need to think broadly about the best ways to ensure we can take advantage of platform effects, by concentrating demand in a way that lets us better take advantage of changing technical and commercial options and making things simpler for users.

Where the technical considerations are dominant we need to ensure that we are always creating “services so good people prefer to use them even where alternatives are available (or could be made by the team)”.

We have shared our slide deck as an open Google file. We do not recommend using it as a stand-alone presentation; it is purely intended to give an introduction to our thinking at this time. If you want to discuss the presentation, contact the technical architecture team.

All our work is done out in the open so to keep up to date with our way of thinking check out:


05
Feb 16

Future of Government ICT 2016

Last week I delivered the opening keynote at Salford University’s Future of Government ICT conference. Sadly I was only able to be there for an hour or so and didn’t get a chance to hear any of the other speakers, but it was fun to get a little time there and to talk with a few participants.

The talk was trying to jam together an update on what GDS is working on and some thoughts on what’s going on in the tech world more generally. That was a lot to cover in half an hour!

What I was trying to do in the latter section was unpack what I’ve meant when I’ve used the familiar GDS line that “the era of Big IT is over”.

Over the past few years many of the things that used to slow down IT delivery have dropped away – the web gives us a lot of building blocks, cloud-based services allow for rapid provisioning and increasingly provide other components we can build on, and we have improving tools for managing change with continuous delivery extending into infrastructure and security tooling becoming part of the deployment process. In government, we’re also finally beginning to have procurement options that can keep up.

That context provides the opportunity. With that comes pressure from the fact that most people’s perception is that technological change is accelerating. We expect to see services improving rapidly and if an organisation can’t do that they lose our trust. For governments trust is the primary currency and we can’t allow our poor technology to jeopardise that.

When you combine that opportunity and that pressure you can take a different approach to technology. That’s where a lot of recent talk about “business/IT alignment” has been coming from. I’m not a huge fan of that phrase. Rather than bringing things together under a new banner (bringing together two departments is rarely the right way to frame a change), our focus should be on what the two can do together that they couldn’t do before.

For us that’s about user-centric technology-enabled service design. It’s continually improved services that meet user needs based on the best tools available to us, tech or otherwise.

I concluded by bringing the focus to three areas, though that was a bit of a cheat as I really had two things under each.

  • Focus on data. One of the hardest things for us to fix is the data we store that should be consumed by services. We need to do that. As we are able to move faster with more disposable resources, we also need to make sure we’re getting the monitoring, the metrics and the management information to understand what’s going on.
  • Focus on services. Mainly that’s about the services we’re providing for our end users, but we should also be thinking about how we provide technology and technologists as a service.
  • Focus on people. Again, we do this for our users. But we also need to recognise that we need more skilled people, and we especially need more diverse insights which means we need to address our industry’s awful track record on diversity and inclusion. Our biggest challenge as tech leaders is to build up the people and skills we need to work in new ways.

There’s loads more to pull out from those three points and I’m hoping to get a chance to expand on all of them over the next few months. For now, you can find the slides on Speakerdeck.


26
Jan 16

Working out how to open up the Register to Vote code

Originally posted on the Technology at GDS blog.

Over the course of the past few years many teams across government have begun publishing their code under open source licenses. That’s a change that’s been pushed by the Digital by Default Service Standard but it’s just as much a result of the change in development culture that’s permeating the civil service, with teams eager to share their work.

Despite that, releasing code is still a challenge for many teams and it can be one of the trickier areas of service assessments. Recently, Alex wrote about the work that the GOV.UK team did to open up their infrastructure code. Another project GDS has been deeply involved in that has been looking at opening up its code is the Register to Vote service.

Register to Vote started out in private because it was designed to support a policy that hadn’t yet been announced. Between the need to move at pace and the complexity of connections with some other parts of government, the team haven’t to date been able to go back and really address that.

The service is now maintained outside of GDS but we continue to liaise with the team. In order to support the opening of the code, we set up a workshop with representatives of the service team, GDS and CESG to work through the service piece by piece and think through what needed to be considered.

The approach

We deliberately had a mixture of people in the room, some with deep knowledge of the service, others who were new to it but who have a solid understanding of software development, architecture and security. That mixture made sure there were people who could answer detailed, specific questions and had been thinking about this for a while, and others who could challenge assumptions and bring fresh perspectives.

  • We started out by discussing some broad considerations that apply to most projects. That helped us set the scene and also to understand each others’ perspectives.
  • We then went step by step through the architecture of the system, looking for any particular issues around specific components, making notes as we went along.
  • Finally we went back through and looked at a few issues that had come up repeatedly. In this case the main thing that came up here was around infrastructure and test suites.

This post is based on notes I took through the session, slightly reformatted for clarity.

Why do we want to open the code?

We started by reviewing the reasons that we would want to open the code in the first place. There’s a clear requirement to do so to meet the service standard but it was important to us that everyone in the room had more understanding of why that requirement is there. We quickly listed out four reasons for opening up as much of our code as possible (in no particular order):

  • Clarity about intellectual property. Opening our code forces us to think about the licenses applied to it and to know exactly who has what rights. This helps us reduce vendor lock-in as we have flexibility to change suppliers or move to in-house teams without needing to pay to move the code.
  • Transparency and auditability. Where the work is being done in public it’s immediately clear what work is being done, and it also makes it much easier to consult with a wider community for input, whether that’s experts in forums like StackOverflow, vulnerability researchers, or others.
  • Provide examples for other teams. Whether or not our code is directly reusable it’s helpful for teams to have examples of how other people have solved common problems. Sometimes that’s government-specific, but often it’s also just about developing the knowledge across the development community.
  • Reusability for other people. Some of the code we use will be directly useful to other people. It can be hard to invest the time in extracting that and packaging it, but working in public gets us thinking about that more clearly, and gives other teams the opportunity to say which bits they’d like to make use of.

It was interesting to note that for two of these, being able to see the process you’ve followed is useful as well as the end result.

It’s much easier to start in public

As we discussed the particular challenges for Register To Vote we kept coming back to the fact that it’s much harder to make something public after the fact than to start out that way. When you’re working in private it’s easy to be a little lazy about what you disclose in the code and the commit messages you make. You can often make a set of assumptions about who will be reading it and fall into jargon that makes sense to them rather than to a wider group. Those assumptions aren’t safe even when working in private as you never know who will be working on the code in future, but working in the open should challenge them more quickly.

Taking an existing repository of code and going back through the history to review every past commit can be a hugely time consuming challenge and it’s easy to miss something you didn’t mean to make public. As the GOV.UK team explained, careful use of our tools can help with that but it can still be very time consuming. With all that work to do in one go, it can be difficult to prioritise the effort when you also have changes to make based on what you’re then learning about your users (or to address technical debt).

If you start out in public then you can think about that as you go along. If not, you have to make the choice between investing the time in sanitising the history or taking a snapshot and losing all that history, which may also lose some of the value. It won’t always be possible to start in public (as it wasn’t for Register to Vote) but we’d encourage all teams to at least try and think as if their code was public and conduct code reviews accordingly to avoid problems later on.
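
One practical way to “think as if your code was public” is to routinely scan your history for things you’d never want to publish. The sketch below is a rough, hedged example: it shells out to `git log -p` and greps for patterns that often indicate committed credentials. Dedicated secret-scanning tools do this far better; the patterns here are purely illustrative.

```python
# Rough sketch: search the full git history for strings that often indicate
# committed credentials. The patterns are illustrative; purpose-built secret
# scanners are far more thorough.
import re
import subprocess

PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key id shape
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    re.compile(r"(password|secret|api_key)\s*[:=]\s*['\"][^'\"]+['\"]", re.I),
]

def scan_history(repo="."):
    log = subprocess.run(
        ["git", "-C", repo, "log", "-p", "--all"],
        capture_output=True, text=True, errors="replace", check=True,
    ).stdout
    for lineno, line in enumerate(log.splitlines(), 1):
        for pattern in PATTERNS:
            if pattern.search(line):
                print(f"possible secret at history line {lineno}: {line.strip()[:80]}")

if __name__ == "__main__":
    scan_history()
```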

Considering security

Security is often the word used to caution against opening up code. As we worked through the Register to Vote architecture we kept coming back to what security issues could be introduced or exacerbated by opening up the code.

GDS has blogged before about when it’s okay not to open source code but the areas we particularly focussed on here were:

  • Code that you are not confident in the quality of (eg code that is yet to be reviewed). This should be the easiest to manage as code that hasn’t been reviewed shouldn’t be deployed, but it’s worth checking that your processes prevent that
  • Bespoke functions that are designed to reduce fraud or mitigate attacks, though quite what that means will vary from service to service
  • Configuration rules for security appliances. It may be worth sharing heavily used rules with the wider community, but generally you won’t want to share them all.
  • Code that is known to bring your infrastructure down (eg certain types of performance testing scripts)
  • And of course, our keys, usernames and passwords

There is certainly a chance that opening up your code, particularly if you open it all, can make potential attackers’ lives easier – they can conduct their reconnaissance and craft their attacks on local copies of the system rather than having to test against your live system and potentially triggering your monitoring to give you advance warning. Hiding your code isn’t a strong defence and our aspiration should always be for the system as a whole to be solid enough that this isn’t a serious threat, but this is very much an area of risk that should be flagged and considered carefully.

We talked at length about protective security (monitoring, fraud detection, etc) and approaches to performance and security testing. When working through these things it’s very important to put yourself in the mindset of an attacker and ask: is there anything I’d learn from the open code that would let me bypass triggers, fraud detection and so on? Tests are a particularly thorny issue as good tests will push systems to their extremes and may well reveal the elements you consider fragile. While your development focus should be to remove that fragility, that does take time. As with everything else we see value in making tests public, but sometimes it’s better to open source the framework you use for testing and a few examples, and allow for holding back some of the more invasive tests.

And when, as with Register to Vote, you’re calling out to third-party systems that have private interfaces, it’s polite to discuss with those third parties what they’re comfortable with you opening up. If they’re not comfortable, it may be worth talking to other teams who are likely to be using the same interfaces and either working together to change the approach, or working out some other way of collaborating on the code.

Infrastructure as code

We talked for quite a while about “infrastructure code”, by which we meant everything from puppet modules to firewall configuration.

The general sense here, as with a lot else, was that we could open up most of the code but that we’d be more deliberate about extracting specific modules. Partly that’s because we could more immediately see the potential for re-use of some of the puppet modules we’re looking at, but it also seemed like releasing the puppet code as a single lump would expose a particularly large amount of information about the configuration. Clear, strong separation between code and configuration is always good practice, but absolutely vital when you’re planning to make that code open. Careful attention to making your puppet code modular is really helpful for making sure you think that through. It’s worth noting that while GOV.UK opted to open up their main puppet repository, they’d extracted and released a number of modules beforehand.
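
The principle applies whatever the tooling. As a hedged, language-agnostic illustration (shown here in Python rather than Puppet, with made-up setting names), keeping environment-specific values out of the code you intend to publish can be as simple as reading them from the environment at run time:

```python
# Illustrative only: read environment-specific values from the environment
# rather than committing them alongside code you plan to open up.
# The variable names are invented for the example.
import os

class Settings:
    def __init__(self):
        # Required value with no default: never lives in the repository.
        self.database_url = os.environ["SERVICE_DATABASE_URL"]
        # Optional value with a safe, non-sensitive default.
        self.log_level = os.environ.get("SERVICE_LOG_LEVEL", "INFO")

settings = Settings()
```

The open repository then contains only the shape of the configuration, not the values that describe your particular deployment.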

Firewall configuration, and in particular the rules used in a Web Application Firewall, also seemed like an area to be particularly cautious as they may well contain details of specific attacks (and sometimes attackers) that would affect our ability to defend the system. That said, most of these tools have sets of common default rules and if we discover useful new rules that aren’t deeply specific to our systems there’s no reason that we shouldn’t contribute them back to the community in some way.

The wider context

Things that sit outside your specific code also have an impact on how to approach opening up your code.

We need to be confident that our code will be managed with integrity, particularly if we decide to make the public version of the code the definitive one. If we’re using code hosting and management tools that are run by other people, are we confident of their processes and that only agreed changes will make it in? The Cloud Security Principles apply here as much as anywhere.

Working in the open also raises the importance of being careful about how you manage your dependencies and keep them up to date. If a vulnerability appears in one of the third-party libraries you use, will you know about it and be able to update your code before someone can discover that you’re exposed via simple use of a search engine?

This is another area where being open doesn’t create a new threat but it does potentially exacerbate it. In this case we’re confident that our patching policies are responsive enough that we’re covered, but raising it did lead to a useful conversation about safe dependency management, when we should be keeping our own repository of common packages with code review before we accept new versions, etc.
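
A small part of that dependency hygiene can be automated as a routine check. The sketch below is a rough illustration using pip’s own outdated-package listing; it only tells you what’s stale, not what’s vulnerable, so checking against a vulnerability database remains a separate step.

```python
# Sketch: list Python dependencies with newer releases available, as one
# input into keeping dependencies up to date. Staleness is not the same as
# vulnerability; checking advisories is a separate, additional step.
import json
import subprocess

def outdated_packages():
    out = subprocess.run(
        ["pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    for pkg in outdated_packages():
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
```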

Similarly, if you find a vulnerability in your own open code, you’ll need a way to make sure you’ve deployed the fixed code into the production service as quickly as possible and don’t accidentally disclose the details via commits to your open repositories until you’ve fully dealt with them. Register to Vote handle that by doing their primary work in a private copy, with the changes pushed out to a public version after a period of time. For most GDS projects we work completely in public but have scripts available that let us switch to working in private copies for a period of time should the need arise. There are trade-offs involved in either approach and it’s important to carefully consider priorities when making those decisions.

Learning from the experience

The session felt like a helpful one and is something we’d encourage all teams to do, probably at specific points in the evolution of their architecture such as when new applications or services are being added, or aligned with the Service Standard stages of alpha/beta/live.

As teams get more experienced at running the sessions they should get shorter and more focussed. This one took us quite a while as we wanted to start from first principles and were looking at an already running service piece by piece. If it were part of a regular rhythm we could focus more specifically on what had changed since last time we’d met.

The external input was really helpful and well worth pulling in from time to time, but not necessarily essential every time you do a review of this sort. In this case we actually spotted a couple of issues that the team went away to fix.

What are Register To Vote doing next?

In the short-term the IER team are planning to open source the core Register to Vote API code and associated documentation.

Once that’s done, they’ll move on to making as much of the rest of the code public as possible. That won’t happen overnight, given the amount of due diligence required to ensure that only ‘appropriate’ code is made available, but following the workshop it’s much clearer what they need to look out for, and they will work on this alongside the ongoing responsibility of supporting the live service.


20
Jan 16

Recent history of UK government and open source

Over Christmas I spoke with a team in the US government who are pulling together some work on open source policy on that side of the Atlantic. To help them I tried to document recent UK government history on the topic. Having done that it seemed helpful to publish it somewhere in case I ever need to reference it, but the GDS blogs didn’t feel quite right.

This is definitely incomplete and I know a lot of other people were doing a lot of work. There’s a clear GDS-centric slant here because that’s what I know first hand. If you spot any particularly egregious missing pieces, feel free to use the comments to add them.

In the first couple of years of our last Parliament (which ran from 2010 to 2015) there were two parallel strands of work running which eventually came together in GDS. The first was focussed on ICT/“technology”, and the second on “digital”.

The last government’s ICT strategy laid out the initial commitment to a level playing field between open source and proprietary software.

Most of the ICT strategy’s recommendations are embedded in the “Technology Code of Practice” which we still use, and which is backed up by a set of spending controls.

We started our work on GOV.UK in the open but didn’t have the support in place to really support any of that code by packaging it for others’ use. I coined the phrase “coding in the open” to describe what we were doing and blogged about that.

The Design Principles we published early in the life of GDS have been the rallying cry for openness in everything we do.

The Open Standards Principles were one of the areas where the digital and technology groups began to come together (we formally merged into GDS in 2012).

The Digital by Default Service Standard was gradually piloted over 2013 but came into force in 2014. It’s been revised a little, but point 8 has been consistent.

Our “Service Design Manual” includes some content on open source – that’s still weaker than I’d like in terms of advice and we’re working on an updated version that should begin to appear during 2016.

The project GDS has invested most in open sourcing (to date) is called vcloud-tools. My colleague Anna wrote about our process for that.

A lot of this is underpinned by procurement reform that intends to make it easier to buy smaller pieces, work to open standards, etc. Examples of that are our G-Cloud and (forthcoming) Digital Outcomes and Specialists frameworks.

I see a lot of what’s happened to date as being about laying the groundwork for what we really want to do, which is about building a community and leveraging this work to reduce duplication, better understand opportunities for consolidation (eg. providing more common platforms), and also softer things like improving our profile as an employer of technologists.

We’ll be gearing up to really invest in that in the new year, but we’re doing a similar thing around the “open standards” side and some recent blog posts start to give a flavour of that.


20
Oct 15

Sub-resource integrity at github

There’s lots of really good work going on at the moment to make the browser environment more secure. GitHub wrote up their experience of implementing one of these measures, Subresource Integrity.

These changes don’t just make users’ experiences more secure; they can have very real, direct financial benefits too. GitHub claim:

“Widespread adoption of Subresource Integrity could have largely prevented the Great Cannon attack earlier this year.”
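
For anyone wanting to try it, the integrity value that Subresource Integrity relies on is just a base64-encoded digest of the file being referenced. A minimal sketch, assuming you have a local copy of the script you intend to reference:

```python
# Minimal sketch: compute a Subresource Integrity value (sha384, base64)
# for a local file, suitable for use in an integrity="..." attribute.
import base64
import hashlib
import sys

def sri_sha384(path):
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode()

if __name__ == "__main__":
    print(sri_sha384(sys.argv[1]))  # e.g. python sri.py app.js
```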


22
Sep 15

WhatsApp’s 50 developers

Wired’s piece on how WhatsApp serves 900 million users with only 50 engineers is getting a lot of attention.

It’s an incredibly impressive feat, but it’s a shame the article focuses on their use of Erlang rather than looking into what effect the tight focus of the product has.

The language is a factor, but it seems the main reason they’re able to work with a relatively small team is that they stick to a very small set of features.


21
Sep 15

alpha.nhs.uk

I’m really excited about the work Adam and team are doing at the Department of Health and the NHS.

It’s great to see them beginning to unveil what they’re up to.


21
Sep 15

“How we ended up with microservices”

Write-up from a departing SoundCloud engineer of that company’s architectural journey.

“I am sorry to disappoint my fellow techies, but the reason we migrated to microservices had to do much more with productivity than pure technical matters. I’ll explain.”