DrupalCamp London 2014: A Review

The City University London campus was the venue for DrupalCamp London 2014 and I went along for the weekend as a delegate. This was the first conference in a while where I wasn’t helping out, speaking, or organising in some form, so it was good to just turn up and relax. I travelled down from Manchester on the Friday night and checked into my Airbnb room.

Saturday

The (pre-breakfast) keynote was from Mark O’Neill and was about the Government Digital Service (GDS) and how it is working towards making UK government IT services better and more open. Mark is Head of Innovation and Delivery at GDS and has had a hand in most of the projects the organisation is involved with. The team he runs is only a dozen or so developers, but they are producing things that are used by millions of people right now, the most prominent of which is the gov.uk website.

£14 billion a year is spent on government IT in the UK, but this arena is dominated by a small number of large suppliers. The GDS set out with 7 principles, which revolve around ensuring that users get what they need and that the systems produced are as open as possible. There is very little government presence at events like this, so Mark was here to spread the message of what he and his team are doing.


Mark’s team built a tool called the Needotron, which is used to gather requirements. With this tool they found that users have a different perception of need than is generally expected, and gov.uk was then built on the requirements gathered. The gov.uk site was built not only on open source and open standards, but also with open performance and open cost: it is possible to view the code behind the site, to see how the site is performing, and to see how much the GDS is costing the UK taxpayer.

The GDS team has a philosophy of using the right tool for the right job, and Drupal is part of the systems they manage. Several satellite sites run Drupal as it fits the needs of their users very well, and a lot of back office transformation, such as document management systems, is being done with Drupal. Drupal acts as a glue between many of the major components within government.

The next talk (after a spot of breakfast) was from VijayaChandran Mani, who spoke about the configuration schema in Drupal 8. Unfortunately VijayaChandran didn’t actually state what the configuration schema was, so I was a little lost at the start of the talk. The configuration schema is a way of describing the structure of configuration objects in Drupal 8. It uses YAML files to describe the configuration and was inspired by a piece of software called Kwalify.

All of the schema definitions are stored in the module directories under config/schema and are kept in files with the *.schema.yml file name suffix. The schema files are also used to define translation and validation information. The available data types are defined in core’s data type schema files, and any new data types must first be defined there before being used.
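To make that concrete, here is a rough sketch of what a *.schema.yml entry might look like (the module and setting names are invented for illustration, not from the talk):

```yaml
# mymodule.schema.yml -- hypothetical example of a Drupal 8 config schema entry.
mymodule.settings:
  type: config_object
  label: 'My module settings'
  mapping:
    items_per_page:
      type: integer
      label: 'Items per page'
    header_text:
      type: label
      label: 'Header text'
```

Each setting is given a data type and a human-readable label, which is what allows the same file to drive both translation and validation.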

VijayaChandran finished by showing a few modules that work with the configuration schema. One that stood out was Configuration Inspector, which uses the schema to generate forms for inspecting and updating configuration settings.

Directly after this was Rupert Jabelman with his talk Apache Solr: Beyond the search page. This was an interesting talk that looked at extending the Apache Solr Drupal module (rather than using the Search API module).

The Apache Solr module is a good starting block for Solr interaction, especially for developers. It has a number of hooks to allow people to get started developing Solr search configurations. The module allows for the creation of search pages and the use of custom filters, which use Lucene queries to restrict content in some way. Solr is really just a web service (that wraps Lucene), so it can be called with a URL, normally as a GET request. The URLs can contain the same argument more than once; not allowing this is a bit of a PHP-ism.
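As a quick illustration of that repeated-argument point (the endpoint and field names here are assumptions, not from the talk), building a Solr select URL with two fq filter parameters is straightforward in Python:

```python
from urllib.parse import urlencode

# Hypothetical Solr endpoint; a real install exposes /solr/<core>/select.
base = "http://localhost:8983/solr/drupal/select"

# Solr happily accepts the same parameter more than once -- here two fq
# (filter query) arguments restrict results by bundle and published status.
params = [
    ("q", "entity_type:node"),
    ("fq", "bundle:article"),
    ("fq", "status:1"),
    ("wt", "json"),
]

url = base + "?" + urlencode(params)
print(url)
```

A list of tuples (rather than a dict) is what allows the duplicate keys; PHP’s `$_GET` would collapse them into one value unless you use the `fq[]` array convention.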


Rupert showed how he had created a set of search result pages that show content in various ways. One interesting page contained results for many different entities, many of which linked data from one or more other entities. The page was able to show these results without loading a single item of data from Drupal; it was all based on results already stored in Solr.

This talk was really interesting and well presented. It was also very relevant to me, especially for some upcoming projects that use Solr in order to store and retrieve results.

Lunch consisted of a set of brown paper bags containing a sandwich, crisps, a bottle of water, and a cookie. It was good to have lunch with a few people I’d not seen for a while and some people I hadn’t met before.

After lunch was over I went to see Sven Berg Ryen give a talk about controlling Drupal Commerce using Rules. The Rules module provides a framework that executes actions based on events and conditions. I had used Rules and Drupal Commerce in the past, so I was hoping for some guidance and best practice on using them, especially around some of the more complex Rules configurations.

Sven attempted to show some examples of Rules in action with Drupal Commerce. Unfortunately, the local demo site that he wanted to show wasn’t working correctly, and as a result he ended up showing us examples from a recent client site. The fact that this site was in Norwegian made things a little difficult to follow. That is the trouble with live demonstrations really; sometimes they don’t work out as expected.

Barney Hammond was up next with his talk, Devops: the Next Generation. I had met Barney during lunch and chatted a little about the content of his talk and so went along to show my support (and a useless Apple remote). This was one of the best talks I saw during the weekend. It was interesting, informative, funny, and laced with more memes and Star Trek references than an afternoon spent on Reddit.

Sysadmins and developers are diametrically opposed. Sysadmins spend their lives keeping things the same and developers spend their lives making new stuff. This is where DevOps comes in: it sits in the middle ground between these two worlds. In reality developers need to know more than just pushing to GitHub, and sysadmins need to use a little programming every now and then.

The 5 tenets of DevOps are monitoring, profiling, security, performance, and automation. Barney went through each of these to show why it is important and what tools and resources he uses for it. Lots of tools and standards were mentioned, and I’ll be looking over my notes for the next few weeks so that I can get up to speed on them.

This session showed me the next stage in my career path. I’ve been getting more and more involved in system administration over the last few years, and this talk showed me the sort of things I should be doing and where I should be heading. I used to have to log into a server every time I needed to do anything, so a while ago I decided to do something about it. I now use tools like Phing and Ansible to do things on the servers, with XHProf and Munin to keep an eye on how they are performing, but I still have to log into the box when something goes wrong and I want to see what the logs contain. The next step is to use some of the logging tools that Barney mentioned to organise the system logs. Barney actually talked a bit about Ansible during the talk, so it was good to be able to chat with him about how he uses it.

The final session of the day was an introduction to concurrency with John Ennew. This was a look at how to make the Migrate module run many times faster by using concurrency and executing sections of the import in parallel. Migrate does a lot of work, but it doesn’t put much strain on the server, so by splitting the execution into sections that run in parallel, Migrate can be made to use more of the server’s resources.

To allow the Migrate module to run in parallel, John created a Drush component. With this in place he was able to show the speed increase gained by having just 2-4 processes running in parallel. He did a live demo where he imported 100 nodes into a site, which took about 2 minutes (a long time during a talk). He then increased the number of processes to 2 and reduced the time from 2 minutes to about 20 seconds.
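John’s Drush component itself isn’t shown here, but the underlying idea — split the set of source IDs into chunks and process the chunks in parallel — can be sketched in Python (all names are hypothetical; a real migration would be doing database work in each chunk):

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(ids, size):
    """Split a list of source IDs into consecutive chunks of at most `size`."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

def migrate_chunk(ids):
    """Stand-in for importing one chunk of nodes; a real migration would
    read from the source and create Drupal nodes here."""
    return [f"node-{i}" for i in ids]

source_ids = list(range(100))

# Run four workers in parallel. Migration work is mostly I/O bound
# (database reads and writes), so threads are enough for this sketch.
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = pool.map(migrate_chunk, chunk(source_ids, 25))
    results = [node for part in parts for node in part]

print(len(results))  # prints 100
```

The caution from the talk applies here too: each extra worker consumes server resources, and the import needs checking to make sure every source item actually arrived.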

This was a good introduction to the process of speeding up Migrate by giving it more resources. It also contained enough cautionary tales about keeping an eye on the resources the server has and ensuring that everything gets imported correctly. I think I will be using the plugin he wrote for some migration work later this year, as it will allow me to run a migration of thousands of nodes in a few minutes rather than several hours.

After the final session I made my way down to the Saturday night social, which was held at the Slaughtered Lamb.

Sunday

Sunday began with a keynote talk from Megan Sanicki, who is the Associate Director of the Drupal Association. She talked about her experiences running DrupalCon London and how it was realised that the world (not just the UK) needed more regional representation from the Drupal Association; as an example, the Association now has a UK bank account for local funding opportunities. Megan talked us through the history of the Drupal Association and how it has gone from just 2 people to a team of 30 in a few years.


In 2014 the Drupal Association has many initiatives, some new and some ongoing. The top two for 2014 are to drastically improve the drupal.org website and to cultivate a successful Drupal 8 launch.

The penultimate session of the weekend was from Oliver Davies, who was talking about git flow. Git flow was originally an idea by Vincent Driessen and is now a collection of tools that provide a branching model. The model is that every new feature gets its own ‘feature’ branch, and once complete this branch is merged into a development branch.

The branching model that git flow provides can be summarised in the following way.

  • master is where the production code sits and does not get committed to directly. Instead, updates are merged into this branch from the development branch (or a hotfix branch) and tagged.
  • develop is the development code; although it is possible to commit directly to this branch, it should be avoided. This branch should really contain stable code.
  • feature isn’t really a branch in its own right but multiple branches. Feature branches are used to develop new features, and once complete each one is merged into the development branch.
  • release is a temporary branch used to prepare and test a new release before it is merged into master and tagged.
  • hotfix is an emergency branch for fixing things in the master branch. Master is forked off into a hotfix branch, and once the hotfix is complete it is merged back into master.
  • support is used to support older versions of the master branch.

Git flow allows for the separation of production and development code, which means that you can be more flexible in your approach to developing an application. The process encourages collaboration and peer review and leads to better code quality.

Whilst sitting in this talk I came to the revelation that git flow is the perfect addition to working with git and that I should have been using it a while ago. I have had situations in the past where I needed to apply some new code to an existing website and had to do some branch juggling in order to get a fix onto a live site whilst active development was going on. Git flow gets around this issue (and many others) by having a (more) rigid structure of workflows that allows the branching model in git to actually work in an everyday development environment.

The final session was Konstantin Kudryashov with Delivering Value Through Software - The BDD Pipeline. This was a really interesting analysis of the whole process of software development from capturing client requirements to building applications that actually have value.

There were essentially two parts to this. The first was a way of capturing what it was that the system would do that would give the biggest return on investment, and then creating a set of features based on those requirements. The second was plugging the requirements created in that investigation process into Behat and using them to run behavioural tests.

The way that Konstantin pulled the application requirements and plugged them directly into Behat has really stuck in my mind. There is a bit of a disconnect between a multi-page document that describes a system in exquisite detail and testing that system. By reducing the documentation down to a set of simple features you simplify the documentation process and can also plug the features directly into behavioural tests. Konstantin said that his team has become so used to this process that they now rarely have to tweak the features to be understood by the behavioural testing system, so there is a streamlined path between features and development.
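As an illustration of what that reduction looks like (this feature is invented for the example, not taken from the talk), a Behat feature file reads like plain-English documentation:

```gherkin
Feature: Basket checkout
  In order to receive the products I have ordered
  As a customer
  I need to be able to pay for the items in my basket

  Scenario: Paying for a single item
    Given my basket contains 1 item
    When I pay for my basket
    Then I should receive an order confirmation
```

The same file serves as both the documentation and the executable specification: Behat maps each Given/When/Then line onto a step definition and runs it as a test.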

After the final session I got some lunch and went into one of the sessions that was still going on. I chose the Drupal Ladder session in order to see if there was anything I could learn that would help me finally contribute to Drupal core in some way. As it turns out, the only thing I am lacking is the confidence to take the plunge and submit a patch.

By the way, if you don’t already use Dredditor to navigate the drupal.org website then you should, as it provides a bunch of tools that make looking through issue queues a lot easier. Dredditor is a browser plugin that is available for Chrome and Firefox.

DrupalCamp London was a really good event and it was good to meet up with people I haven’t seen in a while and make some new friends. I have lots of notes to go over, which is usually a good sign that the talks were pretty good as well. It was interesting to me both in terms of absorbing Drupal knowledge and in terms of seeing how other people set up and organise a camp.

I just wanted to take the opportunity to thank the organisers and sponsors for putting on such a good event.

Some random take-home lessons from the weekend.

  • Start using git flow.
  • Reduce my (already low) need to actually log into a server.
  • Alex Burrows likes to play music really, really loud :)
  • Question how functionality delivers value to a project.
  • Use BDD.
  • Contributing to Drupal shouldn’t be scary.
