PHPNW12: A Review

The annual PHPNW conference gets better every year, and this year was no exception. I have been going to the PHPNW conference since its inception in 2008, and this year I was lucky enough to be involved in some of the pre-conference organising and to help out over the event itself.

When the call for papers ended in June I spent a weekend reading abstracts and speaker bios to try to whittle the 169 talk submissions down to around 32 sessions. I then sat down with Jeremy Coates, Rick Ogden and Jenny Wong to decide which talks would make the final selection. Once the sessions were settled I started working on a blog schedule and wrote a few blog posts to garner attention.

Tutorial Day

I was pleasantly surprised a few weeks before the conference when I was offered a free slot at one of the tutorial days, provided that there were spare seats available in the room. The tutorial day was held in the Britannia hotel, just across the road from the Mercure hotel where the main conference was to take place on the Saturday. I chose to attend Lorna Mitchell's day-long session on tools of the PHP trade. When I turned up on the day I ran a few errands and helped with the tickets, so I could only join the tutorial after the first break. What I had missed was people setting up GitHub accounts and installing joind.in locally so that they could analyse it using reusable PHP tools, but I quickly got up to speed.

joind.in, if you haven't heard of it, is an open source project that allows people to comment on and rate talks they have been to. It's a good way of telling speakers where they went right, or where they went wrong, so that they can improve their talks. I'll be using that site to enter my ratings for the talks I saw at PHPNW12.

The rest of the day consisted of using tools like phpDocumentor, PHP_CodeSniffer, XHProf and Phing on the joind.in source code to see what needed work. Although I had seen a few of these tools in the past, having them all presented in one place, with context about how to use them, was really useful for me. What impressed me was that many people in that session didn't have GitHub accounts and hadn't contributed to open source projects at all, yet by the end of the day they had contributed code back to the joind.in codebase, which Lorna merged right there and then.

Lorna is a capable and knowledgeable communicator and I definitely got a lot out of the day. It actually made me realise that I know far more than I thought I did, and that I only need to put that knowledge into practice. I suppose my take-home message from the day is that knowledge is one thing, but having the confidence to put it into practice is something else. I not only learnt a lot about PHP tools, but also about my own abilities.

After the day was over I helped move everything downstairs to the hackathon event, which I unfortunately couldn't attend. So I made my way home, eager to have a play with all of the tools I had learnt about during the day.

Saturday

My first duty for Saturday morning was making sure people had their lanyards as they walked into the conference. I was one of the people standing at the desk for surnames ending in N-Z. We got through everybody quite quickly and it was soon time for the main conference keynote.

Keynote: Developer Experience, API Design And Craft Skills
Ade Oshineye

Ade is a developer who has worked on multiple Google projects over the years and has experience of what makes a good API. Anyone who uses code that you have written is an API user, which makes most of us API designers. The problem with building an API is that it usually involves no research beforehand; people generally throw some stuff together and hope for the best. This is in direct contrast with crafting user experiences, which takes research, testing and proof that the design will work.

It is for this reason that Ade and some others set up the website developerexperience.org, to try to collate the best information about how to create an API.

Ade has worked for Google for many years, and during this time he has realised that once an API is built it is important to get people actually using it. When Gmail was created the developers introduced the concept of labels rather than folders. After several years of people asking for folders and being told about labels, the developers eventually realised that the users were correct, and they built folders into Gmail.

The same thing was seen when Google created the OAuth API. They found that they were answering the same six questions over and over again, and rather than document the things that needed to happen they decided to design these common problems out of the API.

APIs are powerful but can often be difficult to use due to their complexity. It turns out that German has concepts for this: zuhanden (ready to hand) and vorhanden (present at hand). With zuhanden you focus on the thing you want to do, not the tool you are doing it with. With vorhanden the tool becomes more important than the thing you are trying to do, which is often the case with APIs. You should think about what an API is like to use; making it easy to use makes developers more willing to use it.

An example of a real-life API that is bad to use is Git, as it requires knowledge of its internal structure in order to make it work. Mercurial, on the other hand, is a better API as it does what you would expect.

Ade talked about Richard Sennett and the 'skill of repair' (from the book The Craftsman), which asks the question 'what does it mean to be a skilled worker?'. In order to repair something you need to understand it first, and there are two ways in which things can be repaired. Static repair is the simplest way of repairing something so that it does what is expected. Dynamic repair is where you change the tool to fix the problem. For example, a screwdriver is perfectly usable as a hammer, but when the screwdriver breaks you simply replace it with a hammer.

The Python feedparser is a widely used RSS parsing library that is able to parse even badly written RSS feeds. Ade found himself in need of it and took over the development work on the project. When he announced that he was dropping support for older versions of Python, people spoke up and complained. As this was an open source project with limited resources, he offered those people the opportunity to work on legacy support themselves, as he didn't have the time to do it.

The take-home message from this talk was that we should go back to work on Monday and change a little of something to make it better. Think about the people using the interface and change it to make their lives better.

I enjoyed this talk quite a bit. Ade is an amusing character who is able to find humour in the dry world of API design. He has definitely done his homework on the principles of design and was able to talk about people like Richard Sennett and Dieter Rams and how he applied the principles they wrote about to developing API technologies. A thoroughly interesting talk with some good take-home messages.

PHP 5.4 Features You Will Actually Use
Lorna Mitchell

This was a trip through some of the features that are in PHP 5.4 that are actually useful enough to be used. Lorna stated at the start that this is an entirely subjective selection of the best bits of PHP 5.4.

New array creation syntax
The new array syntax essentially means that you can define arrays in much the same way as you would in JavaScript.

$game = ['scis', 'stone', 'paper'];
$game = [0 => 'scis', 1 => 'stone', 2 => 'paper'];

Array dereferencing
This is a way to access the contents of an array that has been returned by a function. This is a really nice trick, but not the height of best practice as it doesn't allow for the array to be empty.

function getlist() {
  return ['Grumpy', 'Sleepy'];
}
$item = getlist()[0];
echo $item; //Grumpy

Speed
The memory management and garbage collection have been improved in PHP 5.4 and as a result it is faster than previous versions. To test this there is a script called bench.php in the PHP source code that can be used to benchmark PHP builds. This benchmark runs a series of functions; it took 4 seconds to run on PHP 5.1 and just 2 seconds on 5.4.
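If you want to try this yourself, the benchmark script lives in the Zend directory of the PHP source tree and can be run directly (the timings you get will obviously depend on your hardware):

php Zend/bench.php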

Traits
A trait is a reusable group of methods that looks like a class, except the declaration starts with the keyword 'trait' instead of 'class'. When you want to use the trait in a class you include it with the 'use' keyword. A good use of this is to include logging functionality or similar in multiple classes without having to create multiple levels of useless hierarchy.

trait Audit {
  function somemethod(){
    return 0;
  }
}

class Otherclass {
  use Audit;
}

$object = new Otherclass();
$object->somemethod();

It is also possible to use traits in other traits, creating a trait hierarchy.
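As a quick illustration of that (the trait and class names here are just made up for the example), a trait can pull in another trait with the same 'use' keyword:

trait Logger {
  function log($message) {
    echo $message . "\n";
  }
}

trait Audit {
  // a trait can itself use another trait
  use Logger;

  function audit($action) {
    $this->log('Audited: ' . $action);
  }
}

class Account {
  use Audit;
}

$account = new Account();
$account->audit('login'); // Audited: login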

Built-in web server
PHP 5.4 now has a simple, lightweight HTTP server that should be used for development only. Lorna was very keen to drive this point home: the server should only be used for simple debugging purposes and never in production. The main problem is that requests are served sequentially, which means that any request that takes a few seconds to process has to complete before any new requests can be served.

Lorna said that this is the feature she didn't know she needed; initially skeptical, she is now unable to live without it. To get a server up and running you use -S (capital s) as a command line flag and state the address and port to be used. This will serve the index.php file from the directory that you ran the command from.

php -S localhost:8080

You can also set a domain name, as long as you add the address to your hosts file. The upshot of this approach is that other people can view the web server on your machine if they edit their hosts files in the same way.

php -S wibble.local:8080

You can also specify the doc root with the -t flag. This is used if you don't want to serve files from the directory you ran the script from.

php -S localhost:8080 -t /var/www/project

You can also specify which php.ini file should be used for the server by using the -c flag.

php -S localhost:8080 -c php.ini-development

A routing file can also be used, which changes the default behaviour of looking for an index.php file in the current directory.

php -S localhost:8080 routing.php

Session upload progress
File upload progress has previously only been possible using the PECL uploadprogress extension. PHP 5.4 has a built-in session upload progress feature which does exactly the same thing. This can be paired with the new HTML5 progress bars to create a file upload progress display. Note that this feature can't be demonstrated with the PHP 5.4 web server, as it works by uploading the file in one request and checking on the progress from another. As the PHP 5.4 web server can only handle one request at a time, you will not be able to test this functionality properly.
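As a rough sketch of how this works: the upload form posts a hidden field named after the session.upload_progress.name ini setting (the 'myupload' value below is just an example of what that field might contain), and the progress data then appears in the session under a prefixed key:

session_start();

// prefix comes from the session.upload_progress.prefix ini setting
$key = ini_get('session.upload_progress.prefix') . 'myupload';

if (isset($_SESSION[$key])) {
  $progress = $_SESSION[$key];
  // bytes_processed and content_length can drive an HTML5 progress bar
  echo round($progress['bytes_processed'] / $progress['content_length'] * 100) . '%';
}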

JsonSerializable
Classes can now implement the JsonSerializable interface and provide a jsonSerialize() method to render the class as a JSON object. The jsonSerialize() method only needs to return an array containing the data to be output as JSON. This is incredibly useful for anyone who has ever tried to convert a class to JSON, which is usually a time consuming process and normally needs third party code to get done easily.
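A minimal example (the Talk class is just something I made up to illustrate):

class Talk implements JsonSerializable {
  private $title;
  private $speaker;

  public function __construct($title, $speaker) {
    $this->title = $title;
    $this->speaker = $speaker;
  }

  public function jsonSerialize() {
    // json_encode() will use this array as the JSON representation
    return array('title' => $this->title, 'speaker' => $this->speaker);
  }
}

echo json_encode(new Talk('PHP 5.4 Features', 'Lorna Mitchell'));
// {"title":"PHP 5.4 Features","speaker":"Lorna Mitchell"}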

Lorna then went on to a quick look at some of the nonsense that has been taken out of PHP: things like register_globals, register_long_arrays, y2k_compliance and the ereg* functions. I don't have a problem with any of these, but I did see that magic quotes have been removed as well. This doesn't concern me, as I have made it a rule never to rely on magic quotes being present on a system, but I think some people might find it an irritation.

Overall this was a great introduction to some of the things I had heard were new in PHP 5.4 but hadn't had a chance to look at yet. PHP 5.4 will be available in the next major release of Ubuntu (due in a few weeks), so it will not be long until these features can be widely used. I will certainly be making use of the web server functionality to run quick tests of things without having to fiddle about with Apache config files.

React: Event Driven PHP
Igor Wiedler

React is a non-blocking I/O library that is the PHP equivalent of node.js, and Igor is one of the people involved in the project. Node.js ties together different network libraries into a coherent layer (which includes Windows support), with the Google V8 JavaScript engine providing an interface to act on this low-level layer. The use of JavaScript makes this connection easy to use, which has made node.js popular. Igor said he wondered why node.js is so popular, given that React does the same thing without relying on any external libraries. He also made some comments about JavaScript which got some attention on Twitter.

"Difference between PHP and JavaScript communities is that PHP guys *know* their language is shit."

Igor said that the PHP community is made up of pirates: when we see a good idea we copy it. This is something to be proud of.

React creates what is called an event loop in which streams are controlled and dealt with. It is a library, not a framework, so nothing is provided for you; everything has to be built. It is installed via the Composer package manager, which makes things very easy as React has many different components. Igor also works on Composer and said that it is the most advanced dependency manager in existence.

The philosophy of React is that it should be usable out of the box and need no components other than PHP itself. The idea was also to prove the world wrong: non-blocking I/O is perfectly possible in PHP. The event loop is the only part that blocks, so two loops can't exist together; React builds this single event loop and provides it as an event loop component for use in other systems. React consists of the layers HTTP, Socket, Streams and EventLoop.
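To give a flavour of what this looks like, here is a minimal HTTP server along the lines of the early React examples (treat this as a sketch; the exact class names may have changed in later releases):

require 'vendor/autoload.php';

// the event loop at the bottom of the stack
$loop = React\EventLoop\Factory::create();

// socket and HTTP layers sit on top of the loop
$socket = new React\Socket\Server($loop);
$http = new React\Http\Server($socket);

$http->on('request', function ($request, $response) {
  $response->writeHead(200, array('Content-Type' => 'text/plain'));
  $response->end("Hello world\n");
});

$socket->listen(1337);
$loop->run(); // blocks here, serving requests as events arrive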

A real test of any framework is the user-land implementations, and React has a few already. One example is React/WHOIS, an implementation of the WHOIS protocol on top of React. This has been taken further to create Wisdom, a domain name checker. The most notable example is Ratchet, a websocket server that was one of the first demonstrations of how to use React to make something useful. One thing Igor did mention was not to treat non-blocking systems as your primary data store; they should be used as a resource more than as a way of saving data.

Overall this session lost me a couple of times during the more complex parts. I wasn't too worried, though, as I was impressed by the ideas behind the technology and was making notes to look things up afterwards. Igor's message was that PHP can be glamorous like other languages, there is nothing we can't steal, and PHP can be made more than just another scripting language.

Symfony Components In The Wild
Jakub Zalas

Symfony Components are parts of the Symfony framework that allow you to solve certain tasks without having to write code from scratch. Writing less code means introducing fewer bugs, which is always a good thing to strive for. There are quite a few Symfony Components available, providing all sorts of functionality, and Jakub chose a few of his favourites to demonstrate.

The micro-framework Silex is a good example of a working framework that packages together Symfony components in a coherent structure.

HttpFoundation is an object-oriented layer over the HTTP request and response that wraps a lot of things commonly needed when fulfilling HTTP requests; even something as simple as checking for a secure connection is built into the component. Routing is an important part of deciding what happens during a page request, and the Routing component provides this functionality. EventDispatcher provides the ability to throw or listen for various events; you can listen to events in different ways and react to them as needed. The HttpKernel component ties the EventDispatcher, Routing and HttpFoundation components together.
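A small sketch of HttpFoundation in use (assuming it has been installed with Composer):

use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;

$request = Request::createFromGlobals();

// query string, POST data, cookies and headers all get OO accessors
$name = $request->query->get('name', 'world');

// the secure connection check mentioned above is a single method call
$scheme = $request->isSecure() ? 'https' : 'http';

$response = new Response('Hello ' . $name, 200, array('Content-Type' => 'text/plain'));
$response->send();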

Console is a way to create scripts that have no web frontend. It is essentially a way of taking input from the command line and providing output, and you define a command's arguments and behaviour with the configure() and execute() methods.
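Something like the following (GreetCommand is a made-up example) shows the shape of a Console command:

use Symfony\Component\Console\Application;
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputArgument;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class GreetCommand extends Command {
  protected function configure() {
    // define the command name, description and arguments
    $this->setName('greet')
         ->setDescription('Greet somebody')
         ->addArgument('name', InputArgument::REQUIRED, 'Who do you want to greet?');
  }

  protected function execute(InputInterface $input, OutputInterface $output) {
    $output->writeln('Hello ' . $input->getArgument('name'));
  }
}

$application = new Application();
$application->add(new GreetCommand());
$application->run();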

To get started with Symfony Components you need to use Composer. This is a dependency manager that can download the needed dependencies from a simple set of instructions. It is probably the most convenient way of getting the components you need for the application you are creating.
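For example, a composer.json like the following (the version constraints are just illustrative) will pull in the two components used above, after which running php composer.phar install does the rest:

{
    "require": {
        "symfony/http-foundation": "2.1.*",
        "symfony/console": "2.1.*"
    }
}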

Using Symfony is a good choice for multiple reasons. The source code is covered by unit tests and has been independently verified by a full security audit. The API is stable and flexible. There is wide adoption of both Symfony and its components in the industry, so any skills you learn when using it will be transferable to other systems. There is also a great community around it that is responsible for driving innovation and producing things like Twig and Composer, which are great solutions to real problems.

This talk was interesting and I learnt more about the components I will most likely be using in Drupal 8. Although nervous at first, Jakub settled into things and managed to deliver a good talk.

Effective code reviews
Sebastian Marek

Sebastian started by introducing a series of programming-related characters based on Angry Birds, called the angry nerds. With these characters he illustrated a few scenarios around the subject of code reviews.

The first piece of advice Sebastian gave was not to conduct reviews over email, as this causes delays and isn't traceable. It is essential that an issue tracking system is used, so you can see who raised a review and when, and so that it can be assigned to someone. This also allows a value to be put on how much time was spent doing the code review.

There are a few code review tools available: things like Crucible, FishEye, Gerrit and GitHub allow multiple developers to view and review the code. These tools are generally quite good, but before using them it is important to think about what is being reviewed and what the purpose of the code is.

When submitting code for review there are a few things to avoid; statements like "it works" or "it is syntactically correct" don't really help. The first thing to do (before the review itself) is to check the code using tools like lint and PHP_CodeSniffer. Unit testing the code is fine, but you need to review the tests as well as the code; there is no point in running tests that make no assertions. Other tools like PHP Depend and PHP Mess Detector can look at code quality and complexity.

These can all be combined in a tool called Sonar, a static analysis and code quality server. With something like this in place there is no need to plan in code reviews; you can just look at the information radiator, and then concentrate on code quality, code design and whether the solution actually fixes the problem.

Code reviews are good for knowledge sharing, getting new starters up to speed and fostering collective code ownership. Developers should understand and accept that they make mistakes, and that the code is not the developer. It's important not to rewrite code without consultation, as changing things might break areas of the code you aren't aware of.

When reviewers are looking at code it is important that true authority stems from knowledge, not from position. Managers shouldn't get involved in code reviews as they usually aren't as skilled as senior developers. Use experts to review the code, but always question the solution, even if it was created by a senior developer. Sebastian told a story about some code he wrote that worked but was pretty shoddy, and which was approved because the reviewer assumed that Sebastian knew what he was doing.

When talking through mistakes in code, don't use words like "you" as this apportions blame; use words like "it" so the discussion stays about the code.

This was a great introduction to code reviews and how to conduct them. Sebastian gave a quick demo of Sonar, which looks like a better solution than I had given it credit for. Its automated code analysis looks like a great way to keep track of code quality across multiple developers and multiple products.

Twig doesn't make Templating your Enemy!
Hugo Hamon

PHP on its own is not a good template engine. It doesn't separate logic and markup, there is no automatic escaping, and there is no isolation of the template from the core program. Also, in order to have components you have to litter your templates with includes, which is messy. Using a templating engine avoids all of these problems and allows easy prevention of cross-site scripting attacks. More importantly, it allows web designers to work on the design while developers work on the background code.

There are plenty of existing templating engines for PHP, and before building Twig an analysis was done to see if any of them could be adapted. They all solved things in different ways, but the major issues were that some didn't compile and some still had PHP embedded in the HTML markup.

To install Twig, use Composer. There is a PEAR package available, but Composer is the best way. A C extension (built via phpize) can also be installed to speed up certain aspects of Twig's functionality. To bootstrap Twig you just need to include the autoload.php file from Composer and create a Twig environment object.
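The bootstrap looks something like this (the paths and template name are just examples):

require_once __DIR__ . '/vendor/autoload.php';

// tell Twig where the templates live and where to cache compiled PHP
$loader = new Twig_Loader_Filesystem(__DIR__ . '/templates');
$twig = new Twig_Environment($loader, array(
  'cache' => __DIR__ . '/cache',
));

echo $twig->render('hello.html', array('name' => 'PHPNW'));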

Twig has a nice, concise templating syntax consisting of three tag types, which comment something, do something, or print something.

{# comment something #}
{% do something %}
{{ print something }}

When you run a Twig template it is compiled into PHP: the result is a PHP class with a hashed name that will be reused the next time the template is rendered.

Twig comes with debugging functionality, something many templating engines don't have; this can be enabled with a debug option, and other methods are available too. It also has sensible error messages that try to guess what the problem might be. There is also a strict variables setting that throws an exception if a variable doesn't exist.

Twig is capable of automatically escaping output, which it does through PHP's htmlspecialchars() function. Other escaping strategies are available, and these can be forced by using a pipe character followed by an escaping mechanism. To stop any escaping being done, use 'raw'.

{{ name|raw }}

To force escaping use 'e' or 'escape'.

{{ name|e }} {{ name|escape }}

You can also escape using a specific strategy such as 'css', which escapes the output for use in a stylesheet context.

{{ name|e('css') }}

You can also escape a larger chunk of output in one go by using an autoescape block.

{% autoescape %}{% endautoescape %}

Twig has good strategies for printing variables. For example, the following code will print out the title of an article.

{{ article.title }}

The title can be the key of an array, a property of an object, or even a method. As this lookup can be quite an expensive operation, the C extension provides the same functionality in a much faster manner.

Filters allow formatting of content and are used in the same way as the escape functions. Here are some examples of formatting text items.

{{ post.published|date('d/m/y') }}
{{ post.title|lower }}
{{ post.title|upper }}

Filters can be piped in order to chain the output from one filter into another.

{{ post.tags|sort|join(', ') }}

Twig has the ability to trim whitespace using a simple syntax. To trim whitespace to the left of the output use the following:

{{- name }}

Conversely, to trim whitespace to the right of the output use the following:

{{ name -}}

Twig's internals can be used independently of the engine itself: it is possible to compile and tokenise templates without rendering them. It is also possible to reconfigure Twig in all sorts of ways, even configuring the Twig_Lexer to use different templating tags (e.g. {{ }}). Functionality can be added to Twig with extensions, a few of which are included with Twig to provide some default options. It is quite easy to roll your own, and Hugo showed a few examples of how this is done.
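As a taste of how simple an extension point can be, here is a sketch of registering a custom filter (this assumes the Twig_SimpleFilter API from the Twig 1.x line; the 'shout' filter is invented for the example):

$twig->addFilter(new Twig_SimpleFilter('shout', function ($text) {
  // uppercase the value and add some enthusiasm
  return strtoupper($text) . '!';
}));

It can then be used in a template just like the built-in filters, e.g. {{ post.title|shout }}.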

Twig enjoys wide adoption, with extensions built for symfony 1 and Zend Framework. It is built into Symfony2 already and requires no extra plugins. An extension for Drupal 7 is available, and Twig will be the default template engine in Drupal 8. Twig targets a PHP 5.2 codebase to make it widely compatible.

Hugo stated at the start that this was a talk for both web developers and designers, and I can agree with that. It had enough technical elements to get developers excited and enough HTML syntax to let designers see how they would use Twig in their day jobs. With full Twig integration in Drupal just around the corner I found this talk a great introduction to the templating engine.

Closing Session

All that was left was to gather everyone together, give out a few prizes and thank all of the sponsors, speakers and helpers for their contributions. As UKFast were the platinum sponsors they were given a slot to show people what they were about and to give out a few prizes that they had organised themselves. I was surprised when I was called to the front to receive a pair of PHPNW12 cufflinks for helping out with things prior to the conference.

This signalled the end of the sessions for the Saturday, so I took the opportunity to check into my hotel and drop off my things before coming back to the conference for dinner. I was staying at the Britannia hotel, where the tutorial day had been held the day before. Although the Britannia's conference suite was good, the room I had was pretty grotty; not actually dirty, but not very well maintained. I wouldn't recommend staying there.

After dinner we had the PHPNW conference party, where there were more than enough free drinks for everyone. I spent a few hours with other people from the conference talking, drinking and playing on the Wii. A really fun end to a great day, and there was another day still to come.

Sunday

The start of the Sunday was quite quiet, probably because of the late hour at which many people went to bed the night before. After a quick introduction we started off with the first session.

Scale and Adapt with PHP and Responsive Design: A story of how we're building BBC News
John Cleveley

John's first message was that creating a fully responsive site design is hard work. A lot of time must be spent making sure things work on all devices and at all resolutions.

In the past the BBC News website had two separate streams of development: the www version and a mobile version. This was because things were simple; you either wanted the desktop version or the mobile version. There has since been an explosion in the number of devices with different screen sizes, resolutions and bandwidths. This means that "the mobile web" doesn't really mean anything any more; it is more about a coherent experience across different platforms.

The trouble is that the BBC News website has to serve its content to people with desktop machines and broadband connections, but also to people on old mobile phones with very little connectivity at all. There are currently 953 million smartphones sold compared with 6.1 billion normal mobile phones, so with an international audience you have to take this difference into account. Many sites do not: one example is Obama's campaign site, which is a 2.5 MB download (with all of the images and files attached) and has no mobile version.

Speed is the key to a good user experience, and the aim for the BBC News site was to get the page to load within 10 seconds over GPRS, which equates to about 60-100 KB of data. Responsive design has a lot of dirty little secrets that push the page size up, things like keeping the same image sizes and heavy JavaScript libraries; a Facebook Like button alone will add 100 KB to the size of a page.

In order to build the new design for the BBC News site they used a mixture of browser-side and server-side detection. With this information in hand you can serve the right content to the right platform. The important part is thinking about what the actual content is, serving that to the mobile versions, and scaling up for devices with more bandwidth and bigger screens. There are lots of people at the BBC trying to get content onto the page but, in essence, the news story is what should be there.

What the news site does is split the content and features into two tiers: the good, for desktops and smartphones, and the bad, for old mobiles. Essentially, the bad tier gets the core experience, which is the content.

The developers at the BBC created a JavaScript check to test whether the device could cope. This test was called "cut the mustard" and is just a simple check for certain features. If they work then the device is deemed adequate and the full site functionality is run. Many mobile devices have no JavaScript support at all, or are very slow at rendering elements, and so can't be relied upon to do much, if anything.

CSS gets a bit mental when things go responsive; there end up being multiple media queries trying to style the same element in different ways. To keep things manageable they used Sass, which meant that a lot of the CSS was auto-generated.

Once the content of the page is in place, a lot of different things are lazy-loaded behind the scenes. They know which parts to load because they carefully watch what users are doing to see which elements are most commonly clicked. This means they can pre-load those onto the page to save time when they are clicked.

As the BBC is a publicly funded body on a limited budget, they needed to use the servers they had to the best of their ability; the BBC News site has lots of users and little money to do expensive things. Essentially, everything is cached using Varnish, even things like JSON callbacks. The Varnish servers are also load balanced to spread the load of serving content. Varnish is context aware: it knows where you are, what device you have and any cookies you need, which makes it an ideal tool for this site. It's important to keep an eye on the hit/miss ratio so that the Varnish cache is serving more requests than it misses.

When things get crazy (like 80 million users hitting the site) a CDN is used to protect things. In this case the Varnish context is lost, but it means that the site still gets served.

Testing all of this is an almost impossible task and involves a mixture of manual testing on 20 devices, automated testing with Selenium and Cucumber, sandbox testing and remote debugging. A tool called Weinre is also used to inspect pages on mobile devices. Some devices, like the BlackBerry (the 5 series specifically) and Symbian, are really difficult to get working consistently. Android is quite difficult to test for as it is fragmented across multiple versions. iOS is generally fine, but it is starting to fragment too, with older devices still being used alongside newer ones.

John's talk was informative in terms of creating mobile-friendly designs as well as the technology behind it that tests everything. He and his team have obviously spent a lot of time working on this problem, as he told stories of a few ideas they had tried that didn't quite work. I came away from this talk with lots of ideas and suggestions about creating mobile sites.

To SQL or No(t)SQL
Jeroen van Dijk

"ABC: Always Be Caching"

Relational database management systems are battle tested and standard. They have good vertical or horizontal scalability, but are prone to problems with schema changes. Brewer's CAP theorem states that systems can have Consistency, Availability and Partition tolerance, and that databases (even NoSQL ones) are only allowed two of these properties at a time.

During his research Jeroen found four basic types of NoSQL database: key-value, column, graph and document.

Key-value databases like Redis or Couchbase are essentially schema-less designs that store strings of data against key references. They can be hard to query, but as the data lives mostly in memory they can be very quick. They act much like memcache and can be a good alternative to a memcache system.

Column databases like Cassandra, Riak, Voldemort and HBase use a Bigtable or Dynamo style format. They rely on consistent hashing and use a ring-based partitioning system for storing the data.

Graph databases like Sones or Neo4j work on a model where relations are more important than entities. This works well for social media style datasets where users are linked together in some way. Neo4j has a custom query language, which makes it a little harder to get started with.

Document databases like CouchDB or MongoDB bear the greatest resemblance to a traditional RDBMS and even have a familiar query language. MongoDB is perhaps the MySQL of the NoSQL world in terms of popularity. It also has the ability to use geographical indexes, which is used by Foursquare.

The bottom line was that these systems are not replacements for traditional databases, just an addition to the ecosystem of data management. They have their uses, but there is no need to drop your traditional SQL systems in favour of them just because you think they might be a good idea.

Jeroen admitted that he hasn't actually used any of them in production yet, but he has done the research to find out more about them and hopes to use them on a project soon. Overall the talk was informative and well structured. He even went into a good deal of detail about how data is stored in these systems and how to get data out of them, which is ideal for people wanting to know more about how they work. A good resource for finding out more about NoSQL databases is the website nosql-database.org.

Fork it! Parallel Processing in PHP
Nathaniel McHugh

Nathaniel started by telling the story of how he tried to prepare pasta and found that the limiting factor was waiting for the water to boil in a kettle. He talked about how he split the water across several kettles, thus saving time, introducing the concept of parallel processing to speed things up. Someone in the front row pointed out that he could save even more time by putting the pasta into the kettle rather than transferring it to a pot afterwards.

"Put the pasta in the kettle, I hadn't thought of that..."

One thing that PHP doesn't report is how many processors are available on a system, so Nathaniel wrote an extension of his own. This was partly an experiment in how to create extensions, but it also produced something useful. The extension adds the functions num_processors_available() and num_processors_configured(), which can apparently differ.

PHP has no multi-threading support, with the exception of a single PECL extension. The PCNTL functions can be used to fork a PHP process, but they are for Unix systems and CLI use only. Web servers have a different model for multi-processing, so the PCNTL functions wouldn't achieve much there as the threading is handled behind the scenes.

Forked processes have parent processes and can fork children of their own, but this can sometimes be a bad idea as it can make PHP really hard to kill. In that case you might want to create a daemon to run the process and detach it from the terminal; there is a good example of this on the php.net documentation page for pcntl_fork(). Child processes whose parents have been killed are called orphans. Zombies are processes that are dead but have not been cleaned up by the host operating system.

"Zombies are children who you forked and then died. Daemons prefer orphans though"

When forking processes, resource clashing can happen quite easily, especially with multiple children and parents. Things like database connections and files can be opened by children, which makes the parent unable to get hold of them. The solution is to let the child processes do the work and then pass the data back to the parent, which takes care of everything else. Things like temporary files, memcache or sockets can be used to transmit the data from children to parents; the parent's job is then to combine the results of the children.
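A minimal CLI-only sketch of that pattern, using temporary files as the child-to-parent channel:

$files = array();

for ($i = 0; $i < 2; $i++) {
  $files[$i] = tempnam(sys_get_temp_dir(), 'work');
  $pid = pcntl_fork();

  if ($pid === -1) {
    die("Could not fork\n");
  }

  if ($pid === 0) {
    // child: do the work and exit without touching the parent's resources
    file_put_contents($files[$i], "result from child $i\n");
    exit(0);
  }
}

// parent: reap all the children (avoiding zombies), then merge the results
while (pcntl_waitpid(0, $status) !== -1);

foreach ($files as $file) {
  echo file_get_contents($file);
  unlink($file);
}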

In order to scale things properly in PHP you need to use something like Gearman, which can send jobs to the right process, possibly running across multiple machines. These solutions can be used without any command line programs.

There is no automatic way of taking an algorithm and making it parallel; this is a really difficult problem. There is no simple answer to the question of how to distribute work effectively, you need to keep the data that must be merged together at the end small, and there is no easy way to organise dependencies.

Nathaniel decided to try making some things parallel himself. The first thing he tried was to take phploc (the PHP lines of code tool) and create a parallel version. This sped the tool up from 7 seconds to 5 seconds when processing a PHP project, although Nathaniel said it was extremely experimental and he wouldn't recommend using it. He also tried his hand at parallelising Mandelbrot set generation, an example of a problem that is embarrassingly easy to make parallel. I realised at this point that I recognised Nathaniel from his GitHub profile, where he has a repository that explores fractals in PHP. It's a shame he ran out of time, as I would have liked to hear more about parallel processing of fractals.

This session was generally good, but Nathaniel lost me in some of the more complex sections. I think I will have to look over the PCNTL extension and figure things out for myself, but at least I know where to start looking. Overall he was realistic about the benefits of using parallel processing to speed things up, stating that most of the time you should just use a queue-based system like Gearman to handle the processing of separate tasks, as this sidesteps many of the issues that are inherent in making PHP parallel.

Keynote - Community Works for Business Too!
Michelangelo van Dam

The final session of the conference was about how businesses can get involved in open source technologies. It was similar to a talk Michelangelo gave at PHPNW10 about getting involved in the open source community, but with a twist towards business involvement. This was an inspirational talk which pointed out that companies who use open source software should donate something back to the project. There are a number of ways to give back to open source projects, and money is only one of them. Many projects need developer time to help with bug fixing or new features, or even just server space to help host things.

Michelangelo is good at getting the message of community involvement across, and he offered many ideas that companies could use to get involved. He is also good at conveying the warm fuzzy feeling that being involved in a community like this can bring.

After the final session we had a few closing remarks and thanks from Jeremy Coates and Rick Ogden before we all went over to the Britannia hotel for a roast lunch in celebration of another successful conference. PHPNW12 was a great event, and perhaps the best PHPNW conference yet. I just want to thank everyone involved for making the event such a success; I feel proud to be a part of it. My only regret is that I wasn't able to go to any of the unconference talks this year (again), but from what I hear they were all pretty good too.

One thing I noticed was that Composer was mentioned a lot during this year's PHPNW, and I can't remember it being mentioned at all last year. This might be a byproduct of the projects that were being talked about, but I think Composer is definitely something to get involved with. If you haven't heard of Composer then take a look.

Finally, thanks to @akrabat and @stuartherbert for taking some of the awesome pictures seen here. Go along to their Flickr profiles to see more.
