Our Tokyo week just ended and I really enjoyed being at the Summit with the Puppet OpenStack folks.
This blog post summarizes what we did this week and what we plan for the next release.
Here is the etherpad (that contains the full agenda and other pads) about our Tokyo Design summit.
How to get faster +2 reviews
One piece of feedback we got from our group is that some patches sometimes take a long time to get reviews. In my experience, that happens when a project grows without scaling up the way the team works together.
Our community is growing every cycle. New contributors keep coming in, and it is very healthy for the project to make sure our core-reviewer team is regularly updated.
In Mitaka, we will create new Gerrit groups that will have core-reviewer permissions on some modules.
Here is the plan:
- Keep puppet-manager-core group that contains people who can +2 any Puppet OpenStack repository.
- Create puppet-<project>-core (example: puppet-keystone-core) groups that will allow some specific people to +2 some modules.
That way, we will hopefully get better and faster reviews on those modules, from people who have skills in both Puppet and the project itself.
How to get quickly involved in our group
Another piece of great feedback came from newcomers, who sometimes have a hard time getting started with the Puppet OpenStack modules to set up their first cloud.
Some other projects like Chef and Ansible already provide some tools to run an All-In-One setup.
The plan will be:
- Create a new repository that will contain the tools to deploy a basic OpenStack Cloud All-In-One with Puppet modules, using Vagrant.
- Consume puppet-openstack-integration manifests so we don’t have to maintain multiple manifests for different purposes.
- Provide better documentation, about deployments but also about how to get involved in the project and what the expectations are to become a core reviewer.
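To make the idea concrete, here is a minimal Vagrantfile sketch of what such an all-in-one repository could provide; the base box, memory size, and manifest path are assumptions for illustration only:

```ruby
# Hypothetical sketch: boot a single VM and apply a manifest that
# re-uses the puppet-openstack-integration scenarios.
Vagrant.configure('2') do |config|
  config.vm.box = 'centos/7'          # assumed base box
  config.vm.provider :virtualbox do |vb|
    vb.memory = 8192                  # an all-in-one cloud needs RAM
  end
  config.vm.provision :shell, inline: <<-SHELL
    # assumed entry point consuming the integration manifests
    puppet apply /vagrant/manifests/all-in-one.pp
  SHELL
end
```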
We really love having new contributors, so we hope these actions will help scale up our team and the project.
API healthcheck for better orchestration
Sometimes, Puppet requires an API service to be started before creating a resource.
The problem is that Puppet cannot tell whether the service is actually up and running (Puppet only checks the systemd or upstart service status). A good example in OpenStack is API services that take ~4s to start (depending on the configuration), which can lead to race conditions during deployments where Puppet tries to create resources while the service is not ready yet.
A solution is going to be investigated: using puppet-healthcheck. One of the first use cases will be a resource from this module that tests whether the service is actually running (by checking the API connection or the port binding) before managing dependent resources. Hopefully this solution will help us improve deployments.
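As a rough sketch of the idea, puppet-healthcheck ships connection validators that block until a service answers; the resource names, port, and ordering below are assumptions for illustration:

```puppet
# Hypothetical example: wait for the Keystone API to answer on its
# public port before Puppet manages any Keystone resources.
http_conn_validator { 'keystone-api':
  host    => 'localhost',
  port    => '5000',
  use_ssl => false,
  timeout => 30,   # roughly covers the ~4s startup plus margin
}

# Ordering: start the service, validate the API, then create resources.
Service['keystone']
  -> Http_conn_validator['keystone-api']
  -> Keystone_tenant['demo']
```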
What’s next in CI
- We are going to revive the multi-node job and start a controller/compute deployment to test a new scenario. We are interested in testing nova-compute on a separate node, and also Neutron DVR. Another interesting thing would be to start testing cross-node healthcheck (see the previous topic).
- OpenStack packaging (RPM & DEB) are new projects in OpenStack. We are currently working together and thinking about gating the packaging jobs with our Puppet OpenStack integration testing, so the Puppet CI would be able to run trunk packages without the packaging-related failures we hit previously (at the end of the cycle).
- More beaker tests for the new modules and more integration with Tempest for functional testing.
- Third-party CI with Fuel (we already run TripleO jobs).
Reshaping of our current init.pp
For those who are familiar with our Puppet modules, most of the common configuration goes into each module's init.pp, which is starting to become hard to maintain.
We are going to continue the work of splitting the configuration according to Oslo projects (we already did database and logging; we are going to continue with messaging). It will give us more readable manifests and better consistency across our modules.
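A hypothetical sketch of what a shared messaging interface could look like (the define name, its title, and the parameter values are assumptions, not the final design):

```puppet
# Instead of every module repeating rabbit options in its init.pp,
# a shared define writes the oslo.messaging section into the right
# configuration file, here identified by its *_config resource type.
oslo::messaging::rabbit { 'keystone_config':
  rabbit_host     => 'rabbit.example.com',  # assumed values
  rabbit_userid   => 'openstack',
  rabbit_password => 'secret',
}
```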
Add ability to handle deprecated config options in *_config
Some people run our master branch against a recent stable release of OpenStack, which can lead to configuration issues because some parameters have been moved or deprecated.
This work will be about supporting both the deprecated and the new way of configuring a parameter in a project. It will help people update their modules during an OpenStack upgrade without breaking services that run on a stable release. A blueprint will come up soon.
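As an illustration of the kind of behavior this could enable (the option names here are made up for the example and do not reflect real Keystone settings):

```puppet
# Hypothetical sketch: set the option at its new location and remove
# the deprecated one, so the same manifest works before and after
# the rename.
keystone_config {
  'newsection/option':      value  => 'foo';    # new location (assumed)
  'DEFAULT/legacy_option':  ensure => absent;   # deprecated location
}
```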
Ruby library vs openstackclient
Some discussion recently came up (again) on our mailing list about replacing python-openstackclient with a Ruby library, to improve performance and work around openstackclient bugs or missing features.
Our group agreed this is a bad idea for these reasons:
- We already tried to use Aviator but we decided to switch to OpenStackClient.
- In Kilo and Liberty, we made huge progress in the puppet-keystone module to support a lot of interesting features, ahead of other deployment tools, thanks to OpenStackClient. We would have to rewrite everything on top of a new library… I’m not sure we really want that.
- As of today, there is no stable, tested, and documented Ruby library that we can consume out of the box in our modules. We would have to create it, maintain it, and build a community around it. That effort is not something we will take on in Mitaka: we don’t have the resources for it, and we would lose some maturity and stability in our modules.
- Using OpenStackClient is good for OpenStack: it provides direct feedback to the OpenStackClient group (bugs, feature requests) and brings consistency to the project.
- We will try to find ways to improve performance in how we use OpenStackClient in our providers, but we will need more insights from operators.
I would like to take this opportunity once again to thank all our contributors and the OpenStack community for this awesome summit.
Hopefully see you all in Austin!