Thanks to Mike Svoboda at LinkedIn and a league of experienced CFEngine users, we are happy to announce the “CFEngine Office Hour”. Meet with CFEngine folks and bring your questions! Here is what to expect: “Instead of lecturing about how we’ve used CFEngine, the focus of this office hour is dedicated to helping you!” “Have you ever had a question that you wanted to ask, but didn’t want to blast it out on the mailing list because it’s too public? Would you like someone to take a look at one of your policies and maybe suggest improvements? Have a question about how to approach an automation problem?” “The idea behind the office hour is that we want to help other folks in the community bootstrap their environment.” “Getting over that initial learning curve can be quite a challenge. Having a video conference with a person whom you can ask questions of, and interact with directly, can make this process a lot easier.” “Even if you’ve been using CFEngine for a few years, feel free to drop in. Maybe you can learn a thing or two by looking at policy examples.” If you haven’t joined the #cfengine channel, we’re on libera.chat. Feel free to drop by and ask questions there as well; there are typically a few of us around. We will post the times of Open Office Hours on our Events page. We hope to see you!
In this blog post I would like to show how one of the best configuration management solutions integrates with an equally well-known ticketing system: JIRA. When a specific policy becomes non-compliant, there is a common need to integrate this with a ticketing system. For example, you have an important web application configured and kept running using CFEngine. If any aspect of that fails, you want to be notified immediately. But since you already get enough email, and you already use a ticketing system for all other tasks, you want to open an issue in the JIRA issue tracking system on such an event. CFEngine 3.6.2 introduces Custom actions as a notification method for alerts, which enables virtually any notification method for any event happening in your infrastructure. In our new How To, we show how to integrate CFEngine with JIRA using Custom actions. Let CFEngine open a ticket for you whenever something important happens in your infrastructure, and spend your time planning instead of monitoring!
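A Custom action is, in essence, a script the hub runs when an alert fires. The sketch below is hypothetical: the alert-detail file format, the JIRA URL, project key, and credentials are all illustrative assumptions; the `/rest/api/2/issue` endpoint is JIRA's standard create-issue REST API. Consult the Custom actions How To for the exact interface your hub provides.

```shell
#!/bin/sh
# Hypothetical custom action sketch. A real script would read alert
# details from the file the hub passes in as $1; here we hard-code a
# summary for illustration.
JIRA_URL="https://jira.example.com"   # assumption: your JIRA instance
PROJECT_KEY="OPS"                     # assumption: target project key

# Build the JSON body for JIRA's standard create-issue REST endpoint.
build_payload() {
    printf '{"fields":{"project":{"key":"%s"},"summary":"%s","issuetype":{"name":"Task"}}}' \
        "$PROJECT_KEY" "$1"
}

payload=$(build_payload "CFEngine alert: web_service_down")
echo "$payload"

# To actually open the ticket (needs network access and credentials):
#   curl -s -u automation:secret -H "Content-Type: application/json" \
#        -d "$payload" "$JIRA_URL/rest/api/2/issue"
```

Keeping the payload construction in a small function makes it easy to adapt the same script to other trackers that accept JSON over HTTP.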
Dear CFEngine Community, we are proud to announce our new mailing list: dev-cfengine. Given that contributions to both the Core and Masterfiles repositories have been steadily increasing, the need for such a list became apparent. While patch submissions and code reviews will still take place using GitHub’s pull requests, this list serves the purpose of facilitating any other discussion on the development of CFEngine. We look forward to seeing the community active on the list. In addition, we, the CFEngine developers, plan to hold all our discussions that do not touch on CFEngine Enterprise there. Regards, CFEngine AS
This post clarifies whether CFEngine is affected by the newly published vulnerability in the SSL protocol, POODLE. CFEngine core functionality, i.e. agent-to-hub communication, is not affected in any way by the POODLE vulnerability. If the protocol version is set to “classic” or “1”, or is simply left at the default, then all communication happens over the legacy protocol, which has nothing to do with SSL. If it is set to “latest” or “2”, then TLS version 1.0 is used, which does *not* suffer from the specific flaw in SSL v3.0 that enables POODLE. So the vulnerability is not applicable in either case. CFEngine Enterprise provides the Mission Portal web interface, served via the Apache web server on port 443. Unfortunately, the default package installation uses default Apache settings, and httpd currently accepts connections using SSL v3.0. To remedy the problem, the following line should be edited in
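Until the packaged defaults change, the standard mod_ssl remedy is to exclude SSLv3 (and SSLv2) from the accepted protocols. A minimal sketch of the directive; the file it lives in varies by installation, so locate your hub's Apache SSL configuration first:

```apache
# Accept all protocols Apache supports except the broken SSLv2/SSLv3
SSLProtocol all -SSLv2 -SSLv3
```

After editing, restart the bundled httpd so the new protocol list takes effect.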
CFEngine recently released version 3.6, which makes deploying and using CFEngine easier than ever before. The greatest improvement in 3.6, in my opinion, is the autorun feature. I’m going to demonstrate how to set up a policy server with autorun properly configured.
Installing CFEngine 3.6.2 The first step is to install the CFEngine package, which I’m not going to cover. But I will say that I recommend using an existing repository. Instructions on how to set this up are here. Or you can get binary packages here. If you’re not using Linux (like myself) you can get binary packages from cfengineers.net. If you’re inclined to build from source, I expect you don’t need my help with that. Having installed the CFEngine package, the first thing to do is to generate keys. The keys may have already been generated for you, but running the command again won’t harm anything.
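Once keys exist and autorun is enabled (in 3.6 this means defining the `services_autorun` class, for example via `def.json`), any bundle carrying the “autorun” meta tag under `masterfiles/services/autorun/` is discovered and run automatically, with no hand-editing of the bundlesequence. A minimal sketch; the bundle name and report text are just illustrations:

```cf3
# Hypothetical file: masterfiles/services/autorun/hello.cf
bundle agent hello_world
{
  meta:
      # The "autorun" tag is what makes this bundle eligible for
      # automatic discovery when services_autorun is defined.
      "tags" slist => { "autorun" };

  reports:
      "Hello from autorun on $(sys.fqhost)";
}
```

Drop the file into the autorun directory on the policy server and the agents pick it up on their next run.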
CFEngine 3.6.2 is now available, in both Community and Enterprise editions! There are major new features in the Enterprise hub: High Availability and Custom actions. In addition, we have resolved numerous issues to provide you with a very stable release. It has been about 8 weeks since the 3.6.1 release, and we plan to continue on a 6-8 week schedule for maintenance releases going forward.
High availability for the hub A common requirement for most enterprises is that key processes and mission critical applications are highly available - in essence to ensure there is no single point of failure. Although CFEngine is a distributed system, with decisions made by autonomous agents running on each node, the hub can be viewed as a single point of failure. Essentially, the hub has two responsibilities:
Given the slew of recent security issues in the news, such as supermarket point-of-sale compromises (not once but twice), other large retailer card breaches, and the famed Heartbleed vulnerability, we want to share an example of how CFEngine can be used to quickly identify and remediate affected systems. In our documentation, please find the “Reporting and Remediation of Security Vulnerabilities” tutorial. The tutorial walks through policy to both identify and remediate the recent #shellshock exploit. For those using CFEngine Enterprise, guidance on creating dashboard alerts and inventory reports is included.
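As a quick companion to the tutorial, here is a local probe for the bash flaw itself, based on the widely circulated CVE-2014-6271 check: a vulnerable bash executes the command smuggled into an exported function definition, while a patched bash ignores it. The exact report wording below is our own.

```shell
# Quick local check for the CVE-2014-6271 "shellshock" bash bug.
# A vulnerable bash runs the trailing `echo vulnerable` from the
# exported function definition; a patched bash does not (and may warn
# on stderr, which we discard).
out=$(env x='() { :;}; echo vulnerable' bash -c "echo completed" 2>/dev/null)

case "$out" in
  *vulnerable*) echo "bash is VULNERABLE to CVE-2014-6271" ;;
  *)            echo "bash appears patched against CVE-2014-6271" ;;
esac
```

The same one-liner can be wrapped in a CFEngine `commands` promise to define a class on vulnerable hosts and drive remediation, as the tutorial shows.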
In this installment we turn to Danilo Fernando Chilene, who recently wrote about **monitoring CFEngine with Zabbix**. The original blog can be found at
https://bicofino.io/post/monitoring-cfengine-with-zabbix/. In this piece, learn how Zabbix can be leveraged to monitor processes, memory use and the promise summary log in the context of CFEngine. If you have other such stories of CFEngine use, we would love to hear from you. Thanks, Danilo, for a great post! Monitoring CFEngine With Zabbix I created a template to monitor CFEngine with Zabbix. This allows the monitoring of processes, memory use and the promise summary log.
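To give a flavour of the promise-summary side, here is a minimal sketch of a probe you could wire up as a Zabbix item (for example via a UserParameter): it pulls the first percentage figure from the last line of CFEngine's promise summary log. The log-line layout assumed here is illustrative; inspect your own `/var/cfengine/promise_summary.log` and adjust the pattern to match.

```shell
# Minimal sketch: report the most recent "promises kept" percentage
# from the promise summary log. Assumes the percentage appears as the
# first "NN.NN%" token on the last line -- verify against your log.
LOG="${1:-/var/cfengine/promise_summary.log}"

last_kept() {
    # Take the last line and extract the first number ending in '%'.
    tail -n 1 "$1" | grep -o '[0-9]*\.[0-9]*%' | head -n 1 | tr -d '%'
}

if [ -f "$LOG" ]; then
    last_kept "$LOG"
fi
```

Zabbix can then graph and alert on that number just like any other numeric item, which is exactly the kind of trend Danilo's template exposes.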
We recently announced the general availability of CFEngine Enterprise 3.6.1. One of the key capabilities added to this maintenance release is a supported upgrade process. In today’s post, I’ll walk you through an outline of the upgrade procedure, which will hopefully provide a good starting point for you to map out the entire process for your CFEngine deployment. Note that the examples here assume starting the upgrade from a CFEngine 3.5.x install, but the same steps are applicable to version 3.0 as well. Don’t forget to refer to our online documentation for the complete set of steps.
Or what we should mean by Distributed Orchestration

Orchestrating complicated distributed processes is an unfamiliar aspect of computing that leads to all kinds of confusion. We are not taught how to do it in college, so we end up trying to apply whatever methods we are taught, often in inappropriate ways. Promise theory paints a very simple picture of distributed orchestration. Rather than imagining that a central conductor (controller) somehow plays every instrument by remote magic wand, in an algorithmic fashion, promise theory says: let every player in an ensemble know their part, and leave them all to get on with it. The result is an emergent phenomenon. The natural dependencies on one another will make them all play together.

Over-thinking the storyline in a distributed process is the easiest way to get into a pickle. This is a key point: how we tell the story of a process and how it gets executed are two different things. Modern programming languages sometimes pretend they are the same, and sometimes separate the two entirely.

Scroll back in time to 1992, and the world was having all the same problems as today, in different wrapping. Then there was cron, pumping out scripts on hourly, daily and weekly schedules. This was used not only for running jobs on the system but for basic configuration health checks. Cron scripts were like a wild horde of cats, each with their own lives, hard to bring into some sense of order. In the early 90s the acme of orchestration was to sort out all your cron jobs to do the right thing at the right time. The scripts had to be different on the machines too, because the flavours of Unix were quite different, and thus there was distributed complexity. Before CFEngine, people would devise devious ways of creating one cronfile for each host and then pushing them out. This was considered to be orchestration in 1992. One of the first use cases for CFEngine was to replace all of this with a single uniform model-oriented language/interface.
CFEngine was target-oriented, because it had to be repeatable: convergence. In this article I explain why virtual environments and containers are basically this issue all over again.

Another tool of this epoch is make, for building software from dependencies. In 1994, Richard Stallman pointed out to me that CFEngine was very like make. Indeed, this ended up influencing the syntax of the language. The Makefile was different: it was the opposite of a script. Instead of starting in a known state and pushing out a sequence of transitions from there, it focused on the end state and asked: how can I get to that desired end state? In math parlance, it was a change of boundary condition. This was an incredibly important idea, because it meant that, no matter what kind of a mess you were in, you would end up with the right outcome. This is far more important than knowing where you started from. Makefiles did not offer much in the way of abstraction; you could substitute variables and make simple patterns, but this was sufficient for most tasks, because patterns are one of the most important mechanisms for dealing with complexity. On the other hand, make was a serial processor running on a single machine, not really suitable for today’s distributed execution requirements. The main concession to parallelism was the addition of “-j” to parallelize the building of dependencies. What was really needed was a model-based approach where we could provide answers to the following questions: what, when, where, how and why.

So now we come to the world of today, where software is no longer shackled to a workstation or a server, but is potentially a small cog in a large system. And more than that: it is a platform for commerce in the modern world. It’s not just developers and IT folks who care about having stuff built; it’s everyone who uses a service. Many of the problems we are looking to solve can be couched in the model of a deployment of some kind.
Whether it is in-house software (“devops”), purchased off-the-shelf software (say, “desktop”) or even batch jobs in HPC clusters, all of these typically pass through a test phase before being deployed onto some infrastructure container, such as a server, process group, or even embedded device. Alas, the technologies we’ve invented are still very primitive. If we look back at the history of logic, it grew out of the need to hit objects with projectiles in warfare. Ballistics was the cultural origin of mathematics and logic in the days of Newton and Boole. Even today, we basically still try to catapult data and instructions into remote hosts using remote copies and shells. So a script is like a catapult that takes us from one decision to the next in a scripted logic. Another name for this triggered branching process is a chain reaction (an explosion). A Makefile is the opposite: a convergent process, like something sliding easily down a drain. The branching logic in a script leads to multitudes of parallel alternative worlds. When we branch in git or other version control systems, we add to this complexity. In a convergent process we are integrating possible worlds into a consistent outcome. This is the enabler for continuous delivery. So developers might feel as though they have their triggered deployments under control, but are they really? No matter, we can go from this… to this… This picture illustrates for me the meaning of true automation. No one has to push a button to get a response. The response is proactive and distributed into the very fabric of the design, not like an add-on. The picture contrasts how we go from manual labour, to assisted manual labour, to a proper redesign of process. Automation that still needs humans to operate it is not automation; it is a crane or a power-suit.
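The script-versus-drain contrast can be seen in even a trivial Makefile: you declare the end state (the target) and its dependencies, and make works out which transitions are needed from whatever state it actually finds. The file names here are purely illustrative.

```makefile
# Declarative: describe the desired end state, not the steps from a
# known starting point. Run as `make`; `make -j2` builds independent
# prerequisites in parallel, make's one concession to parallelism.
app: main.o util.o
	cc -o app main.o util.o

%.o: %.c
	cc -c $< -o $@
```

Delete `util.o` or touch `util.c`, and make rebuilds only what is needed to converge on `app` again; a script would blindly replay every step.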
CFEngine’s model of promises is able to answer all of the questions what, when, where, how and why, at a basic level and has been carefully designed to have the kind of desired-end-state self-healing properties of a drain. Every CFEngine promise is a controlled implosion that leaves a desired end-state. Today, configuration promises have to be supported across many different scales, from the smallest containers like a user identity, to processes, process groups, virtual and physical machines, local networks, organizational namespaces and even globally spanning administrative domains. How do we do that? The simple answer is that we always do it “from within” – through autonomous agents that collaborate and take responsibility for keeping desired-end-state promises at all levels. Traditionally, we think of management of boxes: server boxes, rack boxes, routing boxes, etc. We can certainly put an agent inside every one of those processing entities… But we also need to be able to address abstract containers, labelled by the properties we use in our models of intent – business purpose. These are things like: linux, smartos, webservers, storage devices, and so on. They describe the functional roles in a story about the business purpose of our system. This brings up an important issue: how we tell stories. Despite what we are taught in software engineering, there is not only one version of reality when it comes to computer systems. There is the story: