Editing and copying large files, or large numbers of files, is slow. For a configuration management tool, it is probably one of the slowest things we do, apart from waiting for other programs to finish or waiting for network communication. In this blog post, we look at how to copy files, focusing on the most performant approaches available on modern Linux systems. We are working on implementing these techniques so that CFEngine, and all your policy, will copy files more efficiently.
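As an illustration of the kind of technique involved (a minimal sketch, assuming a modern Linux kernel and glibc; this is not CFEngine's actual implementation and the file names are placeholders), the copy_file_range(2) system call asks the kernel to copy data between two file descriptors without bouncing it through a user-space buffer:

#define _GNU_SOURCE          /* for copy_file_range() (glibc 2.27+, Linux 4.5+) */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    /* Sketch: copy source.bin to dest.bin; the names are placeholders */
    int in = open("source.bin", O_RDONLY);
    if (in < 0) { perror("open source"); return 1; }

    struct stat st;
    if (fstat(in, &st) < 0) { perror("fstat"); return 1; }

    int out = open("dest.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { perror("open dest"); return 1; }

    off_t remaining = st.st_size;
    while (remaining > 0) {
        /* The kernel moves the data directly between the descriptors; on some
           file systems (e.g. XFS, Btrfs) it can share blocks instead of copying */
        ssize_t copied = copy_file_range(in, NULL, out, NULL, (size_t) remaining, 0);
        if (copied < 0) { perror("copy_file_range"); return 1; }
        if (copied == 0) { break; }
        remaining -= copied;
    }

    close(in);
    close(out);
    return 0;
}

Compared to a plain read()/write() loop, this avoids copying every byte in and out of user space, which is one reason it can be significantly faster for large files.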
To manage large infrastructures, efficient solutions for both making changes and observing the current state are necessary. As most information (inventory) about hosts is quite predictable and static, there are many opportunities for optimization in terms of compression and avoiding re-transmission of the same data. In the CFEngine team, we are improving our reporting systems with a focus on correctness and low bandwidth consumption. This will benefit many users, both large data centers where bandwidth (networking equipment) is costly and small IoT devices with limited connectivity. Inspired by git, we are implementing commits of reporting data, with table-based diffs and compression of multiple changes, akin to squashing git commits.
This is the second blog post in a short series about processes on UNIX-like systems. It is a follow-up to the previous post, which focused on basic definitions, the creation of processes, and the relations between them. This time we analyze the semantics of two closely related system calls that play major roles in process creation and program execution.
fork() and exec()
UNIX-based operating systems provide the fork() system call to create a clone of an existing process and the execve() system call to start executing a program in a process. Windows, on the other hand, provides the CreateProcess() function, which starts a given program in a newly created process. Why do UNIX-based systems do things in a more complicated way? There are many reasons for that, some simply historical, as described in The Evolution of the Unix Time-sharing System.
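As a minimal sketch of this two-step pattern (the program being launched, /bin/ls, is only an illustration), the parent clones itself with fork() and the child then replaces its program image with execve():

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                  /* clone the current process */
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {
        /* Child: replace this process image with a new program */
        char *const argv[] = { "ls", "-l", NULL };
        char *const envp[] = { NULL };
        execve("/bin/ls", argv, envp);
        perror("execve");                /* only reached if execve() fails */
        _exit(127);
    }

    /* Parent: wait for the child to finish */
    int status;
    waitpid(pid, &status, 0);
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}

The split gives the child a chance to adjust its environment (file descriptors, working directory, signal handling, and so on) between fork() and execve(), flexibility that CreateProcess() instead exposes through its many parameters.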
While working on the integration of CFEngine Build into Mission Portal, we reached the point where we needed to execute separate tools from our recently added daemon, cf-reactor. Although it may seem like nothing special, knowing a bit about the specifics of process creation and program execution (and having fought some really hard-to-solve bugs in the past), we spent a lot of time and effort on this step. Now we want to share the story and the results of that effort. However, since understanding the reasons behind the work, and how the implementation works, requires quite deep knowledge of how processes are created and programs are started on UNIX-like systems, we start with a series of blog posts focused on this seemingly simple area. They cover the basics as well as some advanced topics in two parts:
Databases are great for data processing and storage. However, in many cases it is better or easier to work with data in files on a file system; some tools cannot even access the data in any other way. When a database (DB) is created in a database management system (DBMS) using a file system as its data storage, it of course uses files on the given file system to store the data. But working with those files outside of the DBMS, even for read-only access to the data stored in the DB, is practically impossible. So what can be done if some setup requires data in files while, at the same time, the data processing and storage requires the use of a DB(MS)? The answer is synchronization between the two storage places – a DB and files. It can either go from the DB to the files, where the files are then treated as read-only by the parties working with the data, or modifications of the files can be synchronized to the DB. In the former setup, the DB is the single source of truth – the data in the files may be out of sync, but the DB has the up-to-date version. In the latter setup, the DB provides a backup or an alternative read-only access to the data that is primarily stored in the files, or the files provide an alternative write-only access to the DB. A two-way synchronization, and thus a combination of read and write access in both places, the DB and the files, should be avoided because it is very hard (one could even say impossible) to properly implement mechanisms ensuring data consistency – both between the two storages and within each of them alone.
In this blog post we show how it is possible to run an arbitrary program or script, or execute arbitrary code, in reaction to changes, and events in general, in a PostgreSQL database.
Triggers
Database management systems (DBMS) provide mechanisms for defining reactions to certain actions or, in other words, for defining that specific actions should trigger specific reactions. PostgreSQL, the DBMS used by CFEngine Enterprise, is no exception. These triggers can be used to ensure consistency between tables when changes in one table should be reflected in another, to record information about actions, and for many other things. PostgreSQL's Overview of Trigger Behavior describes the basics of triggers.
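One common pattern for this (a minimal sketch, not necessarily the exact mechanism used by CFEngine Enterprise; the channel name and connection string below are made up) is to have a trigger function call pg_notify() and to have an external program listen for those notifications, for example with libpq:

/* Build with: cc listener.c -lpq */
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

int main(void)
{
    /* Connection string is an assumption; adjust for your setup */
    PGconn *conn = PQconnectdb("dbname=mydb");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Subscribe to the channel the (hypothetical) trigger notifies on */
    PGresult *res = PQexec(conn, "LISTEN table_changed");
    if (PQresultStatus(res) != PGRES_COMMAND_OK) {
        fprintf(stderr, "LISTEN failed: %s", PQerrorMessage(conn));
        return 1;
    }
    PQclear(res);

    for (;;) {
        /* Block until the connection's socket becomes readable */
        int sock = PQsocket(conn);
        fd_set input;
        FD_ZERO(&input);
        FD_SET(sock, &input);
        if (select(sock + 1, &input, NULL, NULL, NULL) < 0) {
            perror("select");
            break;
        }
        PQconsumeInput(conn);

        PGnotify *notify;
        while ((notify = PQnotifies(conn)) != NULL) {
            /* React to the event here, e.g. fork()/exec() a script or tool */
            printf("notification on '%s': %s\n", notify->relname, notify->extra);
            PQfreemem(notify);
        }
    }

    PQfinish(conn);
    return 0;
}

On the database side, a trigger function (written in PL/pgSQL, for example) would call pg_notify('table_changed', ...) for the relevant INSERT, UPDATE, or DELETE operations.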
When working with CFEngine, it’s common to hear advice about separating data from policy. Separating data from policy allows for separation of concerns, delegation of responsibilities, and integration with other tooling. Each organization is different, and a strategy that works well in one environment may not work as well in a similar environment at another organization, so CFEngine aims to provide various generic ways to leverage external data. For example, Augments (def.json) is useful for setting classes and defining variables very early during agent execution; these can be applied across the entire policy, can differ based on system characteristics, and can be used for host-specific data.
Generally speaking, CFEngine and Ansible can be used to solve the same problems, but their approaches are different. In this blog post I’d like to discuss the different approaches, their consequences, some advantages of each tool, and even how to use them together.
CFEngine's autonomous agents
CFEngine works by installing and running an agent on every host of your infrastructure. It is distributed: each CFEngine agent evaluates its policy periodically and independently. The agents rely on a centralized hub for refreshing policy and for reporting. Updating the policy, enforcing it, and reporting on the results are decoupled - each of these three steps can happen with its own configuration and schedule.
Scalability is an important feature of any infrastructure management solution. Either the to-be-managed infrastructure is big already, or it is expected to grow as the business grows. Over time, more and more resources are needed for CI/CD pipelines, and more customers use the product(s). Generally, growing a business means more traffic and requests need to be handled by the infrastructure. Hence, scalability is an important metric for comparing infrastructure management tools when deciding which one, or ones, to use. Read our latest white paper, benchmarking and comparing the scalability of Ansible and CFEngine for large-scale infrastructure management:
In some performance-critical situations, it makes sense to limit management software to a single CPU (core). We can do this using systemd and cgroups. CFEngine already provides systemd units on relevant platforms; we just need to tweak them. I’m using CFEngine Enterprise 3.12 on CentOS 7, but the steps should be very similar on other platforms/versions. This post is based on an excellent article from Red Hat: https://access.redhat.com/solutions/1445073
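As a sketch of the general idea (the unit name cf-execd.service is an assumption here, so check which units your package actually provides; this is also not necessarily the exact approach from the Red Hat article), a systemd drop-in can pin a service to CPU core 0 with the CPUAffinity= directive:

# /etc/systemd/system/cf-execd.service.d/cpu.conf  (hypothetical unit name)
[Service]
CPUAffinity=0

After adding the drop-in, run systemctl daemon-reload and restart the service for the setting to take effect.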
Using ps to check what CPU core is utilized
Listing all processes and their core
We can use ps to check CPU core for desired processes:
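For example (an illustrative command, not necessarily the one from the original post; the psr column shows the processor a process was last scheduled on, and the cf- filter is just an example):

ps -eo pid,psr,comm | grep cf-    # pid, last-used core (psr), command name

This makes it easy to verify whether the CFEngine processes are confined to the core we pinned them to.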