Posts tagged: developers

Extending autorun

What’s autorun? Autorun is a feature of the Masterfiles Policy Framework (MPF) that simplifies the process of adding and executing new policy. We have talked about Modular policies with autorun and the Augments before. This time, we dig into autorun a bit deeper to explore some of its current features and look at how to implement your own, as we did during The agent is in, Episode 15 - Extending autorun. Note: all paths, unless otherwise noted, are relative to the root of your policy set (typically /var/cfengine/masterfiles, the distribution point), and cf-agent and other commands are run as the root user.

Posted by Nick Anderson
August 11, 2022

Processes, forks and executions - part 2

This is the second blog post in a short series about processes on UNIX-like systems. It is a follow-up to the previous post, which focused on basic definitions, the creation of processes, and the relations between them. This time we analyze the semantics of two closely related system calls that play major roles in process creation and program execution: fork() and exec(). UNIX-based operating systems provide the fork() system call to create a clone of an existing process and the execve() system call to start executing a program in a process. Windows, on the other hand, provides the CreateProcess() function, which starts a given program in a newly created process. Why are UNIX-based systems doing things in a more complicated way? There are many reasons for that, some simply historical, as described in The Evolution of the Unix Time-sharing System:
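As a rough illustration of the two-step model described above (a sketch, not code from the post; Python's os module is used here only because it exposes the underlying system calls directly), a parent process forks a child, the child replaces its program image with exec, and the parent waits for it:

```python
import os
import sys

# Hypothetical sketch of the UNIX fork()/exec() pair.
pid = os.fork()              # clone the current process
if pid == 0:
    # Child: replace this process image with the 'echo' program.
    os.execvp("echo", ["echo", "hello from the child process"])
    sys.exit(1)              # only reached if execvp() fails
else:
    # Parent: wait for the child and report how it exited.
    _, status = os.waitpid(pid, 0)
    print("child %d exited with status %d" % (pid, os.WEXITSTATUS(status)))
```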

July 28, 2022

Processes, forks and executions - part 1

While working on the integration of CFEngine Build into Mission Portal, we came to the point where we needed to start executing separate tools from our recently added daemon, cf-reactor. Although it may seem like nothing special, knowing a bit about the specifics of process creation and program execution (and having fought some really hard-to-solve bugs in the past), we spent a lot of time and effort on this step. Now we want to share the story and the results of that effort. But since understanding the reasons behind the work, and how the implementation works, requires fairly deep knowledge of how processes are created and programs are started on UNIX-like systems, we first start with a series of blog posts focused on this seemingly simple area. They cover the basics as well as some advanced topics in two parts:

July 26, 2022

Synchronize data between PostgreSQL and files

Databases are great for data processing and storage. However, in many cases it is better or easier to work with data in files on a file system; some tools cannot even access the data in any other way. When a database (DB) is created in a database management system (DBMS) using a file system as its data storage, it of course uses files on the given file system to store the data. But working with those files outside of the DBMS, even for read-only access to the data stored in the DB, is practically impossible. So what can be done if a setup requires data in files while, at the same time, the data processing and storage requires the use of a DB(MS)? The answer is synchronization between two storage places – a DB and files. It can either go from the DB to the files, where the files are then treated as read-only by the parties working with the data, or the other way, with modifications of the files being synchronized to the DB. In the former setup, the DB is the single source of truth – the data in the files may be out of sync, but the DB has the up-to-date version. In the latter setup, the DB provides a backup or an alternative read-only access to the data that is primarily stored in the files; seen the other way around, the files provide an alternative write-only access to the DB. A two-way synchronization, and thus a combination of read and write access in both places, the DB and the files, should be avoided because it is very hard (one could even say impossible) to properly implement mechanisms ensuring data consistency, both between the two storage places and within each of them alone.
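As a minimal sketch of the first setup, the DB-to-files direction with the DB as the single source of truth, one could periodically export a table to a file. This is not code from the post; the table name, file path, and connection string are hypothetical, and the psycopg2 client library is assumed:

```python
import psycopg2  # assumed PostgreSQL client library

# One-way synchronization sketch: the database is the single source of
# truth, and the file is a read-only export for tools that need files.
DSN = "dbname=example user=example"        # hypothetical connection string
OUTPUT_FILE = "/var/tmp/hosts_export.csv"  # hypothetical target file

with psycopg2.connect(DSN) as conn:
    with conn.cursor() as cur, open(OUTPUT_FILE, "w") as f:
        # COPY streams the current contents of the (hypothetical) table
        # straight into the file; consumers treat the file as read-only.
        cur.copy_expert("COPY hosts TO STDOUT WITH CSV HEADER", f)
```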

April 6, 2022

Trigger arbitrary code from PostgreSQL

In this blog post we show how it is possible to run an arbitrary program or script, or execute arbitrary code, in reaction to changes and, more generally, events in a PostgreSQL database. Database management systems (DBMS) provide mechanisms for defining reactions to certain actions or, in other words, for defining that specific actions should trigger specific reactions. PostgreSQL, the DBMS used by CFEngine Enterprise, is no exception. These triggers can be used for ensuring consistency between tables when changes in one table should be reflected in another table, for recording information about actions, and for many other things. PostgreSQL's Overview of Trigger Behavior describes the basics of triggers with the following sentences:
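One common way to wire this up (assumed here for illustration, not necessarily the exact approach taken in the post) is to have a trigger function call pg_notify() and let an external process LISTEN on that channel and react. A minimal sketch of the listener side, with a hypothetical channel name and connection string, using psycopg2:

```python
import select
import psycopg2  # assumed PostgreSQL client library

# Listener sketch: a trigger in the database calls
# pg_notify('table_changed', <payload>) and this external process
# reacts to each notification by running whatever code is needed.
DSN = "dbname=example user=example"  # hypothetical connection string

conn = psycopg2.connect(DSN)
conn.autocommit = True               # LISTEN/NOTIFY works outside transactions
cur = conn.cursor()
cur.execute("LISTEN table_changed;")

while True:
    # Wait (up to 60 s) until the connection has data to read.
    if select.select([conn], [], [], 60) == ([], [], []):
        continue                     # timeout, just keep waiting
    conn.poll()
    while conn.notifies:
        notify = conn.notifies.pop(0)
        # This is the place to start an arbitrary program or script.
        print("change on channel %s: %s" % (notify.channel, notify.payload))
```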

March 31, 2022

Show notes: The agent is in - Episode 10 - Event-driven CFEngine

Interested in the efforts underway to make CFEngine manage the environment even faster? Vratislav (Software Engineer) joins the show to talk about cf-reactor. The video recording is available on YouTube. At the end of every webinar, we stop the recording for a nice and relaxed, off-the-record chat with attendees. Join the next webinar so you don't miss this discussion.

Posted by Nick Anderson
February 24, 2022

CFEngine bootstrap with Ansible

CFEngine and Ansible are two complementary infrastructure management tools. Our analysis shows that they can be combined and used side by side, with each tool handling the areas where it works best. Part of infrastructure management is host deployment, either when building a brand new infrastructure or when growing one by adding new hosts. This is something Ansible truly excels at: it makes it very easy to run a sequence of steps on all hosts to initialize (deploy) them, and it only requires SSH access to the hosts and Python installed on them.

February 3, 2022

Static checking of CFEngine code

Software quality has been a topic and an area of interest since the dawn of software itself, and as software evolved, so did the techniques and approaches to assuring its high quality. Better computers providing more computing power, bigger storage, and faster communication have allowed software developers to detect issues in their code sooner and faster. We went from getting a syntax error after two days of waiting for a box of punch cards to go through the queue and get loaded into a computer running a compiler, to getting such errors from a compiler in seconds or even in real time from the code editor. And we went from bugs being detected by actually seeing real bugs on punch cards with machine instructions, to operating systems providing bug reports with coredumps, tracebacks, and lots of information helping developers identify the problem, to tests detecting problems before the code gets into production, and to compilers and tooling detecting them before the code is even executed for the first time. We can afford to do things like fuzz testing, and we have enough computational power for compilers and special tools to analyze the code, check all possible paths through it, and much more. At the same time, software has become a part of almost everything we use or interact with every day, and so, with the incomparably greater amount of software potentially affecting our lives, there is an incomparably greater amount of bugs that need to be detected and fixed, or at least handled gracefully. Some software is more critical than other software, and bugs range from minor annoyances to losses of human lives. Many things have changed in this evolution, but one rule has always been key:

December 9, 2021

Announcing CFEngine Build

Earlier this year, we hinted at what we were working on - a place for users to find and share reusable modules for CFEngine. Today, the CFEngine team is pleased to announce the launch of CFEngine Build. The new website, build.cfengine.com, allows you to browse for modules and gives you information about how to use each one of them. When you’ve found the module you are looking for, you can download and build it using the command line tooling.

November 1, 2021

Show notes: The agent is in - Episode 6 - Running CFEngine on IoT (Part 2)

Still interested in running CFEngine on IoT? Craig (Digger) shows how to build CFEngine Enterprise for Yocto and deploys a Raspberry Pi Zero with a sensor to measure the height of Nick’s (Doer of Things) desk. The video recording is available on YouTube. At the end of every webinar, we stop the recording for a nice and relaxed, off-the-record chat with attendees. Join the next webinar so you don't miss this discussion.

Posted by Nick Anderson
October 28, 2021