Securing Your Online Presence

Why and Where You Should Plant Your Flag presents a set of places where it would be wise to make sure you have your “online identity” established, in order to prevent fraud artists from impersonating you.

The list is USA-centric, and, as a Canadian, my set of things worth “planting” is slightly different; nevertheless, many are relevant here. A more distinctly “Canada-relevant” set of places would be:

  • Canada Revenue Agency – CRA My Account
  • Credit Bureaus – Equifax and TransUnion. Note that Canadians are allowed to request, from TransUnion, a free consumer credit report on themselves once per month. Requesting a credit report isn’t exactly a “placing of a flag”; it is, however, a powerful tool for verifying that there aren’t any extra flags lurking out there with your name on them.
  • Online accounts for your bank(s)
  • Provincial government (e.g. – for drivers licenses and similar)
  • Utility accounts (power, water, as apropos)
  • Phone company
  • ISP
  • Email service

The notion here is that, for any of these for which you can possibly have an online account, you should set it up, and secure it as well as you can, with such things as:

  • Good passwords, securely recorded (e.g. – randomly generated and stored with tools like KeePass, 1Password, and such)
  • If multi-factor authentication is available, it is way better to have it than not

The purpose of “planting your flag” is to prevent someone else from surreptitiously taking that treated-as-unique piece of online presence, pretending to be you, and thereby giving themselves a back door into your finances.

The sort of situation where this is especially troublesome is that of seniors who never became “computer literate”, never bothered with these sorts of online accounts, and therefore have no online footprint. Unfortunately, such people are very attractive to scam artists, who can probably search out enough information on the web to guess their way past the old “Mom’s maiden name” style of authentication, and then initiate fraudulent activity.

I’ll note that I was pretty impressed with the CRA process, which included an exchange of secrets before they sent a secret key to the address indicated on past tax returns. I imagine that for someone who moves regularly, there could be some inconvenience in proving your identity, but I have been sufficiently stationary that their process worked well for me, and seemed pretty secure. However, where people chose terrible passwords, this apparently led to thousands of cracked CRA accounts in August 2020.

At the bank, fraudulent activity might involve transferring funds away, or establishing an unexpected mortgage. At CRA, it might enable redirecting a tax refund, or initiating a COVID-19 assistance payment, directed to someone else’s bank account. The sets of possible frauds are, alas, decently large.

CFEngine Alternatives

I have been using CFEngine 2 (which is substantially different from version 3) for a great many years to manage various aspects of my home system environments, making use of such things as:

  • Copying files, to do simplistic backups where that works
  • Editing files to have particular content such as SSH keys, cron jobs
  • Restarting processes that I want to keep running (syncthing, dropbox, …)
  • Running shell commands on particular hosts
    • To run backups
    • To run cleanup jobs
  • Setting up symlinks to configuration files, so that I have authoritative configuration in a git repository, and then rc files in $HOME or $HOME/.config or such reference them
  • Ensuring ssh keys have appropriately non-revelatory permissions
  • Making sure new servers have my set of favorite directories
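A few of the tasks above are simple enough that plain shell can approximate them; here is a hypothetical sketch, where the repo path, file names, and directory list are all invented for illustration:

```shell
#!/bin/sh
# Sketch of a few of the CFEngine-style tasks above, in plain shell.
REPO="$HOME/cfg"                     # assumed git repo with authoritative config
mkdir -p "$REPO"

# Making sure a new machine has my favourite directories
for d in "$HOME/bin" "$HOME/tmp" "$HOME/projects"; do
    mkdir -p "$d"
done

# Symlinking rc files out of the repo into $HOME
touch "$REPO/bashrc"                 # stand-in for a real tracked file
ln -sfn "$REPO/bashrc" "$HOME/.bashrc"

# Ensuring ssh keys have non-revelatory permissions
if [ -d "$HOME/.ssh" ]; then
    chmod 700 "$HOME/.ssh"
    chmod 600 "$HOME/.ssh"/id_* 2>/dev/null || true
fi
```

The CFEngine versions of these rules are, of course, declarative rather than imperative, which is a big part of their appeal.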

I had used cfengine2 to build system management tools with a “PostgreSQL flair”, where the point was to help manage database instances, doing things like:

  • Deploying PostgreSQL binaries and libraries (our custom builds included Slony-I, for instance)
  • Rotating database logs
  • Building out the filesystem environment for database clusters, thus
    • Setting up needed directories for PGDATA
    • Setting up database log directories
    • Setting up symlinks for the latest binaries, alongside the above “deploying” of the binaries

Eventually, others took this over, ultimately replacing CFEngine with newer tools like Puppet and Ansible, so these uses fell out of my hands.

I never made the migration from CFEngine 2 to CFEngine 3; the latter is apparently a fair bit more featureful, but I was unhappy with the authors’ decision that decently trackable logging should be a proprietary, extra-price extension.

Perhaps ten years later, now, I’m finding that builds of cfengine2 are getting sparse in Linux package management systems.

I started looking around at the sorts of systems that are considered successors to CFEngine. My encounters with Puppet have left me with no desire to take it on for systems I’m operating for myself; it seems slow-running and tedious. The short list of plausible alternatives I found most interesting consisted of Ansible and SaltStack. But as I started poking further, I found that none of these actually reflected the ways in which I have been using CFEngine.

Systems like Puppet, Ansible, and SaltStack are intended for deploying services and applications, along with their configuration. That’s largely not what I’m doing. (Perhaps I should be looking at it more that way, but it certainly hasn’t been…)

It looks like none of these are what I’m needing for my usual use cases. I am doing some replacements with more modern bits of technology, but with only partial migration away from CFEngine2.


The situations where I was having CFEngine launch, and keep running, certain processes look, these days, like a job for systemd. I am not especially a lover of systemd, but nor am I one of the haters. I am unhappy with the steady scope creep it seems to undergo, but I do like the way that Unit files provide a declarative way of describing services, their semantics, and their relationships.

For the various services that I want operating, I have set up systemd user unit files. This has led to more CFEngine2 configuration, curiously enough:

  • I create Unit files for services in my favorite Git repo that manages my configuration
  • Configuration files for the service reside in that repo, too.
  • I added CFEngine link rules to point $(HOME)/.config/systemd/user/$SERVICE.service at the unit file in my git repo, and, typically, more to point entries under $(HOME)/.config at the configuration for the service
  • I added CFEngine process rules that check for service processes that should be running, and run /bin/systemctl --user start $SERVICE if they are not running

It means there are a few more CFEngine rules, but of basically just two sorts:

  • Process rules, to manage the service process (and it’s using systemd tooling, which is pretty “native,” no horrendous hackishness), and
  • Link rules, to link files in the Git repo into the places where they need to be deployed.
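In shell terms, the wiring amounts to something like the following sketch; the repo path, the unit file contents, and the choice of syncthing as the example service are all illustrative rather than my exact configuration:

```shell
#!/bin/sh
# Hypothetical wiring of one systemd user unit, kept in a git repo.
REPO="$HOME/cfg"
mkdir -p "$REPO/systemd" "$HOME/.config/systemd/user"

# The unit file lives in the repo (contents invented for illustration)
cat > "$REPO/systemd/syncthing.service" <<'EOF'
[Unit]
Description=Syncthing - continuous file synchronization

[Service]
ExecStart=/usr/bin/syncthing -no-browser
Restart=on-failure

[Install]
WantedBy=default.target
EOF

# Link rule equivalent: systemd sees a symlink into the repo
ln -sfn "$REPO/systemd/syncthing.service" \
        "$HOME/.config/systemd/user/syncthing.service"

# Process rule equivalent: start the service if it is not already running
if command -v systemctl >/dev/null 2>&1; then
    systemctl --user is-active --quiet syncthing ||
        systemctl --user start syncthing || true
fi
```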


A lot of what I now have left in CFEngine is a set of rules for establishing symlinks.

There has been an outgrowth of tools for doing this sort of thing, or, more precisely, tools for managing “dotfiles”. The awesome-dotfiles repository links to numerous tools that have been established to help with this.

There are two that elicited the most interest from me:

  • dot-templater, a Rust-based tool with a system for customizing which files (and content) are exposed on each system
  • chezmoi, a more sophisticated system that has a “chezmoi” command for interactively attaching dotfiles to one’s configuration repository

Sadly, they are all so much more sophisticated than symlinks that it has, thus far, seemed simpler just to add a few more link entries to my main CFEngine script.

The direction I am thinking of is to take my “hive” of CFEngine link lines, which, in truth, are decently terse and declarative, and write a little shell-based parser that can read and apply them. Actually, there are several approaches:

  • Read the link rules, and directly apply them
  • Read the link rules, and generate commands for one or another of the “dotfile manager” tools to put the files under management
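A minimal sketch of the first approach might look like this, assuming the link rules have been boiled down to a flat two-column format (the rule format and all of the paths here are my invention for illustration, not CFEngine syntax):

```shell
#!/bin/sh
# Sketch: read symlink rules from a flat file and apply them.
RULES="$HOME/cfg/links.rules"
mkdir -p "$HOME/cfg"

# A made-up rules file: <link name>  <target in the git repo>
cat > "$RULES" <<EOF
# link                    target
$HOME/.gitconfig          $HOME/cfg/gitconfig
$HOME/.config/tmux.conf   $HOME/cfg/tmux.conf
EOF

while read -r link target; do
    # skip blank lines and comments
    case "$link" in ''|'#'*) continue ;; esac
    mkdir -p "$(dirname "$link")"    # parent directory must exist
    ln -sfn "$target" "$link"        # -n replaces an existing symlink
    echo "linked $link -> $target"
done < "$RULES"
```

The second approach would substitute, for the `ln -sfn`, emitting the corresponding command for chezmoi or a similar dotfile manager.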

Cron Jobs

My use of CFEngine has gone through various bits of evolution over time.

  • Originally, I set up shellcommand rules to run interesting processes periodically, so that my crontab would run cfengine some number of times per hour, and the shellcommand rules would invoke the processes.
    This is well and fine, but means that there are two sources of truth as to what is running, namely what is in the crontab, and what is in my cfengine script. Two sources of truth is not particularly fun.
  • As a part of the “Managing Database Servers” thing, years back, I had recognized that the above was not nice, and so wrote up a script that would capture one’s crontab into a file in a specific place, complete with history. It would check the current crontab against the previous version, capturing a new version any time there was a change. This is an output-only approach, but nevertheless very useful for tracking the history of a crontab over time.
    I had never applied this at home.
  • I determined that I needed to fix my “two sources of truth” problem, so took measures to Do Better.
    • A first step was to capture, on each host, the current contents of users’ crontabs, and, better than before, to capture this in versioned fashion into a Git repository. This provides the history that Managing Database Servers had done, but better, as it now resides, version-controlled, in Git.
      pushd "${CRONHOME}"
      echo "Saving crontab to ${CRONTABOUTPUT}"
      crontab -l > "${CRONTABOUTPUT}"
      git add "${CRONTABOUTPUT}"
      # commit only when the crontab actually changed
      git diff --cached --quiet "${CRONTABOUTPUT}" ||
          git commit -m "Saving crontab for user ${USERNAME} on host ${HOST}" "${CRONTABOUTPUT}"
      popd
  • The new, still better step was to use editfiles to compute what I wanted to have in my crontabs. This would construct new files, $(CRONTABS)/$(hostname).$(username).wanted
    consisting of everything that my CFEngine script decided ought to be running on this host, for this user. Thus, the CFEngine script represents the Single Point Of Truth as to what is supposed to be in my crontab.
    I ran this, and in the interest of some lack of trust ;-), did not immediately automate application of this as a new crontab.
    • I did a nice manual run across each of my hosts, comparing the dumped crontab output with what is thought wanted, namely $(CRONTABS)/$(hostname).$(username).wanted
    • There were discrepancies (and since it wasn’t automatically applied, no consternation!), so some modifications were done to rectify shortcomings
    • When I concluded that everything matched my desires, it’s apropos to run crontab against $(CRONTABS)/$(hostname).$(username).wanted so that this is automatically applied
    • Now there is a clear division of authority:
      • The captured-in-git history files document actual states of crontab over time
      • If I want to add or remove jobs, that takes place by modifying the CFEngine code to add/remove editfiles rules.
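The compare-then-apply step above can be sketched in shell; the layout mirrors the $(CRONTABS)/$(hostname).$(username).wanted convention, but the paths and the sample cron entry are invented:

```shell
#!/bin/sh
# Sketch: compare the live crontab against the CFEngine-generated
# "wanted" file before applying it.
CRONTABS="$HOME/cfg/crontabs"
WANTED="$CRONTABS/$(uname -n).$(id -un).wanted"
mkdir -p "$CRONTABS"

# Pretend CFEngine's editfiles rules generated this "wanted" crontab:
cat > "$WANTED" <<'EOF'
17 3 * * * $HOME/bin/backup.sh
EOF

# Dump the live crontab (empty if none) and compare before applying
crontab -l 2>/dev/null > /tmp/crontab.actual || true
if diff -u /tmp/crontab.actual "$WANTED" >/dev/null; then
    echo "crontab already matches"
else
    echo "discrepancy; review, then apply with: crontab $WANTED"
fi
```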

This is not exactly a “migration away from CFEngine”, but it does make for a way better controlled set of cron jobs.

I am frankly not sure what would be much better. I have looked into cron alternatives both small and large. At one point, we did a Proof of Concept at work looking at Dollar Universe (now a Computer Associates product), at the really sophisticated end. That would, for personal use, be ridiculous overkill, but there are places where it is going to be a good choice.

Cron has a number of weaknesses:

  • Not very easily auditable
  • Not good at handling “flow control” where a system may be getting overloaded by the set of cron jobs getting invoked
  • No in-system awareness of jobs that should be mutually exclusive or that should be ordered. (“Don’t run A and B simultaneously; make sure to only run B after having run A”)

Nevertheless, for small-ish tasks where exact timing isn’t too critical, and where conflicts may be addressed by running jobs in separate hours of the day, it isn’t worth taking on a job scheduling system that is way more complex to manage and heavier-weight to run.
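The mutual-exclusion gap, at least, has a cheap workaround via util-linux flock(1); a sketch, with job names and the lock path invented:

```shell
#!/bin/sh
# Serialize cron jobs with a lock file so they never overlap.
# In a crontab, the same wrapper appears on each conflicting entry:
#   0 2 * * * flock /tmp/nightly.lock /home/me/bin/job_a.sh
#   0 3 * * * flock /tmp/nightly.lock /home/me/bin/job_b.sh
LOCK=/tmp/nightly.lock

# Job B waits until job A has released the lock
flock "$LOCK" sh -c 'echo "job A ran alone"'
flock "$LOCK" sh -c 'echo "job B ran alone"'

# With -n, a job skips (instead of waiting) if the lock is busy
flock -n "$LOCK" sh -c 'echo "job C ran"' || echo "job C skipped"
```

This does nothing for ordering or flow control in general, but it does cover the “don’t run A and B simultaneously” case without any new infrastructure.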

One would-be alternative to cron that looks somewhat interesting is pg_timetable which has its data store backed by PostgreSQL and which has a notion of “task chains.”

At one point, I did a bit of work creating a “pg_cron,” which had loosely similar requirements. It never reached the point of working; the place where I was pointedly short on answers was on how to establish the working environment for tasks. The environment needs to be “portable” in a number of ways; you’d want to be able to control tasks running on remote hosts, too. David Tilbrook’s QEF environment seemed to have relevance; it had ways of managing the launching of work agents with tight control over the environment they would receive. Unfortunately, time just hasn’t permitted experimenting more deeply with that.

Happy 2020

It sure has been a while since the last time I did up a blog entry…

A thing for 2020 is to do so slightly more frequently, perhaps somewhat systematically. I suppose I’m one of the exceedingly independent “non-herdable cats” of the movement. I’m not especially following anyone else; just following the loose principles that…

  • I should generate my own content
  • On my own web site
  • Hosted on my own domain

Rather than depending on the vagaries of others’ platforms. If you’re depending on Google Plus to publicize your material, oops, it’s gone! And the same is true for other platforms like Facebook or centralized “syndication” systems.

I won’t be getting rich by having someone’s ads on my site, but, again, that’s not a stable source of monies for much the same reasons suggested about material syndication.

The above is all pretty “meta”, and shouldn’t interest people terribly much. What I probably ought to be writing about that might be somewhat interesting would be about things like the following:

  • I have been fooling around with TaskWarrior, a somewhat decentralized ToDo/Task manager, which is allowing me to track all sorts of things I ought to be doing.
    The interesting bit of this is that I’m capturing a whole lot of “things to research”, which tends to point at software I probably ought to consider using, adapting, or, just as likely, ignoring, due to it not being interesting enough.
  • My web site nearby is managed using SGML/DocBook, which is a toolset that is getting increasingly creaky. I’d quite like to switch to another data format that is easier to work with. Some ideas include OrgMode and TeXinfo. I did some poking around to try to find tools to convert DocBook into such; the tools seem to only be suitable for reasonably small documents, and I have 122K lines of SGML, which makes that choke…
  • I have been fooling around with Oh shell (it’s written in Go, and essentially implements Scheme behind the scenes) as a possibly better shell. I’m trying to collect better thoughts as to why that might be a good idea. (I’m not sure Oh is the right shell though)
  • My cfengine 2 configuration management scripts are getting mighty creaky. Initial research focusing on SaltStack and Ansible showed off that those sorts of tools are totally not suitable to the problem I am solving, which is that of managing configuration (e.g. – dotfiles) and the differences needed in differing environments (e.g. – home versus work, servers versus laptop)
  • I’m poking at using Tmux more extensively. I started using GNU Screen in the early 20-noughts, and switched to somewhat simpler to manage tmux a few years ago. There are now tools like tmuxinator for managing sophisticated tmux-based environments, and it looks like that could be quite useful.
  • The big “work” thing I have been gradually chipping away at is Kubernetes. I tend to build batch processes, so this functions quite differently from the usual documented use cases.
  • Apparently I should look at some “scrum” tools for task boards; some searching found a bunch of tools where research tasks are queued up in TaskWarrior to get dropped on me at some point…
  • I need to revisit my EmacsConf 2019 notes to see what sorts of things are worth poking at more.

Spamalicious times

Hmmph. Google sent me a “nastygram” indicating that one of my blog entries had something suggestive of content injection.

I poked around, and it was by no means evident that this was really so. The one suspicious posting legitimately has some stuff that looks like labels, as it contains a bunch of sample SQL code. I’m suspicious that they’re counting that as evil…

But it pointed me at a couple of mostly-irritating things…

  1. I haven’t generated a blog entry since 2013. Well, I’m not actually hugely worried about that.
  2. I reviewed proposed response posts going back to, probably, about 2013. Wow, oh wow, was that ever spam-filled. Literally several thousand attempts to get me to publish various and sundry advertising links. It was seriously a pain to get rid of them all, as I could only trim out about 150 at a time. And hopefully there weren’t many “real” proposed postings; it’s almost certain I’ll have thrown those away. (Of course, proposed postings about things I said in 2013… how relevant could they still be?)

Why do Macs get all the cool cases?

I recently had pointed out to me the BookBook case, specifically designed for MacBooks, which is a leather case that makes one’s MacBook look like a vintage piece of literature.

Well… A “distressed leather-covered book.”

Well… Perhaps it makes one look like an arrogant yuppie :-).  And maybe the word “distressed” ought to get applied to other things than just the cover of the book :-).

In any case, I took a somewhat similar approach in building a Moleskine iPod case for my iPod Touch. (Not using a real Moleskine, but rather one of the Moleskine-like notebooks that Google has been giving out scads of at conferences like PG East and PGCon over the last few years.)

Ironically, one of the stated selling points of the BookBook is that

Being individual and different is what Macs are all about…

Of course, if a lot of people buy this, then they’re specifically not being “individual and different.”  The more people that buy it, the less “individual and different” it gets.

I’d still like one, though not necessarily specifically for a MacBook.  There is likely the rub…

On the one hand, it strikes me as something of a “commercial loss” for the vendor not to sell it in sizes suited for other sorts of laptops.

On the other hand, they are a little pricey ($80 USD), and I suppose that there may not be a huge market for overpriced “too cool for school” cases for other sorts of laptops.  There’s some indication that MacFans are willing to buy up all sorts of ludicrously overpriced merchandise that others mayn’t be so willing to overpay for…

Awesome TShirt Idea

ThinkGeek carries lots of geeky T-shirts.

I am wearing the “No, I will not fix your computer” shirt right now!

One that they ought to have is the IMHO one, perhaps with IMNSHO on the back.

That needs more development but seems like it ought to be able to become a subtly insulting product :-).

Blogging from my iPod

I wonder how much more often I will post reflections if I have software that makes the process more portable.

Hopefully portability cuts down on the cost of deciding to reflect. We shall see…

Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start blogging!