PowerShell script to alert when an object is added or removed from an important group.

It’s Thursday at 12:27pm. You are sitting in a meeting. Meanwhile, someone has just gained access to your domain admin group and now has the keys to your company’s entire compute platform. But all is not lost, if you have the right alerting in place. Less than five minutes later, you receive an alert to say that the group membership has changed, and you can take immediate action, saving the world… and your company… from absolute disaster! That’s what this PowerShell script does, along with lots of checks. For example, let’s say someone gets in through a back door and starts messing around with systems. Well, if the script doesn’t see the right number of logs in the folder, it starts to get a bit jumpy. That results in an alert as well. The same thing happens if the previous logs can’t be read. The world gets an email.
So here you go. Have fun with this. I encourage you to add more checks. It has been expanded since I first published it to add even more validation, to make sure that the script and the infrastructure don’t change. But this will get you well on the right road.
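The full script isn’t reproduced here, but the core of the idea is easy enough to sketch. What follows is a minimal, hedged outline only: the group name, file paths, mail settings and the missing-snapshot check are placeholders of mine, not the original script.

# Minimal sketch: alert when the membership of a sensitive group changes.
# Assumes the ActiveDirectory RSAT module and an internal SMTP relay.
# The group, paths and addresses below are placeholders.
Import-Module ActiveDirectory

$Group        = 'Domain Admins'
$SnapshotFile = 'C:\GroupWatch\DomainAdmins.csv'
$MailParams   = @{
    From       = 'alerts@example.com'
    To         = 'secteam@example.com'
    SmtpServer = 'smtp.example.com'
}

# Current membership, sorted so the comparison is stable.
$Current = Get-ADGroupMember -Identity $Group |
    Select-Object -ExpandProperty SamAccountName |
    Sort-Object

if (-not (Test-Path $SnapshotFile)) {
    # No previous snapshot to compare against. Treat that as suspicious too.
    Send-MailMessage @MailParams -Subject "GroupWatch: no snapshot found for $Group" -Body "The snapshot file $SnapshotFile is missing. Check the monitoring host."
}
else {
    $Previous = Import-Csv $SnapshotFile | Select-Object -ExpandProperty SamAccountName
    $Diff = Compare-Object -ReferenceObject $Previous -DifferenceObject $Current
    if ($Diff) {
        Send-MailMessage @MailParams -Subject "GroupWatch: membership of $Group changed" -Body ($Diff | Out-String)
    }
}

# Save the current membership as the snapshot for the next run.
$Current | ForEach-Object { [pscustomobject]@{ SamAccountName = $_ } } | Export-Csv -Path $SnapshotFile -NoTypeInformation

Schedule something like that every few minutes and the “someone just changed Domain Admins” email lands while the change is still fresh.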

Audit all Windows firewalls on your domain and display the results in a UI using PowerShell Universal

Do you want the code for this? No problem. Just skip down to the heading that says “The Code!”.
Yeah, yeah, yeah. I know that I have given out plenty about Ironman Software and their PowerShell Universal product on a few different sites. But unfortunately for me, there’s just nothing else on the market that can wrap a nice, easy(ish) UI around PowerShell scripts. So stick with me while I explain what I’m doing here.
My need:
Hey, first, look up something called the STAR principle. It’s an Amazon interview technique and I’m going to use it here to explain the last few days quickly and easily.
STAR stands for:

  • Situation
  • Target
  • Action
  • Result

So the Situation is:

I need to provide a comprehensive, up-to-date, reproducible and accurate report of the status of Windows firewalls on servers.

The Target is:

Re-use a script that I wrote two years ago, wrap it in a UI and give that to the director so he can run this report, or ask someone else to run it, without coming back to me more than once.

Action:

Ah, here’s where it gets fun.
Firstly, here is how it hangs together:

  1. I have all the processing in a PowerShell module. I’m comfortable working in the command line, so having it in a module full of functions that I have written to get me through the day by removing repetitive tasks suits me well. But it doesn’t suit anyone else. Having PowerShell vomit out text to the director wouldn’t put me on his Christmas list. In fact, I’m already not on his Christmas list. Maybe I should go back to plain text? Pondering for a different day. Sorry, I went off on one there. Anyway, what I’m saying is that I want to wrap that in a UI, but I don’t want to rewrite code. Re-use and recycle.
  2. I went in to look around PowerShell Universal for the first time in ages. I was getting weird errors when using PowerShell 5 where it wasn’t recognising stored secrets. It turns out that the maximum time you can store a secret for is one year, so I suppose that’s just something I missed in some bit of documentation somewhere.
  3. Then, sometime over the past year, I tightened security on all of the service accounts, so bye-bye to storing Kerberos tickets in an active user session. This made me rethink how I was handling permissions for this script.
  4. Sometime in the past two years since I wrote this really great function, I got too clever for my own good. In other words, I over-complicated it. Initially, I was just passing in a string as a parameter, but at some point I must have decided that I wanted to pass in custom objects containing servers, and I also started using the pipeline. What am I talking about? Okay. I’ll explain briefly.
    This is how you would pass something to a script using a parameter:
    First, let’s say we have an array called $MyWonderfulArray[] with two fields in it: ServerName and TrafficDirection. If the function doesn’t support taking the fields from the pipeline, we need to explicitly loop through every item in this array and pass it the values for ServerName and TrafficDirection. That sounds kind of slow, doesn’t it? Yeah. It is! Here’s an example:
    $ServerVariable = $MyWonderfulArray[0].ServerName
    $InboundOrOutbound = $MyWonderfulArray[0].TrafficDirection
    MyCoolFunction -ServerName $ServerVariable -TrafficDirection $InboundOrOutbound
    Now, firstly, you might ask what the idea of the [0] is. That’s just getting the first item in that array. I could loop over the array, but this wasn’t meant to be a PowerShell tutorial.
    But now let’s take a quick look at using the pipeline. Let’s say your function expects two parameters: ServerName and TrafficDirection. Because these are already fields in my array, I don’t need to explicitly pass them as parameters to the function, assuming of course that I have configured the parameter block at the top of the function to support grabbing these fields from the pipeline. So now, without needing to loop or even explicitly pass the fields, I do this:
    $MyWonderfulArray | MyCoolFunction
    See? The pipeline is cool. (There’s a short sketch of a pipeline-aware function just after this list.)
    But because I had changed the function, I was encountering infinite loops and some occasional errors. That wasn’t too difficult to fix. I got it sorted within a few minutes.
  5. I found that tens of thousands of lines were added to the report for some particular servers. It turns out that whenever a user logs into an RDS session host server running 2019, it creates a whole lot of firewall rules for that session. Okay. Anyway, I fixed that. It required painfully removing tens of thousands of rules, then applying a registry fix to each session host server so that the problem doesn’t repeat in the future. Still, this took a good three hours tonight because, as I was deleting so many rules each time, the MMC snap-in kept freezing. Why didn’t I use PowerShell? Well, because there are about 40 other rules in there specific to the applications running on those session host servers, and the last thing I want is someone from that faculty calling me on Monday morning, with a room full of students anxiously waiting to start their labs, while I try to figure out which rule in the tens of thousands that I removed caused this particularly horrible delay to their teaching and learning. So that really wasn’t fun.
  6. Next, I ran the script again but found that, for some reason, one of the filters for traffic direction wasn’t working. I’m running this code using Invoke-Command, and the underlying firewall command isn’t a native PowerShell cmdlet, so sometimes these things can behave in unexpected ways. Again, that wasn’t really difficult to sort: a Where-Object to only return the output that I wanted got around the problem. But you must understand, oh most patient reader, that each time I ran this script, it could take up to an hour or even two. It goes across quite a lot of servers and really dives deep into the firewall rules, what they allow and what they reject. So each thing I changed, even if it was minor, took a long time to process.
  7. I had messed around with creating a UI for this a few years ago but I tidied it up tonight. I had a stupid bug in it. It was using the entire count of servers when reporting on the number of bad / dangerous rules. Now I have a separate variable with the count. Why I didn’t just do that a few years ago, I don’t know.
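Before the result, a quick aside to make point 4 concrete. Here is a minimal sketch of a function whose parameters bind from the pipeline by property name. MyCoolFunction and the two test objects are the made-up names from the example above, not the real audit function.

# Minimal sketch of pipeline binding by property name.
# MyCoolFunction, ServerName and TrafficDirection are hypothetical names;
# the real audit function obviously does more than write a string.
function MyCoolFunction {
    [CmdletBinding()]
    param (
        # Binds from the ServerName property of each object coming down the pipeline.
        [Parameter(Mandatory, ValueFromPipelineByPropertyName)]
        [string]$ServerName,

        # Binds from the TrafficDirection property in the same way.
        [Parameter(Mandatory, ValueFromPipelineByPropertyName)]
        [string]$TrafficDirection
    )
    process {
        # The process block runs once for every object received from the pipeline.
        "Auditing $TrafficDirection rules on $ServerName"
    }
}

# Two test objects pushed straight through the function. No loop required.
$MyWonderfulArray = @(
    [pscustomobject]@{ ServerName = 'server01'; TrafficDirection = 'Inbound' }
    [pscustomobject]@{ ServerName = 'server02'; TrafficDirection = 'Outbound' }
)
$MyWonderfulArray | MyCoolFunction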

Result:

It all works. It took a lot longer than I would have liked but I’m really happy with the result. Something that anyone with the right level of permissions can independently use without my input.

Absolutely nothing in my life has gone to plan this week. Well, all I have had time for is technology problems, so I suppose my life has just been technology. Still, though, I need to get to another job tomorrow where I installed Cuda but the GPU isn’t found after a reboot. I spent three hours on that on Wednesday evening, but now the person just wants me to install Docker and use Cuda and Kaldi through containers instead. That’s going to be another truckload of fun, but it’s going to have to wait until tomorrow because I’m tired.
Hey, for the record, I’m not really a fan of Nvidia at the moment either. Their documentation is out of date, their drivers are out of date and they mix and match terms. For example, at the top of the driver support page, they talk about the Tesla T4, but then further down the page they say the driver only supports series 9 and above. How the hell am I meant to know what series the Tesla T4 is? Anyway, sorry. I’m rambling again.
Because I’m feeling very generous, here’s some code that will just change your life if you are administering a lot of Windows servers and you need to audit all the firewall configs.

The Code!
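The module that does the heavy lifting isn’t reproduced in full here. As a rough, hedged sketch of the overall approach (pull the firewall state from each server over PowerShell remoting, then flag the rules you care about), it looks something like the following. The server list file, the use of Get-NetFirewallRule and the properties selected are my assumptions, not the original code.

# Rough sketch only: collect enabled firewall rules from a list of servers
# over PowerShell remoting. The paths and selected properties are placeholders.
$Servers = Get-Content -Path 'C:\Audit\servers.txt'

$AllRules = Invoke-Command -ComputerName $Servers -ScriptBlock {
    Get-NetFirewallRule -Enabled True |
        Select-Object DisplayName, Direction, Action, Profile
}

# Flag anything that allows inbound traffic, then summarise per server.
$Flagged = $AllRules | Where-Object { $_.Direction -eq 'Inbound' -and $_.Action -eq 'Allow' }
$Flagged | Group-Object -Property PSComputerName |
    Select-Object Name, Count |
    Export-Csv -Path 'C:\Audit\inbound-allow-rules.csv' -NoTypeInformation

Wrapping something like that in PowerShell Universal is then mostly a matter of presenting those objects back to whoever clicks the button.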

Complex technical fuck up.

Ordinarily, I would say that adding bad language to a post title is something that should be avoided. But this was indeed a complex technical fuck up. Hey, on a slightly different topic, ever feel like walking into a meeting after something goes wrong and just bluntly saying something like: “That stupid fucken problem was a pain in my ass for days because the stars aligned to screw me. It’s like some divine asshole thought to itself: “Hey, Darragh over there is not quite busy enough. Let’s throw way more shit at him to see what happens”. Well, better luck next time, divine-o-ass. I’m not giving up that easily”. Damn. There were nested quotes in that rant. That’s a new best in shit writing, isn’t it?

Okay. Okay. I’m calm now. I just needed to get that out of my system.

I hear you ask: who pissed in Darragh’s cornflakes this morning? Well, it was Docker. For the past two days.

Here is the rough outline of the crap I have had to deal with outside of work over the past few days.

Firstly, this all relates to HomeAssistant and docker.

  • It all started at the beginning of the week. I was at a wedding for two days, and during that time I noticed that the cert for HomeAssistant had expired. That usually means that it has lost its connection to the cloud service. When I got back on Wednesday, I found that the subscription was valid but it had indeed lost the connection. I checked for logs that would indicate the source of the problem, but no luck. Not a single log was written to suggest where the problem was. I was running 2022.1 and 2022.3 was out, so I suspected the container either needed a restart or it needed the latest version installed. So that’s what I did. First, I restarted the container. That didn’t work. Second, I updated. That didn’t work. Finally, I rebooted the host server. This is where the world went into free fall and everything broke.
  • The server came back up and I was met with a default “onboarding” page for HomeAssistant. The air turned a shade of blue while I cursed, thinking that it had reset the HomeAssistant install or something crazy like that. But no. I was able to find my files in the container. Here’s where everything went stupidly bad.
  • I have a few other things running on this Docker host. Yes, I know that really isn’t supported by HomeAssistant, but I’m confident enough with Linux to make this work. I say this, but if you keep reading, you will see that although I’m confident with Linux, maybe I have no right to be. Did I mess up? I’ll let you decide.
  • I ran docker ps to show the list of running containers. I could see hassio (short for home-assistant.io) had four running containers: hassio_audio, hassio_multicast, hassio_samba and hassio_supervisor. It looked like these containers were pointing back to where I had HomeAssistant stored, but it wasn’t picking up the right config. I thought to myself, where the hell are my other containers for Pihole, streaming and Unifi? But anyway, I didn’t think much about it. This is where I completely messed up. I should have stopped, thought, and realized that if those containers were running, they should be shown by docker ps.
  • I relinquished thoughts of this being a quick fix, though, and set up HomeAssistant as a new installation with the intention of restoring from a backup. Do you take backups? I do. Every night. I was thankful for this. Anyway, I keep rambling. I log into the new installation only to find that HomeAssistant Supervisor isn’t available. This is a Core-only install of HomeAssistant. Alarm bells begin ringing. Why the hell is this only the Core installation, and where has my installation gone?
  • I try to completely uninstall this. Knowing that I had a full backup, I was willing to get a bit aggressive at this point. The problem is I get an access denied error when I try to remove any of the containers, with docker rm hassio_samba for example. I find that this is because of the hassio-apparmor service. But stopping it with systemctl stop hassio-apparmor.service doesn’t work. I found that it needs to be stopped with aa-teardown. Only then could I remove the containers.
  • So. I remove the containers and I try to install with this command:
    docker run -d --name=homeassistant --restart=always --network=host -v /etc/homeassistant:/config homeassistant/home-assistant:stable
    That didn’t work. I got errors like this:
    Failed to start hassio-apparmor.service: Unit hassio-apparmor.service has a bad unit file setting.
    I’m still not sure what caused that. But I moved on. I found that for some reason, the hassio-apparmor and hassio-supervisor unit files weren’t removed from /etc/systemd/system/, so I deleted these and the problem went away.
  • I was encountering lots of weird errors, so I took a step back and started looking at everything on the server. During the small hours of this morning, I finally found something that triggered an oh-crap moment. I found a tutorial that mentioned installing HomeAssistant from the snap store in Ubuntu. I know I didn’t do this. But while I was looking for HomeAssistant files, during one of the many times I manually uninstalled this, I remembered seeing files in /snap. So I had a moment of realization. Snap must be installed! Now, I have checked my .bash_history and that of the root account. Not once did I issue a command with the word snap in it. So I have no idea why this is installed. I ran one command and this answered all my questions.
    whereis docker
    Sure enough, there’s a second binary for Docker in /snap/docker. Running
    snap list
    shows that the docker snap is installed.
  • I remove this:
    snap remove docker
    Then I reboot
  • Victory! Now I run docker ps and I see my missing Docker containers, such as the ones for Ubiquiti, Pihole and so on. I also see the Docker containers for the proper installation of HomeAssistant. But here’s where I shot myself in the foot. I had completely mangled those containers while rampaging through the file system looking for and purging anything that could be causing conflicts during those times that I was encountering errors. The problem now is that the original and correctly set up Docker containers are completely messed up. I try reinstalling using the proper version of Docker, but the images and the containers are in a terrible state. I’m not able to reinstall because there are images that still exist in a partial or damaged state. (Yes. I really screwed this up, didn’t I?) However, I can’t give up. I manage to delete the images by finding the IDs of each image and passing them to the docker rmi command. Sometimes these had dependencies that couldn’t be removed because they were too mangled, so I used docker rmi -f (imageID).
  • Afterwards, I used updatedb and locate to find all existing homeassistant and hassio files related to a container. I manually removed these and started the installation again.
  • For the record, I find that the most reliable way to install HomeAssistant with the HomeAssistant Supervisor in Docker is to use these Deb installers:

Don’t do what I did. After 3am this morning, I was tired and I installed the container first, then the OS agent. HomeAssistant complained that the supervisor wasn’t running in privileged mode. But a quick restart of the container fixed this.

What a complete pain in the ass. This blog post is long. But it pales in comparison to the hours and hours I spent on this, into the early morning, over the past few days.

I will say one more thing. I read a post a few months ago where someone said that they started off with a Combee II Zigbee USB device but then upgraded to something a little more serious. In my firm opinion, the Combee II stick is simply amazing and I doubt there is anything else on the market like it. I restored my HomeAssistant config and because the Combee II keeps an independent record of all the Zigbee devices that are connected to it, once the HomeAssistant config was reapplied, the Combee stick just worked. No fuss, no complaints. Having this independent bridge outside the HomeAssistant ecosystem has saved me from a lot of work twice now. Now, of course, I regularly take backups of that config as well. Just in case.

Building a high performance compute server on Azure and installing KenLM and Cuda/Kaldi with NVIDIA Tesla drivers.

About a week ago, I was asked to build a new server. This is going to be used for research purposes so the spec is quite high. 16 dedicated CPU cores, 110GB RAM and an NVIDIA Tesla T4 GPU. It’s running on Azure and the applications needed on it are a little different. So this was a lot of fun.

First, the VM type: it’s a Standard_NC16as_T4_v3 server. You can’t just go and buy one of these. You must create a support request with Microsoft so that they can release the number of cores required for this specific type of server. This is a painful process! There were 200 processor cores available in that subscription, but obviously not of the right type. However, there is a very useful category when creating a support request in the Azure Portal for requesting additional cores. What isn’t so useful is that the portal didn’t understand that although I had enough cores overall, I needed cores of this specific type for the research server. I spoke to an HPC (High Performance Computing) specialist about something unrelated during the week and he knew what I was talking about right away. But it took over a week for Azure Support to understand what I was looking for and then make the required changes.

Moving on: once Microsoft did what they needed, setting up the new server wasn’t difficult. It was created within about 10 minutes after I finished with the VM creation wizard.
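If you would rather script the build than click through the wizard, a minimal sketch with the Az PowerShell module looks something like this. The resource group, VM name, location and image are placeholders of mine, and it assumes the vCPU quota for the NCasT4_v3 family has already been approved.

# Minimal sketch: create the GPU VM with the Az module once the quota
# request has been granted. Names, location and image are placeholders.
Connect-AzAccount

$Cred = Get-Credential    # local admin account for the new VM
New-AzVM -ResourceGroupName 'rg-research' `
    -Name 'hpc-kaldi-01' `
    -Location 'westeurope' `
    -Size 'Standard_NC16as_T4_v3' `
    -Image 'Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest' `
    -Credential $Cred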

The main requirements of this server are Cuda and KenLM and this is really what this post is about. I don’t spend every day in a Linux environment. So when I need to install something like this that I wouldn’t use often, I rely heavily on documentation. It’s not that I couldn’t go hunt down all the installation sources and dependencies. But that would be a waste of time. And time is not something I really like to waste.

I took notes during this process. These include the commands that I used to install everything and the various sources I read through to learn a bit more about what I was installing and how it could and should be done.

In case anyone copies and pastes the following lines, I am going to precede my comments with #.

# First you need to determine the GPU that you have and the suggested driver. Fortunately, this is way easier than it used to be.
apt install ubuntu-drivers-common
ubuntu-drivers devices

# Do not use this next command. It installs way too much and will result in massive dependency issues when you go to install Cuda.
# ubuntu-drivers autoinstall

# The following command will install the NVIDIA GPU driver. It will also install the unmet dependencies.
apt install nvidia-driver-470 libnvidia-gl-470 libnvidia-compute-470 libnvidia-decode-470 libnvidia-encode-470 libnvidia-ifr1-470 libnvidia-fbc1-470

# After installing the GPU driver, you must reboot.
reboot now

# This will install all of the Cuda dependencies.
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub
add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
apt-get update
apt-get -y install cuda

# Add the Cuda binaries to your path:
echo 'export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}' >> ~/.bashrc
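# Reload your shell (or run: source ~/.bashrc) so the updated PATH takes effect.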

# You can test that Cuda is installed and that the version installed is as expected as follows:
nvcc --version

# IF at some point, you need to start again, this one-liner will remove all the NVIDIA and Cuda packages that you might have installed using aptitude / apt-get.
# apt clean; apt update; apt purge cuda; apt purge nvidia-*; apt autoremove; apt install cuda

# The following lines will install KenLM on Ubuntu 20.04.
apt-get update
apt-get install build-essential libboost-all-dev cmake zlib1g-dev libbz2-dev liblzma-dev -y
git clone https://github.com/kpu/kenlm
cd kenlm/
mkdir build
cd build
cmake ..
make -j 4
make install

HomeServer updates 2022!

Oh this post could become very large. So I’m going to try to keep it brief. Perhaps I’ll pad it out with a few more posts over the next few days or weeks. But here goes.
My home server setup for 2022.
First of all, what’s all this for? Why do I need a home server? What is it used for?
My requirements for a home server have changed a lot over the past 20 years. Home servers for me started as email and web servers, then progressed into DHCP and DNS servers as well as firewalls, with big, noisy and powerful beasts running under my stairs, then running in self-contained cabinets that were custom built for the task.
However, about five years ago, I decided I would move away from hosting my own DHCP and DNS servers and instead go back to off-the-shelf solutions, such as those provided by my ISP router and the Ubiquiti controller for my wireless network. That has been fine. In fact, it has worked very well. However, I still needed a few small servers from time to time for testing technologies or ideas that I had. The Raspberry Pi 4 has been my tiny compute platform of choice. But this started to get a bit messy. For example: I got more into home automation, so a Pi was dedicated to that. Previously, a Pi was running my Ubiquiti Unifi controller and the code for some of my light automation. I was also frustrated a lot by the lack of decent customization in relation to DNS on the Fritzbox router. So here’s what I’m running right now.

  • PiHole for DNS. This is primarily working as an ad blocker for all phones, tablets and computers on the network.
  • HomeAssistant. This handles all my home automation. I no longer even have a Philips or Aqara gateway / hub. I’m instead using a Combee II USB stick as the Zigbee gateway. This will require some more explanation.
  • The Unifi controller software for my Ubiquiti wireless access points.
  • RClone. This is handling the processing and access to my cloud files.
  • Navidrome. This is my new audio server software. I’ll need to explain why that is needed in another post.
  • Bonob. This is a bridge between Navidrome and my Sonos, used to let me play the media directly on the Sonos. Okay. I’m going to give you a quick overview of what I’m doing here because, in my opinion, it’s kind of cool.

I’m running a large NAS in the house. But it’s getting old. It’s probably 8 years old by now. But it’s reasonably large, running at 8TB of usable storage space in RAID 5. Replacing that NAS isn’t something I’m very interested in doing, for two reasons. Firstly, the cost would be huge. But second, it’s a big, noisy thing. I could go for a quieter model, but to get that kind of storage from solid state disks would cost a lot of money. So again, I suppose it comes down to cost. I’m going to need a NAS. That is unavoidable. But thanks to an idea from a friend, I will need a lot less space.

So, how am I going to use less space while not removing a lot of files? Simple. Cloud storage. But that leads to another problem. How do you integrate cloud storage into your everyday workflows and systems? For example, if you store your music on Google apps or OneDrive, how does Sonos access it? It’s simple. It can’t. Not directly anyway. So here’s where, for me, it gets interesting.

Firstly, understand that I wouldn’t just dump all the music up there, because I have privacy concerns. I have acquired this music on CD over a very long time. It is mine, but I would have a concern that if I start uploading 2TB of music, Microsoft or Google are going to start getting suspicious. Actually, this is a founded concern. Paul Thurrott had this problem with OneDrive about four years ago. So I encrypt the files before sending them to the cloud service of choice. This really complicates things, because now there’s really no hope of something like my Sonos reading the files: they are in the cloud and they are also encrypted.

So, here’s how I get around it:

  1. I use RClone to encrypt and copy all of the files from the old NAS up to the cloud storage.
  2. Now I mount the encrypted volume from RClone.
  3. I have set Navidrome up to look at this volume for its music.
  4. Bonob then connects to Navidrome.
  5. Sonos is configured to use Bonob as a music service. Bonob is connected to Navidrome, so the flow is: Sonos asks for music from Bonob; Bonob gets that music from Navidrome; Navidrome gets the file from the encrypted mountpoint provided by RClone; and that encrypted mountpoint in turn goes to the cloud storage. All this happens within a maximum of four seconds. Although that sounds like a lot of time, it really isn’t, and the four seconds is only really an issue when starting playback for the first time. When the Sonos is moving on to the next track, it has plenty of time to pre-cache it before playing.

Have you read this far? Good. You’re officially a geek / nerd. Well done. I’m genuinely proud of you. There’s one more thing to just edge the geek factor up another notch.

Twenty years ago, this would have been running on several physical servers. Ten years ago, it would have been running on one big, beefy computer with several virtual machines dedicated to each function. In this generation of containers, it is all running on a mini PC with an i7 processor, 16GB RAM and a 512GB NVMe drive. Before the enterprise compute gurus jump out of their skins to tell me that there’s no redundancy here: you are absolutely right. But settle yourselves down for a second. I’m going to talk about redundancy and backups in a second.

Everything is running in Docker containers. So once I have backups, do I really care if the computer dies? Well, yeah. I would care, because this little computer is really nice and it runs way faster than I had expected. But realistically, if it dies, all I do is build a new host operating system, bring my Docker containers back over to it, bring the containers up, configure networking, and everything is back again. It’s not an enterprise environment with 100% uptime. The main thing that matters is that it’s cheap to run, quiet, runs at a cool temperature and, if something really goes wrong, I can recover easily. I have the encryption password and salt saved somewhere safe, completely disconnected from the server, so as long as I can decrypt the encrypted backups, all is good. … I hope.

RDS / Terminal services: run multiple xming sessions on one server

Introduction

Xming is an application that is used in conjunction with PuTTY on Windows to remotely access X11 graphical window interfaces on Linux. It’s an old application, but a great one. Typically, it would be run as a single instance on one computer. However, my aim is to run it on an RDS session host server running Windows Server 2019, where several dozen students may need access to it concurrently. This post describes how I got that working.

Problem

Xming is fine when running as a single instance / a single invocation of the process on a single computer. But it wasn’t really designed to be run on a server where several dozen users might access it simultaneously. There is a way of running several instances of it by specifying a server number. But the problem with RDS is that it would introduce a security risk if I were to allow end users to specify their own command parameters when running an executable.

Investigation

A quick Google search took me to this blog post from 2008. From there I found that there were command switches that enabled the process to run several times on the one server. However there are problems with implementing this in a user-friendly way for use on remote desktop services.

  1. Users shouldn’t have access to independently modify the parameters / arguments passed to any executable.
  2. A unique server number must be used for each instance, or the person already using that number would be kicked out.
  3. Incrementing the number based on the number of currently running instances of the Xming process wouldn’t work. Take the example of 12 concurrent users of the process. User 5 logs off. There are now 11 users. A new user logs on, so the count returns to 12. The number 12 is already in use, but the new user doesn’t know that, so the person on number 12 is kicked off because the new person gets 12 instead. Less than ideal!

Solution

The approach I took was to write a script that would associate every person who logs in with a unique number. Then each time they log in they will be allocated that same number.

# CSV file that maps each username to its Xming display number.
$File = "c:\temp\tempuserlist.txt"
$Username = whoami

# Look for an existing entry for this user.
$CSVFile = Import-Csv $File | Where-Object { $_.name -eq $Username }

If ($null -eq $CSVFile) {
    # No entry yet: take the highest number issued so far and add one.
    $CSVFile = Import-Csv $File | Select-Object -Last 1
    [int]$Count = $CSVFile[0].count
    $Count++
    $details = [pscustomobject]@{
        name  = $Username
        count = $Count
    }
    # Record the allocation so this user gets the same number on every logon.
    $details | Export-Csv -Path $File -NoTypeInformation -Append
}
else {
    # Returning user: reuse the number allocated previously.
    [int]$Count = $CSVFile[0].count
}

# Launch Xming on this user's display number.
Start-Process -FilePath "C:\Program Files (x86)\Xming\Xming.exe" -ArgumentList ":$Count -clipboard -multiwindow"

You now need to advertise PowerShell as an application, change the alias of PowerShell then also change the icon so that users think they are clicking on Xming. That’s easy enough as well. I assume that if you are reading this, you are already aware of how to set up an application in RDS. So I’ll focus on the step required to change the Icon of the PowerShell application so that it matches the icon of the Xming application.

  1. Open PowerShell as an administrator on the session host server where Xming is installed.
  2. Use the following command. Replace the connection broker and the collection name with the values from your environment.
    Get-RDRemoteApp -ConnectionBroker rds-con-01.ad.dcu.ie -CollectionName GenericStudent -Alias "powershell" | Set-RDRemoteApp -ConnectionBroker rds-con-01.ad.dcu.ie -CollectionName GenericStudent -IconPath "C:\Program Files (x86)\Xming\Xming.exe" -IconIndex 0
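For the earlier part, advertising PowerShell as the application with a fixed command line so users can’t pass their own arguments, here is a hedged sketch. The script path is a placeholder for wherever you save the launcher script above, and the broker and collection names are the ones from the icon example.

# Hedged sketch: publish PowerShell as a RemoteApp that always runs the
# launcher script, so end users can't supply their own arguments.
# C:\Scripts\Start-Xming.ps1 is a placeholder path.
New-RDRemoteApp -ConnectionBroker rds-con-01.ad.dcu.ie `
    -CollectionName GenericStudent `
    -Alias "powershell" `
    -DisplayName "Xming" `
    -FilePath "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" `
    -CommandLineSetting Require `
    -RequiredCommandLine "-ExecutionPolicy Bypass -File C:\Scripts\Start-Xming.ps1"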

The night before Christmas – 2021

Méabh and Rían continue to be the stars of the show. But I have tried to feature Emma and her mother more in this one because, let’s face it, without their input, none of this would happen.

If you know us, or if you have followed these podcasts for the past 7 years, you will probably find that quite a bit changes from year to year. Sometimes not all for the good. This year, you’ll find that some of the dynamics change between Méabh and Rían. It’s nothing negative. Just an interesting alteration in their interactions.

Of course, in this podcast there’s always some fun. Some funny conversations and at some stage, someone will talk about poo.

I would sincerely like to take this time to wish you and your family and friends the very happiest of Christmases. May 2022 bring you greater certainty, fortune, stability and safety.

Some music for a change.

I have a music related site over at www.darraghpipes.ie so I rarely post my music related stuff here any more. But I had two professional videos recorded before the summer this year and I’m only figuring out that perhaps I should start getting them out to the public to get a bit of a return on my investment. 🙂

There’s another here.

Create a persistent alias in PowerShell

One of my pet hates about PowerShell is that the /b switch isn’t available for Get-ChildItem, AKA dir or ls. I just want to read through a list of files in the directory. I don’t want all the other information. At least in the old dir command, the file and directory names were at the start of the line.

So to get around this, I simply select the Name property from Get-ChildItem. But I don’t want to type that every single time. Enter profile.ps1 and Set-Alias.

I have a simple command: List. This shows me exactly what I want to read and no more. You might have something similar for your situation.

You will notice that the alias list actually calls a function. This is because I’m piping the Get-ChildItem command to the Select-Object command, which isn’t supported by Set-Alias. So adding this to a function gets around that minor annoyance.

Oh, you will also notice that I’m changing the colour of the table header row to bright white using a new feature called $PSStyle. At some point, I’m going to script my screen reader to recognise these different colours and tell me what is a directory, what’s an executable, what’s a column header and so on. This is the beginning of that process. Colouring the output of Get-ChildItem is an experimental feature in PowerShell 7.2 through $PSStyle.FileInfo. I’m trying this out at the moment.
# Make table header rows bright white so they are easier to pick out later.
$PSStyle.Formatting.TableHeader = $PSStyle.Foreground.BrightWhite

# Set-Alias can't point at a pipeline, so the pipeline lives in a function.
Function GetShortDirectory {
    Get-ChildItem | Select-Object Name
}

Set-Alias list GetShortDirectory

Check for high memory usage or hung status in Dell Boomi

I needed to add a check today to Dell Boomi, or “Boomi” as it’s now known, because it failed twice in the past few months. The problem was that it failed but didn’t actually stop the service. Because Boomi runs within a Java virtual machine, it doesn’t necessarily expose its problems to the host operating system, so monitoring systems such as Nagios don’t always pick up the accurate status.

The best way of determining whether Boomi is misbehaving is to check the Boomi container logs. If there are memory errors, report an exit code of 2 to tell Nagios that there is a critical state; likewise, if no logs have been written in the past two minutes, report the same critical status, because the Boomi Atom has hung.

Create the scripts on the Nagios host and the Boomi Atom

Add these to the libexec directory, probably /usr/local/nagios/libexec/.


#!/bin/bash
# check_boomi_memory.sh
# Look for 'Low memory' entries in the Boomi container log from the last 10 minutes.
BoomiMemoryErrors="$(sed -n "/^$(date --date='10 minutes ago' '+%d %b %Y %H:%M:%S')/,\$p" /opt/Boomi_AtomSphere/Atom//logs/$(date +%Y_%m_%d).container.log | grep 'Low memory')"
if [ -z "$BoomiMemoryErrors" ]
then
    echo "Boomi has no memory errors."
    exit 0
else
    echo "Boomi has encountered memory errors"
    exit 2
fi


#!/bin/bash
# check_boomi_hung.sh
# If nothing has been written to the container log in the last 2 minutes, assume the Atom has hung.
AnyBoomiLogsWritten="$(sed -n "/^$(date --date='2 minutes ago' '+%d %b %Y %H:%M:%S')/,\$p" /opt/Boomi_AtomSphere/Atom//logs/$(date +%Y_%m_%d).container.log)"
if [ -z "$AnyBoomiLogsWritten" ]
then
    echo "Boomi has stopped"
    exit 2
else
    echo "Boomi is running correctly"
    exit 0
fi

On the Nagios server and the Boomi atom

First, find the commands.cfg file. Either use the locate command or use find -name. You need to add this to the bottom.


define command{
    command_name    check_boomi_memory
    command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_boomi_memory
}


define command{
    command_name    check_boomi_hung
    command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c check_boomi_hung
}

Add these checks to the Boomi Atom host file within your Nagios servers directory

I’m going to assume you know where that is. It’s usually in either /usr/local/nagios/servers or /etc/nagios/servers/


define service{
    use                   generic-service
    host_name             #add your hostname
    service_description   check_boomi_hung
    contacts              #Add your contacts here.
    check_command         check_boomi_hung
}


define service{
    use                   generic-service
    host_name             #add your hostname
    service_description   check_boomi_memory
    contacts              #Add your contacts here.
    check_command         check_boomi_memory
}

Update the NRPE config with these new commands

This file is likely in /usr/local/nagios/etc/nrpe.cfg


command[check_boomi_hung]=/usr/local/nagios/libexec/check_boomi_hung.sh
command[check_boomi_memory]=/usr/local/nagios/libexec/check_boomi_memory.sh

Verify that your new checks work

You will do this from the Nagios server. Make sure you reload the config first, and put the Boomi Atom’s address after the -H switch in the commands below.

Quick tip:
Use the following command to check the validity of your config.

/usr/sbin/nagios -v /etc/nagios/nagios.cfg

Now to reload nagios, use the usual systemctl reload nagios.service


/usr/local/nagios/libexec/check_nrpe -H -c check_boomi_hung
/usr/local/nagios/libexec/check_nrpe -H -c check_boomi_memory