The case of the missing docker volume

Contain yourself…

Containers, or Docker containers in my case, can sure be fun. There are so many moments of instant gratification, which are especially powerful when linking multiple orchestrated images and firing them up with docker-compose up. Suddenly, a stack of services that might have taken days to set up across multiple virtual machines or physical boxes is conjured up and running on a single machine. Then, you just get to play. But what if that stack is so fun, and maybe so useful, that you want to keep it going? Well then, of course, we need to start thinking about how to give that ephemeral stack some persistence.

Detecting changes over time

In a recent case, I was playing with a Docker Compose script that comes from the folks at Wazuh, which packages up OSSEC HIDS along with an ELK stack. I had experimented with this a few times in the past, but this time I was considering leaving a Wazuh stack running for a while. (Having just set up a new Windows laptop, and thinking about documenting its initial state, it seemed like a good job for OSSEC. Maybe overkill, but frankly, it's just how I do.)

So, I thought it was a good time to try out Wazuh again, and to put Docker to the test while I was at it.

A Window into Docker

While my home lab is a mix of operating systems, the machine I have been using as a virtual machine “server” is a Windows 10 Pro XPS with 32 GB of RAM and an i7. Though I think my first experience with Docker was actually on Windows 8.1 and Windows Server 2012, I have really preferred using Docker on Linux for the most part. There’s just something about that GUI entry point I’ve never felt comfortable with, for some reason. Also, the need to share the whole C drive in order to mount drives just feels icky. Am I crazy? Sorry, I digress. The point is, I was reluctant to make this my main VM server and to use Docker on it as well, but Hyper-V has been a solid choice for me in the past, I know it pretty well, and there are so many resources for troubleshooting, so I went with it.

Why is this important? Well, it all comes back to that pesky mounting-of-data thing I mentioned. As I was testing the setup, I thought I’d make a few volumes to persist the data, but I decided to just let Docker handle it for a bit, with one exception. I chose to edit the Docker Compose YAML file and add my own named volumes: one for the OSSEC data (/var/ossec/data) and one for the configs (/var/ossec/etc).
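
As a rough sketch, the edit looked something like the snippet below. The service and volume names here are illustrative, not the exact contents of the Wazuh compose file; the key part is declaring named volumes at the top level and mounting them at the two OSSEC paths.

    version: '3'
    services:
      ossec:
        image: wazuh/wazuh               # illustrative image name
        volumes:
          - ossec-data:/var/ossec/data   # named volume for OSSEC data
          - ossec-etc:/var/ossec/etc     # named volume for the configs

    volumes:
      ossec-data:
      ossec-etc: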

Everything seemed to work just fine.

It wasn’t fine

Then one day, it seemed the named volumes hadn’t persisted. I had been making some changes to the host system and had to shut down the Docker Desktop service. When I brought the machines back up, everything worked, but my data was not there. It was like a fresh installation. What happened to my volume? The problem, I’ve come to believe, was really one of timing. I think the initial spin-up I did never actually used my named volumes. So, what did that mean? Well, it meant that the anonymous Docker volumes, with their massive random names, probably had my data.
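
If you find yourself in a similar spot, listing the volumes is a quick way to spot the anonymous ones; they show up with long random hex strings for names instead of friendly ones. A rough sketch (the exact output will vary):

    # list every volume Docker knows about; anonymous volumes have
    # long random hex names instead of the names you chose
    docker volume ls

    # narrow it down to volumes not currently attached to any container
    docker volume ls --filter dangling=true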

First, I had to see if I could even find my volumes. Inspecting a volume (docker volume inspect <volume name>) showed the path as /var/lib..., which, on a Windows 10 machine, didn’t make much sense. So, I figured (and verified online) that the .vhdx drive that Docker uses is where I needed to look.
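
For what it’s worth, the inspect command can pull out just that mount path with a format string (the volume name here is illustrative):

    # print only the Mountpoint field for a given volume
    docker volume inspect ossec-data --format '{{ .Mountpoint }}'
    # on Docker Desktop for Windows, that /var/lib/docker/volumes/... path
    # lives inside the VM's .vhdx disk image, not directly in the Windows filesystem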

Next time: how did I go looking for that data? Did I find it? And once I did, how would I get it back into the machine it belongs in? To be continued…