
Migrating from Docker to Podman and IPv6-Only Host VMs

  • hosting
  • servers
  • infrastructure
  • docker
  • podman
  • alpine linux

Docker to Podman

Why? Quite frankly, I'm tired of a lot of the limitations in docker, specifically its network stack. So I wanted to try podman, and boy was it a journey.

Too many times when searching for help with some docker problem, the answer was "I don't know, I switched to podman."

One time I was trying to get FreeIPA running, to evaluate it and see what it's like. For the life of me I could not get it working on docker. I tried with podman and it worked.

There were also silly IPv6 issues due to what's best described as docker's partial IPv6 support.

On the surface the differences in usage are minimal, so almost every docker-compose.yml will still work out of the box. I'm still running docker:24.0 images from dockerhub.io against podman.socket (with a docker.sock symlink, so I don't have to change anything from /var/run/docker.sock) to build docker images and upload them to gitlab. Very minimal changes were needed (but there are one or two small ones, see Building Images).

I don't know why Portainer wouldn't work with an agent; it should work just fine. A cursory internet search says it does.

Post getting shit working analysis

Podman Vs Docker:

Docker pros:

  • far superior in the ease-of-getting-it-going category, likely across a large range of distros. 👍
  • is where I started. 👍
  • cursory results showed images building faster - however - this is skewed because of the first con

Docker cons:

Podman Pros:

  • Kubernetes' base runtime, runc, is podman. (This is a misnomer: runc is the low-level container runtime for both podman and docker by default, and kubernetes can also use it.)

Podman Cons:

  • Not client/server, so docker.sock is a hack; it's there though as /var/run/podman/podman.sock (see the quick check after this list)
  • Not really designed to orchestrate containers, so restarting containers on reboot takes more effort (or newer versions)
  • It's not docker, so some things are not completely 1:1, such as the build command
  • OS support is still evolving; things are probably better on newer OS versions.
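
For what it's worth, the docker.sock hack from the first con can be sanity-checked quickly, since podman's socket speaks the Docker-compatible API (a sketch; assumes the rootful podman service is running):

# should print OK if the compat API is answering
curl --unix-socket /var/run/podman/podman.sock http://d/_ping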

Alpine

I like Alpine as the host OS for my VMs. It's lightweight, and in my experience it's easier to turn things on than it is to turn things off. I plan on running my VMs IPv6-only, so first I had to get the base image ready. The base image contained Docker before, so that had to be cleaned up.

apk del docker                          # remove docker from the base image
apk add podman netavark podman-docker   # podman, its network backend, and the docker CLI shim
rc-update add podman default            # start the rootful podman service on boot

Alpine Versions

I was using 3.17; I had installed cloud-init into it and things were working well. I installed podman, and things were looking good. I made a cloud image, tested it, and had to tweak boot scripts to recognize podman vs docker, because a lot of things changed. I added dhcpcd to the base image this time, and ran echo "ipv6only" >> /etc/dhcpcd.conf to append ipv6only to the config file. I stopped the image, made a clone as per my usual process, booted the clone, and logged in with the pre-set root ssh key. I enabled cloud-init, removed the pre-set ssh key, and locked the root user as per my usual security practices.

It worked. I was able to launch the gitlab runner on boot, but then of course I tried the reboot test, and on reboot the containers wouldn't come back up.

Updating to Alpine 3.18 brought changes that allow podman's rootful service to restart containers started with --restart=always.
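
So on 3.18, starting a rootful container like this is enough for it to come back after a reboot (a sketch; the image, name, and mounts are just placeholders for my runner setup):

# --restart=always is what the rootful podman service picks up on boot
podman run -d --restart=always --name gitlab-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gitlab/gitlab-runner:latest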

I had issues with updating, though (see Alpine 3.18 update issues below).

Cloud-Init

After I set up a base image with the stuff I want, I edited /etc/dhcpcd.conf and commented out 'hostname' so my later scripts can uncomment it after it's been set by cloud-init. Otherwise dhcpcd will set the hostname to the received IPv6 address, and resetting a hostname in my configuration is difficult. The following commands are used to get the cloud-init image into a clean state, so on next boot it will generate a new duid, secret, and all the other per-machine state, along with cloud-init. Cloud-init can then set the hostname, and the boot script can finally run dhcpcd -k && dhcpcd to register it. (Testing this now.)

PS C:\Users\deadc0de> ssh fd60::fc5 -l root
Welcome to Alpine!

The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <https://wiki.alpinelinux.org/>.

You can setup the system with the command: setup-alpine

You may change this message by editing /etc/motd.

10gb-alpine:~# hostname "localhost"
localhost:~# echo "localhost" > /etc/hostname
localhost:~# nano /etc/dhcpcd.conf
localhost:~# echo "" > /root/.ssh/authorized_keys
localhost:~# passwd -l root
passwd: password changed.
localhost:~# setup-cloud-init
Enabling cloud-init's boot services...
localhost:~# rc-update -a -u
 * Caching service dependencies ...                                                                                                                                                                        [ ok ]
localhost:~# service dhcpcd stop
localhost:~# rm -r /var/lib/dhcpcd/*
localhost:~#

Minimal commands; the dhcpcd state can also be cleared by the cloud-init boot script:

nano /etc/dhcpcd.conf                    # comment out 'hostname'
hostname "localhost"
echo "localhost" > /etc/hostname
setup-cloud-init                         # enable cloud-init's boot services
rc-update -a -u                          # re-cache service dependencies
echo "" > /root/.ssh/authorized_keys     # remove the pre-set root ssh key
passwd -l root                           # lock the root account
service dhcpcd stop
rm -r /var/lib/dhcpcd/*                  # clear the duid/secret so clones regenerate them

Terraform

Terraform started acting up when I didn't give the VM any IPv4 on boot. Everything would work fine, but terraform would get stuck 'creating vm' forever. So I removed the ipv6only declaration from /etc/dhcpcd.conf, and the following script is run by cloud-init on boot instead. This way the VM grabs an IPv4 address on first boot, which it holds for maybe 3-5 seconds, but that should be long enough for whatever terraform is waiting on to move along.

restart_dhcpcd() {
  # Fully reset dhcpcd: release leases, clear cached state, flush
  # addresses, then start fresh so config changes take effect.
  dhcpcd -k || true
  service dhcpcd stop || true
  rm /var/lib/dhcpcd/* || true
  ip addr flush dev eth0 || true
  service dhcpcd start || true
  dhcpcd || true
}
enable_hostname_dhcpcd() {
  # 'norestart_dhcpcd' defaults to true and is cleared whenever a config
  # change is made, so dhcpcd is only restarted when something changed.
  norestart_dhcpcd=true

  # If the duid option is missing, the config was clobbered; restore it
  # from the pristine copy apk left behind.
  if ! grep -q "^duid$" /etc/dhcpcd.conf; then
    cp /etc/dhcpcd.conf.apk-new /etc/dhcpcd.conf
  fi
  # Re-add ipv6only now that terraform has seen its first-boot IPv4.
  if ! grep -q "^ipv6only$" /etc/dhcpcd.conf; then
    echo "ipv6only" >> /etc/dhcpcd.conf
    norestart_dhcpcd=""
  fi
  # Uncomment 'hostname' now that cloud-init has set the real hostname.
  pattern="^#hostname$"
  if grep -q "$pattern" /etc/dhcpcd.conf; then
    replacement="hostname"

    # Use sed to perform the replacement in the file
    sed -i "s/$pattern/$replacement/" /etc/dhcpcd.conf
    norestart_dhcpcd=""
  fi
  if [ "$norestart_dhcpcd" = true ]; then
    echo "No dhcpcd restart."
  else
    restart_dhcpcd
  fi
}

Alpine 3.18 update issues

3.18 provides pyserial and python-netifaces as apk packages now, so pip is no longer required. I had modified /etc/init.d/cloud-init to add need podman to the depend() section. On update, if a config file is to be replaced but has been modified, apk will create a .apk-new (or similarly labeled) file next to it. This conflict has to be resolved and the extra file removed, or problems will happen due to two files defining the same service names.
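
For reference, the tweak to /etc/init.d/cloud-init looks roughly like this (a sketch; the rest of the depend block is whatever your version ships with):

depend() {
        # ...existing entries left as-is...
        need podman    # wait for the rootful podman service
}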

After resolving these conflicts things began to work again.

I also updated /etc/rc.conf, I believe, to change the cgroups from hybrid (the default) to unified (cgroups v2).
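
If memory serves, that's the rc_cgroup_mode knob in OpenRC:

# /etc/rc.conf
rc_cgroup_mode="unified"    # default is "hybrid"; "unified" is cgroups v2 only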

Networking

To set up networking I had to change the default podman network settings. It turns out it's pretty easy. First, create a network, e.g. podman1, with the settings you want, like IPv6 enabled with a ULA address range. Something like: podman network create --ipv6 --subnet fd00:1:2::/64 podman1. Then find the generated file /etc/containers/networks/podman1.json, rename the file to podman.json, open the file with your favorite text editor, and edit the name from podman1 to podman. Now you should have a dual-stack IPv4-and-IPv6 default podman network.
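
Roughly, the whole dance boils down to this (assuming the netavark backend stores its network configs under /etc/containers/networks):

podman network create --ipv6 --subnet fd00:1:2::/64 podman1
cd /etc/containers/networks
mv podman1.json podman.json
# change the "name" field from podman1 to podman
sed -i 's/"name": "podman1"/"name": "podman"/' podman.json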

Because I'm using IPv6 only on the host, I simply highlighted and erased the IPv4 subnet and gateway from podman.json. This should hopefully prevent containers from trying to access anything over IPv4.
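
The trimmed file ends up looking something like this (a hand-edited sketch; the id and created values are machine-generated, and the exact field set may vary by podman version):

{
     "name": "podman",
     "id": "<generated>",
     "driver": "bridge",
     "network_interface": "podman1",
     "created": "<generated>",
     "subnets": [
          {
               "subnet": "fd00:1:2::/64",
               "gateway": "fd00:1:2::1"
          }
     ],
     "ipv6_enabled": true,
     "internal": false,
     "dns_enabled": true,
     "ipam_options": {
          "driver": "host-local"
     }
}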

Gitlab Podman DinD differences

In order to keep changes to a minimum for every other component in the setup, I modified the cloud image's /etc/init.d/podman to add ln -s /var/run/podman/podman.sock /var/run/docker.sock || true (make sure to append the || true, or the script will fail if the symlink already exists). This way I don't have to change the socket location in any of my configurations away from docker.

...
                einfo "Configured as rootful service"
                checkpath -d -m 0755 /run/podman
                ln -s /var/run/podman/podman.sock /var/run/docker.sock || true
        else
                einfo "Configured as rootless service"
...

Building images

I'm using image: docker:24.0 in my gitlab CI/CD jobs, now with podman.sock actually running things, but still using the docker CLI to build images, to keep changes minimal.

Almost everything worked; then, when I tried to push images to the repository, things blew up.

It failed saying the image wasn't in the repository, and of course it wasn't. Podman requires a slight difference in the build command to behave the same way as docker: --output type=docker must be added to the docker build command.
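
So the build-and-push step in the job ends up something like this (a sketch; $CI_REGISTRY_IMAGE and $CI_COMMIT_SHORT_SHA are GitLab's predefined variables):

docker build --output type=docker -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"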

Now builds work.

Also of note: docker system prune -a -f produces a strange nonzero exit code even though it seems to still work, so more || true's can be littered in.
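
For example:

docker system prune -a -f || true    # ignore the spurious nonzero exit code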

Current State

  • Rootful --restart-always containers restarting on reboot 🖥️✅
  • Gitlab docker image build runners 🖥️✅

I think it's minimally ready to go.