random writes - A systems engineer's blog

Trying out CoreOS on vSphere

CoreOS is a minimal Linux distribution created to serve as a host for running Docker containers. Here’s some info on how to try it out on vSphere.

Since stable release 557, CoreOS comes packaged as an OVA and is supported as a guest OS on vSphere 5.5 and above. Deployment instructions can be found in VMware KB 2109161 and they are pretty straightforward, with the exception that you need to modify the boot parameters in order to set a password for the core user before logging in for the first time.
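
As a rough sketch of that step (the exact procedure and parameter are in the KB; treat this as an illustration of the flow): interrupt GRUB, append the autologin parameter to the kernel line, boot, and set the password from the console that logs you in automatically.

# At the GRUB menu, press 'e' and append the following to the kernel command line
# (illustrative; follow KB 2109161 for the exact steps):
#   coreos.autologin=tty1
# After booting, set a password for the core user from the auto-logged-in console:
sudo passwd core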

vCenter customization of CoreOS is currently not supported, so customization can be done only through coreos-cloudinit. This is the CoreOS implementation of cloud-init, a mechanism for boot-time customization of Linux instances, which can be employed on vSphere with some manual effort. More info on coreos-cloudinit and its configuration files, called cloud-config, can be found on the CoreOS Cloud-Config Documentation page, while for a tutorial on how to make cloud-config work on vSphere take a look at CoreOS with Cloud-Config on VMware ESXi.
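
As a minimal sketch of what that looks like (the hostname and SSH key below are placeholders, and the guestinfo property names come from the CoreOS VMware documentation, so verify them against the tutorial above): write a cloud-config file, base64-encode it and hand it to the VM via guestinfo variables.

# Example cloud-config, written on your workstation:
cat > cloud-config.yml <<'EOF'
#cloud-config
hostname: core01
ssh_authorized_keys:
  - ssh-rsa AAAA... you@workstation
EOF
# Base64-encode it (GNU coreutils base64; -w0 disables line wrapping) and set
# the result in the VM's advanced configuration parameters:
#   guestinfo.coreos.config.data          = <output of the command below>
#   guestinfo.coreos.config.data.encoding = base64
base64 -w0 cloud-config.yml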

CoreOS comes with integrated open-vm-tools, an open source implementation of VMware Tools, which is quite handy since CoreOS doesn’t offer a package manager or Perl, so there is no way to manually install VMware Tools after deployment. According to the VMware Compatibility Guide, the open-vm-tools package has recently become the recommended way of running VMware Tools on newer Linux distros and vSphere editions (e.g. RHEL/CentOS 7.x and Ubuntu >=14.04 on vSphere 5.5 and above). For more info on open-vm-tools head to VMware KB 2073803.

As for CoreOS stable releases prior to 557, releases 522 and 494 are supported on vSphere as a Technical Preview. Since they don’t come prepackaged in a format that can be used directly on vSphere, their deployment involves one additional step - converting the downloaded .vmx file to OVF via OVF Tool. Check out VMware KB 2104303 for detailed installation instructions.
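
For reference, the conversion itself is a single OVF Tool invocation (file names here are illustrative; the KB lists the exact ones):

ovftool coreos_production_vmware.vmx coreos_production_vmware.ovf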

CoreOS quickstart

After logging in for the first time, you’ll probably want to start building and running containers. Obviously, Docker comes preinstalled and CoreOS currently uses BTRFS as the filesystem for storing images and containers:

$ docker -v
Docker version 1.4.1, build 5bc2ff8-dirty
$ docker info
Containers: 0
Images: 23
Storage Driver: btrfs
 Build Version: Btrfs v3.17.1
 Library Version: 101
Execution Driver: native-0.2
Kernel Version: 3.18.1
Operating System: CoreOS 557.2.0
CPUs: 2
Total Memory: 5.833 GiB
Name: core557
ID: 2BUV:642W:WTZQ:3L4O:FFIY:JOC5:XKO2:3QPC:ADEJ:LSCS:QS5K:XHKB

As mentioned before, CoreOS doesn’t offer a way to install additional packages directly to the OS, but provides Toolbox, which is by default a stock Fedora Docker container that can be used for installing sysadmin tools. Toolbox can be run using /usr/bin/toolbox, first execution of which will pull and run the container:

$ toolbox 
fedora:latest: The image you are pulling has been verified
00a0c78eeb6d: Pull complete 
834629358fe2: Pulling fs layer 
834629358fe2: Pull complete 
Status: Downloaded newer image for fedora:latest
core-fedora-latest
Spawning container core-fedora-latest on /var/lib/toolbox/core-fedora-latest.
Press ^] three times within 1s to kill container.
-bash-4.3#

After that, packages are just a yum install <package> away, and the CoreOS filesystem is mounted inside the container at /media/root.
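
For example (the package, interface name and paths here are just illustrations), once inside the toolbox you can install tools and run them against the host, since the container shares the host's network and sees its filesystem under /media/root:

-bash-4.3# yum -y install tcpdump
-bash-4.3# tcpdump -i ens192 port 80        # interface name will vary
-bash-4.3# ls /media/root/etc               # the host's /etc, seen from inside the container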

Further reading

Compared to traditional Linux distributions, CoreOS has a fundamentally different approach to performing updates, which involves automatic upgrades of the complete OS as soon as a new release is available. Therefore, take a look at the CoreOS Update Philosophy in order not to be surprised when your container hosts start automatically upgrading and rebooting themselves.
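
If you do want some say in when those reboots happen, the reboot strategy can be tuned; as a minimal sketch (strategy names per the CoreOS update documentation, so double-check them there), the setting lives in /etc/coreos/update.conf:

# /etc/coreos/update.conf (see the update docs for the list of valid strategies)
REBOOT_STRATEGY=etcd-lock    # or: best-effort, reboot, off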

For running containers at scale, check out CoreOS documentation pages on etcd and fleet.

List host's modified advanced settings via ESXCLI

Just wanted to share a great tip by Ather Beg on how to display all of an ESXi host’s advanced settings that differ from their default values via ESXCLI:

esxcli system settings advanced list -d

This will return a list of advanced settings similar to:

...

   Path: /Net/TcpipHeapSize
   Type: integer
   Int Value: 32
   Default Int Value: 0
   Min Value: 0
   Max Value: 32
   String Value: 
   Default String Value: 
   Valid Characters: 
   Description: Initial size of the tcpip module heap in megabytes. (REQUIRES REBOOT!)

   Path: /NFS/MaxVolumes
   Type: integer
   Int Value: 256
   Default Int Value: 8
   Min Value: 8
   Max Value: 256
   String Value: 
   Default String Value: 
   Valid Characters: 
   Description: Maximum number of mounted NFS volumes. TCP/IP heap must be increased accordingly (Requires reboot)

...

where Int Value / String Value (depending on whether the parameter stores its value as an integer or a string) shows the current value of the parameter, while Default Int Value / Default String Value shows its default value.
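
If you just want to check or change a single parameter, the same esxcli namespace takes -o for targeting one option (using /NFS/MaxVolumes from the output above as the example):

esxcli system settings advanced list -o /NFS/MaxVolumes
esxcli system settings advanced set -o /NFS/MaxVolumes -i 256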

Source: ProTip: How to remind yourself of Advanced Settings changes in ESXi

vSphere CBT bug (and PowerCLI to the rescue)

A few months ago, VMware published information about a pretty catastrophic vSphere bug, described in VMware KB 2090639. TL;DR version: if you have expanded a CBT-enabled .vmdk past any 128GB boundary (e.g. 128GB, 256GB, 512GB etc.), all backups taken since then are possibly corrupted.

The above KB article provides workaround information for this issue, which involves resetting CBT for the affected VMs. This can be accomplished either by shutting down the VM and editing its Configuration Parameters, or on the fly via PowerCLI / the VDDK API. Veeam KB 1940 provides PowerCLI code which can be employed for resetting CBT for all CBT-enabled VMs in the vCenter inventory, without the need to shut them down first.

I have adapted this code into a quick-and-dirty PowerCLI script, which can be found here (for those not familiar with GitHub, clicking on Raw will take you directly to the script).

Just note that the script will create and instantly delete a snapshot for all CBT-enabled VMs, so it’s probably a good idea not to execute it during work hours. Also, the first backup after the reset will take longer, since the backup software won’t be able to use CBT information and will have to scan the whole .vmdk for changed blocks.

Downloading files with wget on ESXi

A limited set of familiar POSIX-like tools and utilities is available on ESXi, courtesy of the busybox executable:

~ # /usr/lib/vmware/busybox/bin/busybox --list
[
[[
addgroup
adduser
ash
awk
basename
cat
chgrp
chmod
chown
chvt
cksum
clear
cp
crond
cut
date
dd
delgroup
deluser
diff
dirname
dnsdomainname
du
echo
egrep
eject
env
expr
false
fdisk
fgrep
find
getty
grep
groups
gunzip
gzip
halt
head
hexdump
hostname
inetd
init
kill
ln
logger
login
ls
lzop
lzopcat
md5sum
mkdir
mkfifo
mknod
mktemp
more
mv
nohup
nslookup
od
passwd
poweroff
printf
readlink
reboot
reset
resize
rm
rmdir
sed
seq
setsid
sh
sha1sum
sha256sum
sha512sum
sleep
sort
stat
stty
sum
sync
tail
tar
tee
test
time
timeout
touch
true
uname
uniq
unlzop
unzip
usleep
vi
watch
wc
wget
which
who
xargs
zcat

As you can see, one of the tools present is wget, which can be used for downloading files (e.g. installation ISOs, VIBs, offline bundles...) directly from the ESXi Shell, instead of first downloading them to your desktop or jump host and then uploading them to hosts or datastores.

First, connect to the ESXi Shell over SSH or the DCUI and cd into the destination directory, which can be e.g. a shared datastore available to all hosts in your cluster:

cd /vmfs/volumes/datastore_name_here

or the host’s /tmp directory (a common choice for VIBs and offline bundles since it gets emptied on reboot). After that, just fire wget away as wget <file URL>:

~ # cd /vmfs/volumes/ISO_images/
.. # wget http://releases.ubuntu.com/14.04.1/ubuntu-14.04.1-server-amd64.iso
Connecting to releases.ubuntu.com (91.189.92.163:80)
ubuntu-14.04.1-serve 100% |*****************************|   572M  0:00:00 ETA

Here we downloaded the Ubuntu Server installation ISO directly to our “ISO_images” datastore.

Direct installation of VIBs from an URL

Downloading installation ISOs this way is far from a best practice, since it’s probably not the best idea to use your host’s resources for downloading large files from the Internet, but the wget approach can save you some time if you’re often manually installing VIBs or offline bundles on your ESXi hosts.

For VIB files, an alternative to wget-ing and then installing the VIB would be to directly supply the URL of the VIB file to the esxcli software vib install command:

esxcli software vib install -v <URL of the VIB file>

e.g.

/tmp # esxcli software vib install -v http://vibsdepot.v-front.de/depot/DAST/iperf-2.0.5/iperf-2.0.5-1.x86_64.vib --no-sig-check
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: DAST_bootbank_iperf_2.0.5-1
   VIBs Removed: 
   VIBs Skipped: 

Here we installed the hypervisor.fr iperf for ESXi VIB directly from the V-Front Software Depot. --no-sig-check is there to bypass signature verification, since the iperf VIB we are installing isn’t signed.

As esxcli software vib install --help informs us, the supplied URL can be HTTP, HTTPS or FTP. URLs can be supplied only to the command’s -v switch, so this approach is limited to VIB files and not available for offline bundles.
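
Offline bundles can still benefit from the wget approach, though: download the .zip to the host first and then point the -d switch at the local path (the bundle name and URL below are made up):

/tmp # wget http://example.com/vendor-offline-bundle.zip
/tmp # esxcli software vib install -d /tmp/vendor-offline-bundle.zip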

Search available ESXCLI commands

A way to search through available ESXCLI commands from ESXi Shell, vCLI or vMA:

esxcli esxcli command list | grep <keyword>

E.g.

~ # esxcli esxcli command list | grep snmp
system.snmp                                             get         
system.snmp                                             hash        
system.snmp                                             set         
system.snmp                                             test