12 Mar 2015
CoreOS is a minimal Linux distribution created to serve as a host for running Docker containers. Here’s some info on how to try it out on vSphere.
Since stable release 557, CoreOS comes packaged as an OVA and is supported as a guest OS on vSphere 5.5 and above. Deployment instructions can be found in VMware KB 2109161 and are straightforward, with the exception of having to modify the boot parameters in order to change the password for the core user before logging in for the first time.
vCenter customization of CoreOS is currently not supported; customization can be done only through coreos-cloudinit. This is the CoreOS implementation of cloud-init, a mechanism for boot-time customization of Linux instances, which can be employed on vSphere with some manual effort. More info on coreos-cloudinit and its configuration files, called cloud-config, can be found on the CoreOS Cloud-Config documentation page, while for a tutorial on how to make cloud-config work on vSphere take a look at CoreOS with Cloud-Config on VMware ESXi.
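For a flavor of what such a file looks like, here is a minimal cloud-config sketch; the hostname and SSH key below are hypothetical placeholders, and coreos-cloudinit recognizes the file by its mandatory #cloud-config first line:

```shell
# Minimal cloud-config sketch (made-up values). On vSphere the file is
# typically handed to the VM out-of-band, e.g. via an attached config ISO,
# as described in the tutorial linked above.
cat > cloud-config.yml <<'EOF'
#cloud-config
hostname: coreos-demo
ssh_authorized_keys:
  - ssh-rsa AAAA... core@example
EOF
head -n 1 cloud-config.yml   # the first line must be exactly "#cloud-config"
```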
CoreOS comes with integrated open-vm-tools, an open source implementation of VMware Tools, which is quite handy since CoreOS offers neither a package manager nor Perl, so there is no way to manually install VMware Tools after deployment. According to the VMware Compatibility Guide, the open-vm-tools package has recently become the recommended way of running VMware Tools on newer Linux distros and vSphere editions (e.g. RHEL/CentOS 7.x and Ubuntu >= 14.04 on vSphere 5.5 and above). For more info on open-vm-tools head to VMware KB 2073803.
As for CoreOS stable releases prior to 557, releases 522 and 494 are supported on vSphere as a Technical Preview. Since they don’t come prepackaged in a format that can be used directly on vSphere, their deployment involves one additional step - converting the downloaded .vmx file to OVF via OVF Tool. Check out VMware KB 2104303 for detailed installation instructions.
After logging in for the first time, you’ll probably want to start building and running containers. Obviously, Docker comes preinstalled and CoreOS currently uses BTRFS as the filesystem for storing images and containers:
$ docker -v
Docker version 1.4.1, build 5bc2ff8-dirty
$ docker info
Storage Driver: btrfs
Build Version: Btrfs v3.17.1
Library Version: 101
Execution Driver: native-0.2
Kernel Version: 3.18.1
Operating System: CoreOS 557.2.0
Total Memory: 5.833 GiB
As mentioned before, CoreOS doesn’t offer a way to install additional packages directly into the OS, but provides Toolbox, by default a stock Fedora Docker container that can be used for installing sysadmin tools. Toolbox is started via /usr/bin/toolbox; the first execution will pull and run the container:
fedora:latest: The image you are pulling has been verified
00a0c78eeb6d: Pull complete
834629358fe2: Pulling fs layer
834629358fe2: Pull complete
Status: Downloaded newer image for fedora:latest
Spawning container core-fedora-latest on /var/lib/toolbox/core-fedora-latest.
Press ^] three times within 1s to kill container.
After that, packages are a yum install <package> away, while the CoreOS filesystem is also mounted inside the container.
Compared to traditional Linux distributions, CoreOS takes a fundamentally different approach to performing updates, which involves automatic upgrades of the complete OS as soon as a new release is available. Therefore, take a look at the CoreOS Update Philosophy in order not to be surprised when your container hosts start automatically upgrading and rebooting themselves.
For running containers at scale, check out CoreOS documentation pages on etcd and fleet.
06 Feb 2015
Just wanted to share a great tip by Ather Beg on how to display all of an ESXi host’s advanced settings that differ from their default values via ESXCLI:
esxcli system settings advanced list -d
This will return a list of advanced settings similar to:
Int Value: 32
Default Int Value: 0
Min Value: 0
Max Value: 32
Default String Value:
Description: Initial size of the tcpip module heap in megabytes. (REQUIRES REBOOT!)
Int Value: 256
Default Int Value: 8
Min Value: 8
Max Value: 256
Default String Value:
Description: Maximum number of mounted NFS volumes. TCP/IP heap must be increased accordingly (Requires reboot)
where Int Value / String Value (depending on whether the parameter stores its value as an integer or a string) shows the current value of the parameter, while Default Int Value / Default String Value shows its default.
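Since the output runs to several lines per setting, it can help to condense it to one line per changed parameter. A small awk sketch, fed here with the sample output above as a stand-in for the live command:

```shell
# sample() replays the two settings shown above; on a host you would pipe
# `esxcli system settings advanced list -d` into the same awk filter.
sample() {
cat <<'EOF'
Int Value: 32
Default Int Value: 0
Min Value: 0
Max Value: 32
Default String Value:
Description: Initial size of the tcpip module heap in megabytes. (REQUIRES REBOOT!)
Int Value: 256
Default Int Value: 8
Min Value: 8
Max Value: 256
Default String Value:
Description: Maximum number of mounted NFS volumes. TCP/IP heap must be increased accordingly (Requires reboot)
EOF
}
# Remember the current and default integer values, then print them next to
# the description on a single line.
sample | awk -F': ' '
  /^Int Value/         { cur = $2 }
  /^Default Int Value/ { def = $2 }
  /^Description/       { printf "%s [current: %s, default: %s]\n", $2, cur, def }'
```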
Source: ProTip: How to remind yourself of Advanced Settings changes in ESXi
24 Jan 2015
A few months ago, VMware published information about a pretty catastrophic vSphere bug described in VMware KB 2090639. TL;DR version: if you have expanded a CBT-enabled .vmdk past any 128GB boundary (e.g. 128GB, 256GB, 512GB etc.), all backups taken since then are possibly corrupted.
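To make the trigger concrete: crossing a boundary simply means the integer part of size/128 changes during the expansion. A quick shell sketch with made-up disk sizes:

```shell
# Hypothetical example: a CBT-enabled disk expanded from 120 GB to 130 GB.
# Integer-dividing both sizes by 128 tells us whether a multiple of 128 GB
# was crossed along the way.
old_gb=120
new_gb=130
if [ $((old_gb / 128)) -ne $((new_gb / 128)) ]; then
  echo "crossed a 128 GB boundary - CBT reset recommended"
else
  echo "no 128 GB boundary crossed"
fi
```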
The above KB article provides workaround information for this issue, which involves resetting CBT for the affected VMs. This can be accomplished either via shutting down the VM and editing its Configuration Parameters or on-the-fly via PowerCLI / VDDK API. Veeam KB 1940 provides PowerCLI code which can be employed for resetting CBT for all CBT-enabled VMs in the vCenter inventory, without the need for shutting them down first.
I have adapted this code to a quick-and-dirty PowerCLI script, which can be found here (for those not familiar with Github, clicking on raw will take you directly to the script).
Just take notice that the script will create and instantly delete a snapshot for all CBT-enabled VMs, so it’s probably a good idea not to execute the script during work hours. Also, the first following backup will last longer, since backup software won’t be able to use CBT information and will have to scan the whole .vmdk for changed blocks.
15 Jan 2015
A limited set of familiar POSIX-like tools and utilities is available on ESXi, courtesy of the busybox executable:
~ # /usr/lib/vmware/busybox/bin/busybox --list
As you can see, one of the tools present is wget, which can be used for downloading files (e.g. installation ISOs, VIBs, offline bundles..) directly from the ESXi Shell, instead of first downloading them locally to your desktop or jumphost and then uploading them to hosts or datastores.
First, connect to the ESXi Shell over SSH or DCUI and cd into the destination directory, which can be e.g. a shared datastore available to all hosts in your cluster or the /tmp directory (a common choice for VIBs and offline bundles, since it gets emptied on reboot). After that, just fire wget away as wget <file URL>:
~ # cd /vmfs/volumes/ISO_images/
.. # wget http://releases.ubuntu.com/14.04.1/ubuntu-14.04.1-server-amd64.iso
Connecting to releases.ubuntu.com (126.96.36.199:80)
ubuntu-14.04.1-serve 100% |*****************************| 572M 0:00:00 ETA
Here we downloaded the Ubuntu Server installation ISO directly to our “ISO_images” datastore.
Direct installation of VIBs from a URL
Downloading installation ISOs this way is far from a best practice, since using your host’s resources for downloading large files from the Internet is rarely a good idea, but the wget approach can save you some time if you often manually install VIBs or offline bundles on your ESXi hosts.
For VIB files, an alternative to wget-ing and then installing the VIB is to supply the URL of the VIB file directly to the esxcli software vib install command:
esxcli software vib install -v <URL of the VIB file>
/tmp # esxcli software vib install -v http://vibsdepot.v-front.de/depot/DAST/iperf-2.0.5/iperf-2.0.5-1.x86_64.vib --no-sig-check
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: DAST_bootbank_iperf_2.0.5-1
Here we installed the hypervisor.fr iperf for ESXi VIB directly from the V-Front Software Depot. --no-sig-check is there to bypass signature verification, since the iperf VIB we are installing doesn’t have a signature.
As esxcli software vib install --help informs us, the supplied URL can be HTTP, HTTPS or FTP. URLs can be supplied only to the command’s -v switch, so this approach is limited to VIB files and is not available for offline bundles.
28 Nov 2014
A way to search through the available ESXCLI commands from the ESXi Shell, vCLI or vMA:
esxcli esxcli command list | grep <keyword>
~ # esxcli esxcli command list | grep snmp
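Since the filtering here is plain grep, the usual flags apply — e.g. -i casts a wider net by matching regardless of case. A quick simulation with a few made-up command-list lines (the real output comes from an ESXi host, so a tiny stand-in function is used instead):

```shell
# command_list() stands in for `esxcli esxcli command list`; the lines
# below are made-up placeholders, not real esxcli output.
command_list() {
cat <<'EOF'
system.snmp.get
system.snmp.set
network.ip.interface.list
EOF
}
# -i makes the match case-insensitive, so SNMP and snmp both hit.
command_list | grep -i SNMP
```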