WARNING: I’m installing vCloud Director to a non-supported OS!
Ironically, I think this may well be my last BIG blogpost about vCloud Director for a while, as I'm moving on to look at vCloud Automation Center. That said, I am running the next version of vCloud Director in my vINCEPTION lab, so I do intend a series of "what's new" blogposts when it is released (don't ask me when, and there are no prizes for guessing!)
Yes, I know – what a funny blogpost title: installing vCloud Director. But despite being at VMware for just over 6 months I haven't actually installed vCloud Director yet. I've been using the vCD Virtual Appliance all this time – which, so long as you're just working in a demo/lab environment, is such a time saver. I'm beginning to look at the next generation of vCloud Director – and at the moment we don't have a VA of these "internal only" builds. They have to be installed. So I thought this would be a good opportunity to do this in anger – and go the whole 9 yards – multi-cell with load-balancing and trusted certificates.
My original goal came off the rails. I wanted to do this all with CentOS 6 and use Oracle XE 10g – get it all working and then distribute…. However, the setup of Oracle XE was such a royal pain in the arse that I gave up, admitted defeat and resorted to Microsoft Windows.
- The OS Part – CentOS and vCloud Director
- Setting up Microsoft SQL 2008 for vCloud Director
- Configuring a Shared Transfer Location for MultiCell Configuration
- Installing vCloud Director
- Configuring Certificates for vCloud Director using a Microsoft ROOT CA on Windows 2008 R2
- Running vCloud Director Configuration with Microsoft SQL 2008 Server
- Configuring the Second vCloud Director Cell
- Conclusions
The OS Part – CentOS and vCloud Director
One thing I noticed after doing a "basic server" install of CentOS was that the network cards weren't enabled on boot. So I had to modify /etc/sysconfig/network-scripts/ifcfg-eth0 and change ONBOOT=no to ONBOOT=yes. My saddle/beard-wearing Linux friends tell me this happens for security and also because a multi-homed Linux box could be set up for teaming. According to some, Linux NIC-teaming is a ScaryVille experience that should only be undertaken if someone is there to hold your hand. So there we go. You make something more secure by disabling network cards – or alternatively burying it in concrete under a mountain. There's usability and an out-of-the-box experience for you. [Joking apart, I guess I've never seen this before because all my Linux instances have been virtual ones, where the rather wonderful ESX host handles all the NIC-teaming at the hypervisor level. That's something our friends over at Microsoft have just implemented in Windows Server 2012 Hyper-V. Something we have been doing since the ESX 2.x days in 2003/4. Sorry to make the dig – but I guess that's progress for you… 🙂 ]
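For the record, the fix itself can be scripted – a minimal sketch, assuming the interface is eth0 and the stock CentOS 6 file layout:

```shell
# Flip ONBOOT from no to yes so eth0 comes up at boot
# (path assumes the default CentOS 6 layout; adjust for your interface)
sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0
# Bounce the network service so the change takes effect immediately
service network restart
```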
Anyway, I digress. The other thing I did was run "yum update" on my CentOS system and then use RPM (rpm -q <rpmname>) to check all the dependencies for vCloud Director were met. I used KB2034092, "Installing vCloud Director 5.1.x Best Practices", because it has a list of all the required RPMs needed for the product. I also disabled the firewall and SELinux as well; as this will just be my lab environment, I didn't want the hassle of securing/hardening Linux…
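If you'd rather script the dependency check than eyeball it, a simple loop does the job – the package names below are just a sample for illustration, not the full prerequisite list from the KB:

```shell
# Report any prerequisite RPMs that are missing
# (sample names only - substitute the full list from KB2034092)
for pkg in alsa-lib bash chkconfig coreutils glibc libgcc module-init-tools; do
  rpm -q "$pkg" >/dev/null 2>&1 || echo "MISSING: $pkg"
done
```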
My builds for my two vCD cells (CentOS) and the vCD database (Microsoft SQL 2008 R2) were pretty much standard – 2 NICs for the cells (for the HTTP and ConsoleProxy services) and just a single NIC for the DB – and I gave them all 2GB RAM and 2 vCPUs. After IP'ing up the interfaces, I registered vcdcell01.corp.com, vcdcell02.corp.com and sql2k8nj.corp.com in my internal DNS and made sure I had a reverse lookup zone for the IP range I'm using on my Organization Network (172.168.5.x).
Of course, as I'm now beginning to work with the next releases of this stuff, it was easier to use 'vINCEPTION' to do this work than it was to dedicate hardware to it.
Note: Remember vINCEPTION is not officially supported – and if you are working with next-generation versions of the vCloud Suite you should really install them as they would be in production – critically, the ESX host should be installed to physical in production; the rest can be virtual, of course.
Setting up Microsoft SQL 2008 for vCloud Director
I installed Microsoft SQL 2008 with the usual settings (remembering to include the management tools) – and remembering to enable "Mixed Mode" authentication, because vCloud Director doesn't support Windows Authentication but requires SQL Authentication. After the main install I downloaded Microsoft SQL 2008 Service Pack 3. I used the SQL Server Configuration Manager to enable "Named Pipes" and restarted the SQL services. Without Named Pipes enabled you will find the SQL Server Management Studio application (which allows you to create databases and set up permissions) will not work.
From this point onwards you have two choices – you could use the graphical tools to configure the database for vCloud Director, or alternatively there is a series of SQL queries you can run that handles the configuration for you, which is available in the official admin guides – it's easier for me to link you to the HTML version in the blogpost. I more or less stayed with these settings – except I've never been a fan of putting anything on the root of C:, so I adjusted the paths accordingly.
To Create the Database:
USE [master]
GO
CREATE DATABASE [vcloud] ON PRIMARY
(NAME = N'vcloud', FILENAME = N'D:\vcloud.mdf', SIZE = 100MB, FILEGROWTH = 10% )
LOG ON
(NAME = N'vcdb_log', FILENAME = N'D:\vcloud.ldf', SIZE = 1MB, FILEGROWTH = 10%)
COLLATE Latin1_General_CS_AS
GO
Set the Isolation Level:
USE [vcloud]
GO
ALTER DATABASE [vcloud] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [vcloud] SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE [vcloud] SET READ_COMMITTED_SNAPSHOT ON WITH NO_WAIT;
ALTER DATABASE [vcloud] SET MULTI_USER;
GO
Create the vCD DB user account called 'vcloud' with a password of 'vmware':
USE [vcloud]
GO
CREATE LOGIN [vcloud] WITH PASSWORD = 'vmware', DEFAULT_DATABASE =[vcloud],
DEFAULT_LANGUAGE =[us_english], CHECK_POLICY=OFF
GO
CREATE USER [vcloud] for LOGIN [vcloud]
GO
Set the 'vcloud' user to be the owner (db_owner) of the 'vcloud' database:
USE [vcloud]
GO
EXEC sp_addrolemember [db_owner], [vcloud]
GO
Configuring a Shared Transfer Area
In a multi-cell environment you should have a "Shared Transfer Area" – which is a la-dee-da phrase for an NFS mount point accessible to all the cells in the vCloud Director instance. This transfer area is used as a temporary location for moving files around – mainly into the vCloud Director catalog. I decided to create an NFS volume/export on my NetApp 2040, and then mount it to the vCloud Director cells. If you were doing this in a homelab you could give your vCloud Director cell access to your home NAS (assuming you have an Iomega/Drobo or suchlike); alternatively you could run something like FreeNAS as a VM alongside your vCloud Director cell.
1. I began by pinging my NFS server from the vCloud Director cell to make sure I had IP visibility…
2. Then, using System Manager in NetApp, I created a new volume like so:
3. Then I reviewed the permissions on the NFS exports – granting root access to the NFS export as you would if ESX was using the NFS share:
4. For added peace of mind – I made sure the export could be mounted from the vCloud Director cell and could be written to as well:
mkdir /nfstest
mount 172.168.4.89:/vol/vCDTransfer /nfstest
ls > /nfstest/test.txt
ls /nfstest
Now we know the transfer area works, we can unmount it for now with umount /nfstest.
Installing vCloud Director
The next step is installing vCloud Director itself. This means copying the vCloud Director .bin file over to the vCloud Director cell (or server, if you prefer) and executing it. I copied my .bin file to the host using WinSCP.
1. Before you jump in with both feet I would confirm you can ping/nslookup your SQL server. Initially, I found I couldn't. For some reason my CentOS didn't have the GATEWAY=172.168.5.1 entry in the /etc/sysconfig/network file. Using nano I fixed the issue, and after a restart of the network (service network restart) I could ping the SQL box and nslookup worked fine.
2. Next I needed to change the permissions on the vCloud Director .bin file so I could execute it. If you do a listing of the files with ls – if the file is in white in PuTTY it's just a file; if it's in green it's executable.
chmod u+x vmware-vcloud-director-X.X.0-<BUILDNUMBER>.bin
then I was able to execute with
./vmware-vcloud-director-X.X.0-<BUILDNUMBER>.bin
3. The installer will check your Linux distribution and your configuration. I was surprised to see the install warn me about not having 2GB, as my CentOS system was configured for 2GB. I'm thinking that clearly the OS itself takes up an amount of RAM, which then stops the vCloud Director service getting the recommended minimum. So I reverted the snapshot on the vCloud Director cell and increased the memory allocation.
4. Notice how the installer states: "If you will be deploying a vCloud Director cluster you must mount the shared transfer server storage prior to running the configuration script. If this is a single server deployment no shared storage is necessary." In my case I selected N for No, as we will use the directory structure created by the vCloud Director install itself as the NFS mount point. First, we make sure that the newly created "vcloud" user and group have rights to the transfer location:
chown -R “vcloud:vcloud” /opt/vmware/vcloud-director/data/transfer
then mount the NFS share for this boot:
mount 172.168.4.89:/vol/vCDTransfer /opt/vmware/vcloud-director/data/transfer
followed by editing the /etc/fstab file to make sure this NFS location was mounted for future boot times:
nano -w /etc/fstab
172.168.4.89:/vol/vCDTransfer /opt/vmware/vcloud-director/data/transfer nfs intr 0 0
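With the fstab entry in place, it's worth confirming the mount is actually live and writable before continuing – a quick sanity check (paths as above, from a default install):

```shell
# Confirm the transfer area is mounted and writable (default vCD paths;
# adjust if you installed elsewhere)
TRANSFER=/opt/vmware/vcloud-director/data/transfer
mount | grep -q "$TRANSFER" && echo "mounted" || echo "NOT mounted"
touch "$TRANSFER/write-test" && rm "$TRANSFER/write-test" \
  && echo "writable" || echo "NOT writable"
```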
With the transfer location set up we can now proceed with the post-installation configuration – connecting vCloud Director to the database.
Configuring Certificates for vCloud Director
For some time I've run my own internal Enterprise Root CA on Windows. I started doing this around the NT4/IIS 5.0 era when I used to teach Microsoft products – and I've kept my skills up to date ever since. It means I can generate certificates at will, for free, for nearly any domain name I like. Of course these certificates are untrusted until they are imported into both the server and client – but by using a domain-based root CA that's generally the case anyway. For my MBP I just downloaded and installed the root certificate from my CA. Things are a little trickier for stuff like the iPhone & iPad where you have to use their management tools to get your own root certificate on to the device. In the real world I would probably recommend using a commercial certificate authority. I prefer to use my own CA to keep my CA skills sharp.
vCloud Director has two NIC interfaces which support SSL connections – one for the core HTTPS service, and one for secure remote console connections. I've come across some blogposts that recommend a third NIC to make the publishing of the vCloud Director cells (aka servers) easier. But I'm sticking with 2 NICs for now… The first thing we need to do is generate certificate requests for these two NICs, and then take those requests over to the root CA and have them processed. Once issued they will need "importing" into the vCloud Director certificate store to be used by the installation and the service.
You will notice each of the "keytool" commands has an "-alias" entry. These are quite important. The alias values are used by the vCD installer to allocate the right certificate to the right interface – so "http" is used by the interface for the core web service, and "consoleproxy" is used by the interface that allows "Remote Console" sessions through the vCD cell and on to the VM…
[ TIP: If you don’t have a root CA and would rather just generate untrusted certificates then here’s a little tip. The default admin guide talks about using this command to generate these untrusted certificates:
keytool -keystore certificates.ks -storetype JCEKS -storepass passwd -genkey -keyalg RSA -alias http
By default this certificate will only last 90 days; you can quite easily extend these certificates to last a lot longer by using the "-validity" option:
keytool -keystore /install/certificates.ks -storetype JCEKS -storepass password -validity 9999 -genkey -keyalg RSA -alias http ]
As you can see, certificate management can be done using the "keytool" utility, which is based on Java Version 6. After the install, vCloud Director copies a version of this utility to /opt/vmware/vcloud-director/jre/bin/keytool. You will probably already have a copy of keytool on your vCloud Director cell, Windows PC, Linux PC or MacBookPro. You can check your Java version with the command:
java -version
On my MacBookPro I discovered I had the 1.6 version:
On my CentOS installation I discovered I had the newer 1.7 version:
I had problems using the 1.7 version. So I think the best bet is to ALWAYS use the version of keytool that ships with vCloud Director, to avoid unsupported cryptographic errors occurring.
1. I created a directory with mkdir /certs and changed into that directory with cd /certs
2. Then I issued the keytool command:
/opt/vmware/vcloud-director/jre/bin/keytool -keystore certificates.ks -storetype JCEKS -storepass Password1 -genkey -keyalg RSA -alias http
3. Next, I generated a certificate request to go with this certificate keyfile:
/opt/vmware/vcloud-director/jre/bin/keytool -keystore certificates.ks -storetype JCEKS -storepass Password1 -certreq -alias http -file http.csr
This will generate two files – the keyfile and the request file to be submitted to the root CA. You can cat the contents of the http.csr file, and it's this text that's normally cut and pasted (or uploaded) to the Certificate Authority for a trusted certificate to be issued:
4. We can repeat this process for the Remote Console Proxy connection/interface as well. All that changes here is the certificate alias and filenames:
/opt/vmware/vcloud-director/jre/bin/keytool -keystore certificates.ks -storetype JCEKS -storepass Password1 -genkey -keyalg RSA -alias consoleproxy
/opt/vmware/vcloud-director/jre/bin/keytool -keystore certificates.ks -storetype JCEKS -storepass Password1 -certreq -alias consoleproxy -file consoleproxy.csr
5. The next step is to head over to my Root CA and submit the .csr files to it. Given that CentOS is without a GUI, the easiest way is to cut and paste the contents of the .CSR file and submit that to the authority – although I guess you could use WinSCP to copy the files off, and then upload them that way. In my case I used the certsrv webpages on Windows 2008 R2 that are driven by IIS.
6. Select Request a certificate
7. Select Advanced Certificate Request
8. In the Saved Request edit box, paste the contents of the CSR – and from the pull-down list select "Web Server" as the type – and then click the Submit button
9. If you have logged on to certsrv as the administrator, the submission should be immediately approved – if not then you will have to wait for the administrator of the Root CA to either approve or decline your request. Click the "Download Certificate" option. I would recommend renaming the .CER files to something like http-DER-encoded.cer, or perhaps include the FQDN in the file name like "hybridcloud.corp.com-http.cer"…
10. Next, download the root.cer certificate from certsrv – which makes the vCloud Director server trust your self-administered Root CA. You can click the Home link, and select "Download CA certificate, certificate chain, or CRL"
11. Select Download CA Certificate link:
12. Next, upload the .cer files to the /certs directory on the first cell. In my case I used WinSCP to get the files in there
13. Next I imported the certificates into my store like so:
/opt/vmware/vcloud-director/jre/bin/keytool -storetype JCEKS -storepass Password1 -keystore certificates.ks -import -alias root -file corphqrootca.cer
choose [Yes] to trust the certificate
/opt/vmware/vcloud-director/jre/bin/keytool -storetype JCEKS -storepass Password1 -keystore certificates.ks -import -alias http -file hybridcloud.corp.com.cer
[This should pass with the message "Certificate reply was installed in keystore"]
Now rinse and repeat for the Remote Console Proxy .CSR file – submitting it to the Root CA and importing it with:
/opt/vmware/vcloud-director/jre/bin/keytool -storetype JCEKS -storepass Password1 -keystore certificates.ks -import -alias consoleproxy -file consoleproxy.corp.com.cer
The command:
/opt/vmware/vcloud-director/jre/bin/keytool -storetype JCEKS -storepass Password1 -keystore certificates.ks -list
will list all the certificates installed together with their certificate fingerprints – these fingerprints should match the imported .CER fingerprints. In this screen grab the "http" alias (which holds the hybridcloud.corp.com.cer certificate) has 29:DD as the last part of the thumbprint; in Windows, double-clicking the .CER file and looking under the "Details" tab at "Thumbprint" confirms there is a match. Notice how "root" is marked as "trustedCertEntry".
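If you'd rather compare fingerprints on the Linux side than click around in Windows, openssl can print the fingerprint of the issued certificate directly – a sketch, assuming the certsrv download is DER-encoded and using the filename from earlier:

```shell
# Print the SHA1 fingerprint of the issued certificate so it can be
# compared against the output of keytool -list
# (filename is the example from this walkthrough; -inform DER assumes
# the certsrv download was DER-encoded - use PEM if not)
openssl x509 -in hybridcloud.corp.com.cer -inform DER -noout -fingerprint -sha1
```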
The same process can be repeated on the second cell. Start at step 12 by copying the three certificates across (root, http, consoleproxy) and then use the keytool command to import them.
Running the vCloud Director Configure Utility for Microsoft SQL 2008
You can run the vCloud Director configuration tool using:
/opt/vmware/vcloud-director/bin/configure
Note: You can re-run this utility at any time – but the affected cell must be stopped first using the command: service vmware-vcd stop
When you run the vCloud Director configure utility you will be asked a number of questions:
- What IP address to use for the HTTP Service (Defaults ETH0)
- What IP address to use for ConsoleProxy Service (Defaults ETH1)
IMPORTANT: One thing I realised in my configuration was a quirk of the configure script. The system lists the HTTP/ConsoleProxy choices by IP address, NOT by eth0/eth1. In my case I had actually allocated .250 to eth1 and .251 to eth0, and the script was offering my IP addresses in descending order (254, 253, 252) rather than ascending order (248, 249, 250). So in the screen grab above I'd been too hasty in the configuration wizard – and assigned the wrong interface to the wrong service! Later on I inverted my IP allocations so 172.168.5.250 was assigned to eth0 for HTTP, and 172.168.5.251 was assigned to eth1 for the ConsoleProxy. That meant I could just whack the [ENTER] key and accept the defaults.
- Path to the Certificates.ks file and the Password
- Syslog Server name or IP address and Port Number (I use my vCenter Server Appliance!)
- What database to use (Oracle or SQL)
- Name/IP Address of database, Port Number, Instance Name, Database Username & Password
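Given the IP-vs-interface gotcha above, it's worth double-checking which address is bound to which NIC before you answer the wizard's first two questions – a quick check (interface names assumed to be eth0/eth1):

```shell
# Show the IPv4 address bound to each interface before running configure,
# so the HTTP and ConsoleProxy services land on the NICs you intended
ip addr show eth0 | grep 'inet '
ip addr show eth1 | grep 'inet '
```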
Once completed, the vCloud Director cell will attempt to connect to the DB and populate it with tables, procedures and suchlike. It's worth saying that even though the vCloud Director "cell" is up and running, there are still background components being initialized for the first time. I've noticed that if you're a bit hasty in trying to connect to the cell you will get a "grey page" rather than the normal "blue splash" welcome page. You can monitor this process by putting a watch on the main cell.log file like so:
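A simple way to do that is to follow the log with tail – the path below is the default vCD install location, so adjust if you installed elsewhere:

```shell
# Follow the cell log while the background services initialise;
# Ctrl+C to stop watching once the cell reports startup is complete
tail -f /opt/vmware/vcloud-director/logs/cell.log
```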
Configuring the Second vCloud Director Cell:
The next step is setting up the second vCloud Director cell. The installation uses a special "responses" file, but before that installation is run we need to handle the shared transfer area and stop the first cell…
1. First copy the vmware-vcloud-director-X.X.0-<BUILDNUMBER>.bin to the /tmp on the 2nd Cell
2. Next change the permissions on the .bin with:
chmod u+x vmware-vcloud-director-X.X.0-<BUILDNUMBER>.bin
3. Copy the /opt/vmware/vcloud-director/etc/responses.properties file from the 1st cell to /tmp on the 2nd cell, then run the installer against it:
./vmware-vcloud-director-X.X.0-<BUILDNUMBER>.bin -r /tmp/responses.properties
4. Next, on the 2nd vCloud Director cell, mount the shared transfer area – first stop the 2nd cell, then mount the shared area:
service vmware-vcd stop
chown -R “vcloud:vcloud” /opt/vmware/vcloud-director/data/transfer
mount 172.168.4.89:/vol/vCDTransfer /opt/vmware/vcloud-director/data/transfer
followed by editing the /etc/fstab file to make sure this NFS location was mounted for future boot times:
nano -w /etc/fstab
172.168.4.89:/vol/vCDTransfer /opt/vmware/vcloud-director/data/transfer nfs intr 0 0
service vmware-vcd start
Once the 2nd cell has fully started, it should appear in the "Cloud Cells" node in the vCloud Director "Manage & Monitor" tab under "Cloud Resources".
Finally, we need to set the "Public Address". Without this, vCloud Director will advertise URLs based on the cell name (https://vcdcell01.corp.com/cloud/org/vINCEPTION) when in fact the URL should be the load-balanced generic name (such as https://hybridcloud.corp.com/org/vINCEPTION). This is configured under "System", "Administration" and "Public Addresses". These URLs normally resolve to a load-balancer on the network that distributes the load across each cell. If you're not interested in configuring a load-balancer for your lab, you could just create A/host records that resolve to the HTTP IP addresses of each of the cells.
The important thing here is to make sure your DNS is properly configured. One thing I neglected to do was register “consoleproxy.corp.com” in my DNS. That meant when I tried to connect to the Remote Console of the VMs in a vApp it failed – switching from Connecting to Disconnected almost immediately:
In the end I raised an internal support request – and the guys put me on a WebEx session and took a look at it. I told them what I'd been up to whilst the WebEx software downloaded (isn't that always the way). We connected to the cells by raw IP address and found that a connection to the Remote Console was successful. It seems that in a single-cell environment vCloud Director just knows to redirect you to the IP of the ConsoleProxy; in a multi-cell environment it will use the "Public Address" values – and if they aren't registered somewhere in DNS (public or private) then the connection will fail. The guys opened up the log files for the Remote Console application held in C:\Users\<username>\AppData\Local\Temp\vmware-<username>
2013-04-18T15:30:04.164+01:00| vmrc| I120: cui::vmrc::VMCnx::Connect: Connect to MOID "vm-84" on "consoleproxy.corp.com"
2013-04-18T15:30:04.193+01:00| vmrc| I120: Resolving IP address for hostname consoleproxy.corp.com
2013-04-18T15:30:04.193+01:00| vmrc| I120: Lookup failed (getaddrinfo returned 11001)
2013-04-18T15:30:04.281+01:00| vthread-3| I120: VTHREAD initialize thread 3 "vthread-3" host id 6596
2013-04-18T15:30:04.281+01:00| vmrc| I120: cui::vmrc::VMCnx::OnConnectAborted: Connect failed for MOID "vm-84" on "consoleproxy.corp.com"
2013-04-18T15:30:04.281+01:00| vmrc| I120: cui::vmrc::VMCnxMgr::EmitConnectionStateSignal: Emitting "disconnected" signal (requested) for MOID "vm-84" on "consoleproxy.corp.com" - reason 'The server name could not be resolved'
2013-04-18T15:33:46.978+01:00| vmrc| I120: Clean exit
All I needed to do for my DNS round-robin to work was to add the hostname records for the ConsoleProxy IP addresses, alongside my round-robin address for the HTTP interfaces:
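For illustration, the sort of records involved would look something like this in a BIND-style zone file (the Microsoft DNS console achieves the same thing graphically) – note the IP addresses here are illustrative guesses based on my lab addressing, not values lifted from my actual zone:

```
; hybridcloud round-robins across the HTTP interface of each cell
hybridcloud    IN  A  172.168.5.250
hybridcloud    IN  A  172.168.5.252
; consoleproxy must ALSO resolve, or remote console connections fail
consoleproxy   IN  A  172.168.5.251
consoleproxy   IN  A  172.168.5.253
```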
Yes, I guess you could say that was a bit of a schoolboy error, but it took the excellent guys in our support team to spot it – which they did in less than 10 mins. It just goes to show a different set of eyes helps every time – so I owe Leon Barron, one of our vCloud Support Engineers, a beer sometime!
Conclusions:
Once the cell is up and running you can connect to it. I temporarily added a DNS record under my certificate name (hybridcloud.corp.com) just to check the validity of the certificate.
Well, that was easy wasn't it? Seriously, at the risk of sounding ironic/sarcastic (take your pick), the process of building a cell isn't the walk in the park it could be. That's why I always recommend to newbies (like I was 6 months ago) that you start with the vCloud Director Virtual Appliance, which works out of the box from day 1. The setup/installation is something you would need to consider in a production-style environment – or if you're using a very new version of vCloud Director and there isn't a virtual appliance ready for it (which is my case).