This is the first post about my adventures in vCenter 6 appliances with external HA load balancers. The aggregator page is here.

For reference, this is what my architecture looks like:

VMware 6 Architecture

Using the method outlined in the VMware documentation here (i.e. via the vSphere Web Client) to join an external, appliance-based HA PSC to an Active Directory domain fails with the message ‘The “Join active directory” operation failed for the entity with the following error message. java.lang.reflect.InvocationTargetException’:


You’ll also see a yellow exclamation point over a Work In Progress item:


And opening the item shows pretty much the same thing:


I spent quite a while trying to track down the problem, and in the end I opened a case with VMware which, as of this writing, is still pending resolution (for the moment, we believe it's a bug in the vSphere Web Client).

In the process, I learned some things about the internals of the appliances. I’ll outline those in another post.

I did discover a workaround, which is to use the Likewise tools (which, I believe, are what do the domain join under the hood), located at /opt/likewise/bin/domainjoin-cli:

/opt/likewise/bin/domainjoin-cli join domain username

This will prompt you for your password, and then join successfully.

Two notes:

  1. ALL of the platform services controllers behind a load balancer (or balancers) MUST be joined to AD to ensure that AD communications work.
  2. This workaround does seem to join the domain successfully; the computer object shows up in Active Directory (see the quick check below) and communications work as you'd expect. However, the Active Directory domain and Organizational Unit fields in the vSphere Web Client never seem to populate.
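If you want to double-check the join from the Windows side, a quick query of AD for the PSC's computer object does the trick. This is just a sketch of mine; it assumes the ActiveDirectory RSAT module is available where you run it, and "psc01" is a placeholder for your PSC's hostname:

# Hypothetical check that the domainjoin-cli workaround created the computer object
Import-Module ActiveDirectory
Get-ADComputer -Identity "psc01" -Properties CanonicalName |
    Select-Object Name, Enabled, CanonicalName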

Good luck.



So I’ve been running a proof of concept here at work for our new VMware vCenter 6 architecture, which leverages the new vCenter 6 appliances.

In a nutshell, our architecture has 2 sites, with each site consisting of 2 external Platform Services Controllers (PSCs, appliance version) load balanced with an F5 hardware load balancer, and a single vCenter appliance.

This is what it looks like:

VMware 6 Architecture

Getting this up and running has been a challenge, as I don’t believe too many others out there have set things up the way I have here. It seems that most of the documentation around the new 6 platform doesn’t address the particularities of using HA PSC’s, and so there was a lot of trial and error on my part, leading to these posts.

I'm going to use this particular post as an aggregator; I already have several distinct topics to cover. I'll do my best to make them SEO-friendly so that hopefully people find these posts and they're helpful.

Without further ado:


We've had a long-standing bug here at work where systems would appear to randomly drop out of DNS. It seemed to happen on Microsoft Windows-based server and desktop reboots, which is the exact opposite of the behaviour you might expect. Our environment has Server 2008 R2 and 2012-based domain controllers, but our functional level is still at 2003 Native (that'll be changing soon). DNS is 100% AD-integrated and, for the most part, dynamic.

I had always suspected the mixed environment was the cause; some weird thing where DNS updates across the domain weren’t propagating correctly.

Recently, this came to a head (and onto management's radar) when we discovered that whenever we fire off VMware SRM, a number of the servers fall out of DNS (one or two random ones from time to time wasn't a huge deal, though we'd been looking into it sporadically for years). Obviously, this is a bit of a problem, as people would suddenly be unable to access their systems after a planned failover.

A colleague of mine discovered Microsoft KB 2520155, which addresses an issue with the DNS Client service in Server 2008, Server 2008 R2, Windows 7, and Vista.

The cause, from the KB, is “When the DNS server configuration information is changed on a client, the DNS Client service deletes the DNS host record of the client from the old DNS server and then adds it to the new DNS server. Because the DNS record is present on the new server that is a part of the same domain, the record is not updated. However, the old DNS server replicates the deletion operation to the new DNS server and to other DNS servers. Therefore, the new DNS server deletes the record, and the record is deleted across the domain.”


When we thought a bit more about it, it seemed common when we used SRM (which in our case changes the DNS servers as part of the process), and also when we would change DNS information for our DHCP-based desktops.
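While the hotfix from the KB gets rolled out, a quick stopgap after a failover is to force the affected machines to re-register their records. A minimal sketch, assuming PowerShell remoting is enabled and the server names below are placeholders:

# Stopgap: force affected servers to re-register their DNS records after an SRM failover
$servers = "server01", "server02"
Invoke-Command -ComputerName $servers -ScriptBlock { ipconfig /registerdns }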

Well, problem solved, and kudos to my colleague for finding the fix to a four-year-old problem for us (pretty much since we started putting 2008 R2 and Win7 boxes in our domain).



Recently, we were doing some back end storage migrations to retire some old disk shelves off our filer heads at work. Most of this was seamless and involved provisioning new datastores for our vSphere 5 environment and then using storage vMotion to move the affected VMs from one datastore to another. All of our datastores are NFS-mounted and have Storage I/O Control turned on, across an environment of 60+ ESXi 5.0 hosts.

However, one datastore in particular damn near gave me ulcers while I was trying to unmount it after we'd moved everything to a new one: our templates datastore, which holds (among other things) ISOs and, of course, VM templates.

After fighting with it for a while to disconnect all the VMs that had their CD/DVD drives connected to an ISO on the datastore (a common problem), and then with a set of VMs that had snapshots that hadn't been deleted while either running (erroneously) in the templates datastore or having a connected ISO (a problem prior to vSphere 5.1), I finally went through the steps to unmount it from my hosts and came across this problem:

SIOC Error

This particular error message is slightly misleading; what I was trying to do was DISABLE SIOC, but apparently either way the same problem happens.

Given the size of our environment, I didn’t relish trying to figure out which host had things mounted read-only; so I fired up PowerCLI.

Knowing the datastore in question, after connecting to the relevant vCenter Server:

$datastore = Get-Datastore -name dsname

Next to find the offending host:

$datastore.extensiondata.host | where {$_.mountinfo.AccessMode -eq "readOnly"}

Which gives us something like this:

Key : HostSystem-host-742
MountInfo : VMware.Vim.HostMountInfo
LinkedView :
DynamicType :
DynamicProperty :

Now to correlate the Key to the Host:

$hosts = get-vmhost
$hosts | where {$_.id -eq "HostSystem-host-742"}

Which led me to:

Name ConnectionState PowerState
---- --------------- ----------
hostname.lan Connected PoweredOn
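As an aside, the two lookups can be combined into a single pass. This is just my own sketch, only tested against my 5.0 environment, with "dsname" as a placeholder:

# Find any hosts that have the datastore mounted read-only, in one go
$roKeys = (Get-Datastore -Name "dsname").ExtensionData.Host |
    Where-Object { $_.MountInfo.AccessMode -eq "readOnly" } |
    ForEach-Object { $_.Key.ToString() }
Get-VMHost | Where-Object { $roKeys -contains $_.Id }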


Except… how do I unmount a datastore that has Storage I/O Control turned on?

Fortunately, there's this VMware KB. Following the steps outlined there (placing the host in maintenance mode, stopping the host-based SIOC service, unmounting the datastore USING THE CLI since the GUI didn't work, and then restarting the SIOC service), I finally had a datastore in a consistent state, and I could shut off SIOC properly and unmount it.



Ran through the upgrade from vCenter Server 5.0 U1 to vCenter Server 5.0 U3 (protected by Heartbeat 6.4 U1) this week and ran into an issue that there’s not much information about out there, so here’s what I found.

After the upgrade, the Heartbeat console was consistently displaying "Critical – Service VIMBPSM failed check"; VIMBPSM is the short name for the VMware vSphere Profile-Driven Storage service.

It took a while, but I figured out that because there are multiple services in vCenter that use a Java wrapper to start up as a service, there's a race condition that occurs, because of Heartbeat, between the VIMBPSM service and anything else that uses the wrapper.

This blog post pointed me in the right direction for seeing why it wasn't starting:

As you can see, it’s trying to use port 31000 (and 31100) to start, but that port is in use by another process. After tracking down the offending process, I realized that it was the NetApp vSphere Plugin Framework service (that runs the NetApp Virtual Storage Console plugin on the server) that was taking the port.

EDIT 2 (yes, before Edit 1): Commenter Crheston Mitchell points out that they pushed the issue with VMware more successfully than I did, and there's now a KB published for it, which outlines how to actually implement a workaround on the VIMBPSM service itself. I recently had to do this on my infrastructure because, as with the link I posted above about being pointed in the right direction, the VMware Web Client started causing problems too. I implemented the fix from the KB and all seems well now.

EDIT: It seems that BOTH NetApp services, the NetApp vSphere Plugin Framework AND the NetApp SnapManager for Virtual Infrastructure service, need to be edited per the instructions below. File locations are below.


It could have been anything that used the wrapper, not necessarily the NetApp service. I've seen other posts blaming the vSphere Web Client service, so you might have to do some sleuthing to see where the issue is in your setup. Good commands to try are netstat -aon | find "31000" (which will show you the PID, which you can then find in Task Manager; that MIGHT give you some information, but more likely it'll just say it's java.exe or javaw.exe). Then try wmic process where(name="java.exe") get commandline to see what's starting what.
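If you'd rather stay in PowerShell for the sleuthing, here's a rough equivalent of the netstat/wmic combo (my own sketch; it just parses the netstat output and asks WMI for the owning process):

# Find the PID listening on 31000 and pull its command line from WMI
$line = netstat -aon | Select-String ":31000\s" | Select-Object -First 1
$procId = ($line.Line -split "\s+")[-1]
Get-WmiObject Win32_Process -Filter "ProcessId = $procId" | Select-Object ProcessId, Name, CommandLine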

The interesting part is that inside the wrapper.conf file for the VIMBPSM service, you find the following:

# wrapper JVM port

# wrapper port

which would seem to indicate that the VIMBPSM service SHOULD be starting on port 31300. Fiddling around with those values has absolutely zero effect; and when I finally tracked down the user of port 31000 as being the NetApp service mentioned above, I stopped it, then started VIMBPSM and saw it glom to port 31000 as indicated. Stop it, start NetApp, and watch NetApp grab 31000.

So why doesn’t changing the file for VIMBPSM have an effect?

Near as I can figure (and I've submitted a case to VMware support), it's because in some other XML configuration files, you see entries like:

<!-- need to be same as the sps.properties -->
<bean id="httpServerEndpoint" ... >
    <constructor-arg value="31000"/>
</bean>

(as an aside, I tried fiddling with these values to see if I could get the service to start on 31300, for example, without much success. I didn’t give it a real hard go, however, so it might still be possible).

At any rate, it seems these other entries take precedence over the wrapper.conf file.

My solution was to modify the wrapper.conf of the offending service, in my case the NetApp vSphere Plugin Framework service, located at

"C:\Program Files\NetApp\Virtual Storage Console\wrapper\wrapper.conf"

And for the NetApp SnapManager for Virtual Infrastructure service, located at

"C:\Program Files\NetApp\Virtual Storage Console\smvi\server\etc\wrapper.conf"

with the following lines:

# restrict JVM port range for ESXi 5.0 U3 port race condition
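If you'd like to script the edit, here's a sketch that appends a port-range restriction using the standard Java Service Wrapper properties; the port numbers are example values I've picked purely for illustration, so choose a range you know is free in your environment:

# Append an example port-range restriction to the NetApp wrapper.conf
# (property names are standard Java Service Wrapper settings; port values are placeholders)
$conf = "C:\Program Files\NetApp\Virtual Storage Console\wrapper\wrapper.conf"
Add-Content -Path $conf -Value @"
wrapper.port.min=31500
wrapper.port.max=31599
wrapper.jvm.port.min=31600
wrapper.jvm.port.max=31699
"@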

Note that it shouldn't matter that you put the same port ranges in BOTH NetApp (or other service) files, since the port is supposed to be dynamic, "starting at" whatever port is defined. The reason this causes a problem for the VMware PDS service is that while it's SUPPOSED to be dynamic, in the case of PDS it ISN'T, so it can't start if 31000 isn't available.

This has solved the problem and lets the VIMBPSM service start, as well as the NetApp services. My storage admin is checking that I haven’t totally broken the plugin, but so far so good.



I've been fighting mightily to get our new SharePoint 2013 search application running at work with as many best practices as I can manage.

I should point out that I'm NOT a SharePoint guru of any description; I've just gained a lot of admin experience in the past little while and had a healthy dose of "not giving up".

In particular, the new search application, when running under a service account that is

a) NOT the farm account, and
b) NOT a machine administrator

causes a raft of problems and is difficult to get running correctly.

It seems that permissions are not assigned correctly in the RTM version (I have also installed the March 2013 PU and it didn’t seem to fix the problem) to service accounts that are NOT the farm account in SQL and NOT machine admins on whatever box the search host controller is running on. This prevents the service from starting correctly on the server and generates a number of errors in the ULS and event viewer logs (note that users running SQL 2012 and Server 2012 may not have this issue – my environment is Server 2008 R2 SP1 and SQL Server 2008 R2 SP2), including:

Event ID 2548 error message: Content Plugin can not be initialized – list of CSS addresses is not set

Event ID 3760, SQL Database is on SQL server instance not found

Event ID 1026 for .NET Runtime, hostcontrollerservice.exe, the process was terminated due to an unhandled exception (Microsoft.Ceres.HostController.WcfServer.WcfService.StartService())

Event ID 1000, Faulting application name: hostcontrollerservice.exe, version: 15.0.4420.1017, time stamp: 0x50672c2d
Faulting module name: KERNELBASE.dll, version: 6.1.7601.17965, time stamp: 0x506dcae6

Looking around, I found this Technet post that suggests that the answer is to give the account administrative rights.

This isn’t a particularly good answer, and while it works, it goes against security best practices.

The problem, near as I’ve been able to work out, is actually with database permissions as well as a few other things.

Here’s the layout:

DOMAIN\SPfarm – farm service account, regular user account on the server(s)

DOMAIN\SPcrawl – crawl account for search

DOMAIN\SPsearch – service account that runs the search host controller service (this may or may not be the same as the service account that runs other services – for example the Distributed Cache)

According to Microsoft's list of account permissions for SharePoint 2013 service accounts and this blog post, the account that is used for services (i.e. SPsearch above) needs the SPDataAccess SQL role on the content databases, but that role is not actually assigned when you change the service account using the GUI (I'm not sure if it's done correctly through PowerShell).

Additionally, I have found that system security on the accounts isn't set correctly by the farm account if the farm account isn't a machine administrator. This blog post explains the fix for that, with a few caveats. His comment that "this service is being provisioned as the user who you're running the PowerShell as" doesn't seem to be accurate; it'll be provisioned as whatever account is set on the Manage Service Accounts page in Central Admin, under the Windows Service – Search Host Controller Service entry, so you don't need to run as the farm account (which won't work anyway if the farm account isn't a machine administrator :) ). Also, the permissions he assigns are for the SERVER\Users group, which I think is a bit excessive; they are likely only needed for one or two, possibly all three, of the service accounts I've mentioned. In my tests, REMOVING those privileges after the fact doesn't affect the startup of the service across simple service restarts, but I don't really want to keep playing with it as this thing is going production very soon and I have no desire to redeploy it… AGAIN.

Note that if you have more than one server in your SharePoint farm, you'll have to run the code provided on every server that runs the search service instance, and modify the line:

$sh = Get-SPServiceInstance | ? {$_.TypeName -eq "Search Host Controller Service"}

To be:

$sh = Get-SPServiceInstance | ? {$_.TypeName -eq "Search Host Controller Service" -and $_.Server -match "<servername>"}
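As for the SPDataAccess piece mentioned above, here's a rough sketch of how you could grant it to the search service account on every content database from PowerShell. This is my own example (not from either of the linked posts); it assumes the SharePoint snap-in and whatever provides Invoke-Sqlcmd on your SQL version are available, and it reuses the placeholder names from my layout above:

# Hypothetical: add DOMAIN\SPsearch to the SPDataAccess role on every content database
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
Add-PSSnapin SqlServerCmdletSnapin100 -ErrorAction SilentlyContinue    # SQL 2008 R2
Import-Module sqlps -DisableNameChecking -ErrorAction SilentlyContinue # SQL 2012+
$sqlInstance = "SQLSERVER\INSTANCE"   # placeholder: the instance hosting your SharePoint databases
Get-SPContentDatabase | ForEach-Object {
    Invoke-Sqlcmd -ServerInstance $sqlInstance -Database $_.Name `
        -Query "EXEC sp_addrolemember N'SPDataAccess', N'DOMAIN\SPsearch'"
}

If the account doesn't already have a user in a given database, create one first (the same CREATE USER pattern I show further down).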

Finally, on the SQL server, check that the service accounts SPcrawl and SPsearch have the SPSearchDBAdmin role assigned to them for all search-related databases, including the Search, LinksStore, AnalyticsReportingStore, and all CrawlStore databases (add the role to those accounts on those databases if you need to); here's some SQL showing how:

USE [SharePoint_Search_Database]
EXEC sp_addrolemember N'SPSearchDBAdmin', N'DOMAIN\SPsearch'
USE [SharePoint_Search_Database]
EXEC sp_addrolemember N'SPSearchDBAdmin', N'DOMAIN\SPcrawl'


Note that these databases won’t exist until you try to provision search; it’s a bit of a catch-22 in that you have to provision search (which creates the databases at least), let it fail while you fix permissions, and then let it go afterwards.

You will also have to ensure that a user is created for at least the SPsearch account in those databases. In all likelihood, the crawl account will already exist and have a user, but the search account won't. If you see a red downward arrow in front of the account name in SQL Server Management Studio for the SPsearch account, you need to create the user. Here's the SQL for that:

USE [SharePoint_Search_Database]
CREATE USER [DOMAIN\SPsearch] FOR LOGIN [DOMAIN\SPsearch]

Repeat for all other databases.

Note: Originally, I assigned the db_owner and db_securityadmin roles to those accounts. This worked, but I believed it was more privilege than necessary; indeed, revoking those privileges after the fact lets the search service start, connect, and search without issue. However, those roles MAY be needed in the beginning, after which you can reduce privileges to "just" SPSearchDBAdmin.

Implementing this will solve a number of problems:

1) Search Host Controller Service (both in the services console on the server AND on the Services on Server page in Sharepoint admin) stuck on starting – FIXED

2) All of the event errors mentioned above – FIXED

3) Running Search Host as the farm account – FIXED (as in, not needed)

4) Needing the farm account to be local admin – FIXED (as in, not needed)


As this has now been in production for some time, I've discovered a few things about my solution above, including one show-stopper that hits once a month and seems to be related to a timer job (that I can't find). It seems that the SPSearchDBAdmin role gets removed from the SPsearch account in the search database (i.e. SharePoint_Search_Database in my example above). When this happens, the account can't log into the database anymore, and search stops working. The solution is simply to add it back in.
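If you get tired of putting it back by hand, the re-add is easy to drop into a scheduled task. A sketch of mine, reusing the placeholder names from the examples above and assuming the SQL PowerShell bits are installed wherever it runs:

# Hypothetical scheduled re-add of the role membership for the search service account
Add-PSSnapin SqlServerCmdletSnapin100 -ErrorAction SilentlyContinue    # SQL 2008 R2
Import-Module sqlps -DisableNameChecking -ErrorAction SilentlyContinue # SQL 2012+
Invoke-Sqlcmd -ServerInstance "SQLSERVER\INSTANCE" -Database "SharePoint_Search_Database" `
    -Query "EXEC sp_addrolemember N'SPSearchDBAdmin', N'DOMAIN\SPsearch'"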

Happy fixing!



My company is working on a deployment of Brocade wireless access points that will be used for guest access only. To support this, I deployed a Server Core 2008 R2 Read Only Domain Controller out into our DMZ (where the APs sit) with the intention of providing DHCP and LDAP for 802.1x authentication.

The Brocade APs are model AP7131. A Brocade community post mentions that autoconfiguration of the APs via DHCP options was not possible; however, later in that same thread (after a software upgrade) someone mentions it IS possible, and Brocade's documentation confirms it (check page 480, which is page 492 in the PDF).

In particular, the following DHCP options are needed, at least:

Option 189 – Controller IP, as a String
Option 192 – Autoconfiguration enable, a String, either 1 or 2

Without Option 192, the APs will NEVER attempt to autoconfigure, as the PDF mentioned above notes (on the same page).

Originally, I simply defined options 189 and 192 correctly in the AP subnet. However, no autoconfiguration or AP adoption was occurring.

Using Wireshark on my server core install (for excellent instructions on how to do that, see here, but pay particular attention to the first comment), I saw the following in the DHCP request from the AP (IP addresses removed, obviously):

Basically, the AP was requesting a list of options for the server to return; normally this set of requested options includes the likes of option 003 (the default router or gateway) and option 006 (a list of DNS servers). The AP is coded to ALSO request options 43, 191, 186, 187, 188, and 189.

Wait a second, what about 192? How is it supposed to autoconfigure without requesting option 192? I have defined it for the subnet, but as you can see from the next image, the server was dutifully NOT returning it because it wasn’t asked for:

Note: clearly the packets are longer; but you’ll have to trust me that there was nothing else relevant.

Well that’s not very useful.

The Easy(er) Way

I began searching the Internet. In particular, the documentation from several places (also in the PDF linked above) mentions that all of this can be encoded into sub-options of DHCP Option 43. However, nowhere did it give me the encoding FOR that value specific to Brocade 7131 APs (there is a lot of information out there about using it for Cisco Aironet APs and Microsoft Lync 2010, but NOTHING for Brocade APs specifically). Sidenote: if I had bothered to read the Cisco link a bit closer, I might have deciphered what otherwise took me a while to get below. The Lync link (ha) was a bit more enlightening, but too specific to Lync, and again, I read it too fast.

I'll give you the easy way out first; for more information on how the whole thing works, scroll down to "The Hard Way" below.

Something else had been catching my eye (bothering me?) since I looked at the packet capture: Option 60, sent by the AP itself, with the Vendor Class Identifier listed as BrocadeAP.br7131. Since I was configuring this whole thing via the command line using netsh (remember, this is a Server Core 2008 R2 DHCP server), I had to do some more reading on things, and discovered adding a Vendor Class.

netsh dhcp server add class "Mobility7131 Options" "Options for Brocade 7131 APs" BrocadeAP.br7131 1

From my more reading link, that breaks down into: add class ClassName [ClassComment] [Data] [[IsVendor=]{0 | 1}]; or:

ClassName: Mobility7131 Options
ClassComment: Options for Brocade 7131 APs
Data: BrocadeAP.br7131
IsVendor: 1

Note that the Data is converted into Hex by the server automatically.

Also note that some of the documentation I found (including the PDF above) states (possibly incorrectly) that the Vendor Class returned by the clients is Brocade.71xx, and in some cases something different even from that. I'm not sure if this would have worked had I used the documented Vendor Class; the value I used comes RIGHT from the request made by the APs themselves. The downside is that if the Vendor Class listed by the documentation allows multiple AP types to all be "caught" by a single vendor class definition on the server, then there's less configuration to do as a whole in a multi-AP scenario. However, this doesn't seem to be the case, and I've hard-coded the vendor class the APs are currently using into the server. It could also cause problems if Brocade ever decides to change the Vendor Class returned by the APs in the DHCP request.

For completeness, this is what it would look like if you did that using the GUI in server 2008:

Open up your DHCP control panel, expand the server, and right click on IPv4 (the same may apply for IPv6, but I’ve never tried it). Click on Define Vendor Classes:

Click the Add: button:

Enter in the relevant information:

Now that this is done, you can define options 189 and 192 for this specific vendor class:

c:\> netsh
netsh>dhcp server
netsh dhcp server> add optiondef 189 Brocade-Controller-IP STRING 0 vendor="Mobility7131 Options"
netsh dhcp server> add optiondef 192 Brocade-Autoconf-Enable STRING 0 vendor="Mobility7131 Options"

The syntax for that command is in the Microsoft TechNet ("more reading") link above.

In GUI-land, this looks like this:

Create the new option:

Repeat for option 192.

Finally, you can enable these options and set them from the scope.

netsh dhcp server> scope <scope address>
netsh dhcp server scope> set optionvalue 189 STRING vendor="Mobility7131 Options" <controller IP>
netsh dhcp server scope> set optionvalue 192 STRING vendor="Mobility7131 Options" 1

(Note: I’m not showing how to do this in the GUI, because I didn’t do it that way and there are lots of resources out there to show you how; likely from my GUI pictures above you could probably do it).

Within seconds of me completing these commands (recall, I actually did this FIRST, as opposed to hard-coding Option 43 in below), the network guy popped his head into my cube and asked if I had changed anything; I said I had, and he said “because all the APs are starting to auto-register”.

Having a look at the Wireshark capture, I noticed that the server was now sending an option 43:

I thought that was odd; I hadn’t defined an option 43 anywhere.

As it turns out, Microsoft’s DHCP server, upon seeing an Option 60 in the REQUEST from a client, will return anything defined in the Vendor Class definition on the server as a properly-encoded, easy to manage Option 43 in the DHCP Offer/Ack. This is significantly easier than the hard way (below), though arguably more obscure. I don’t have any documentation to back that up (if I did, I might not have taken so long to figure this out), but I can see that it’s working, because of what I tried next (aka the hard way; also the “more information about how the whole thing works” way).

The Hard Way

Option 43 is defined in the DHCP standard (RFC 2132) as Vendor Specific Information (Wikipedia). After finding the links above, it took me a while to figure out that it is defined by a Type-Length-Value string encoded in hex. You can get a FANTASTIC breakdown of how the whole thing works here (that link is what finally got me going down the right path).

An important bit to remember: Because you are encoding SUB values into option 43, you have to think like a network packet; i.e. embedded or nested TLVs. Here’s a crappy MS Paint I made showing it off (click the image to see it, my blog format is cutting it off somewhat):

So, with a bit of math (or the help of calc.exe and a hex/ASCII converter) you can construct your option 43 string. This is a bit easier to explain if I show you the completed hex string:

bd0c3139322e3136382e312e3100c0023100

And the breakdown:

bd: Option 189 (Decimal 189 in hex)
0c: Length 12 bytes (Decimal 12 in hex; 11 bytes of the IP address, + 1 byte of 0-padding)
31:39:32:2e:31:36:38:2e:31:2e:31: IP address of the controller (192.168.1.1 in this example), converted from ASCII to hex
00: zero padding
c0: Option 192 (Decimal 192 in hex)
02: Length 2 bytes (1 byte of data and 1 byte of zero padding)
31: ASCII 1 in hex
00: Zero padding

Note that the overall length of the option has been left off – this is because the server calculates it automatically. If you captured this in Wireshark, you would see a 2b 12 in front of it; 2b is decimal 43 in hex (i.e. option 43) and 12 is the length, in bytes, in hex (hex 12 is decimal 18).
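If you'd rather not do the conversion by hand, here's a little PowerShell sketch (my own, not anything from Brocade or Microsoft) that builds the same string from a controller IP, following the sub-option layout in the breakdown above:

# Build the Option 43 value as nested TLVs: sub-option 189 (controller IP as an ASCII string)
# and sub-option 192 ("1" to enable autoconfiguration), each zero-padded to an even length
function New-Tlv([int]$Code, [string]$Value) {
    $bytes = [System.Text.Encoding]::ASCII.GetBytes($Value)
    if ($bytes.Length % 2) { $bytes += [byte]0 }   # pad odd-length values with one zero byte
    $tlv = @([byte]$Code, [byte]$bytes.Length) + $bytes
    ($tlv | ForEach-Object { $_.ToString("x2") }) -join ""
}
(New-Tlv 189 "192.168.1.1") + (New-Tlv 192 "1")
# -> bd0c3139322e3136382e312e3100c0023100

The output matches the string above, so you can paste it straight into the netsh command below.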

I confirmed that you can do this by entering this string directly into option 43 for a scope (I also removed the options from the Vendor Class I defined above):

netsh dhcp server scope> set optionvalue 043 BINARY bd0c3139322e3136382e312e3100c0023100

And it works the same; however, ANY client requesting option 43 will get this information, not just those offering option 60 as part of their DHCP request.

So there you have it: how to get Brocade AP7131s to autodiscover and adopt their controller using Windows Server Core 2008 R2 DHCP and Option 43.



This may be relevant to more than just Shavlik NetChk Protect 7.6.0 (i.e. possibly VMware vCenter Protect 8) and vCenter 5.0 servers protected by Heartbeat 6.4, but I banged my head against a wall for a bit trying to get this to work before finally figuring out the (admittedly simple) answer.

Whenever I would try to do an automated patch deployment using NetChk Protect 7.6.0 to our vCenter 5.0 server (Windows Server 2008 R2) nodes protected by Heartbeat, it would consistently fail with the message: “Scheduling job on machine failed. The patches will not deploy automatically. Execute the following remote file to initiate the deployment: C:\Windows\ProPatches\Silent.exe Deployment failed on the following machines. Their deployments were not scheduled. …”

The clue should have been the first word in the status message, but I poked around a bit before finally realizing that Heartbeat sets the Task Scheduler service on the nodes to Manual. Of course Shavlik can’t schedule a job.
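You can confirm the start mode on both nodes without logging into them; a quick sketch, with placeholder node names:

# Check the Task Scheduler (Schedule) service start mode on the Heartbeat nodes
Get-WmiObject Win32_Service -Filter "Name='Schedule'" -ComputerName vc-node1, vc-node2 |
    Select-Object __SERVER, Name, StartMode, State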

Starting the Task Scheduler (but leaving it set to manual) prior to deploying patches allows the patching to run and will reboot the system if needed. However, it seems that if the system reboots, it will not report back success to the deployment server, so success will need to be checked another way (the most likely is probably with a subsequent scan of the system). I haven’t tried it yet to see if it reports success if the system DOES NOT reboot, only because I haven’t yet had a patch cycle that doesn’t include a reboot.

UPDATE: I did a bit more work and figured out that if you set the service to Automatic, for obvious reasons it reports back correctly. However, you need to be NT AUTHORITY\SYSTEM to be able to fully control the Task Scheduler service; so, after some searching, I came across this post regarding just that very thing.

We also have UAC turned on, so I used some help from here to write a batch script that forces a UAC prompt, then runs the following command to set the Task Scheduler service to Automatic:

c:\Tools\psexec.exe /accepteula -sid cmd /C "sc config schedule start= auto"

Obviously, psexec is required for this, and it’s in C:\Tools on the server.

Here’s the whole script:

REM Set task scheduler service to Automatic script
REM By Ryan Jacobs
REM October 29, 2012
REM Help from https://sites.google.com/site/eneerge/home/BatchGotAdmin
REM Relies on psexec to generate a cmd prompt under NT Authority/SYSTEM
REM to be able to control the Task Scheduler service for Patching purposes

@echo off

:: BatchGotAdmin, from https://sites.google.com/site/eneerge/home/BatchGotAdmin
REM --> Check for permissions
>nul 2>&1 "%SYSTEMROOT%\system32\cacls.exe" "%SYSTEMROOT%\system32\config\system"

REM --> If error flag set, we do not have admin.
if '%errorlevel%' NEQ '0' (
echo Requesting administrative privileges...
goto UACPrompt
) else ( goto gotAdmin )

:UACPrompt
echo Set UAC = CreateObject^("Shell.Application"^) > "%temp%\getadmin.vbs"
echo UAC.ShellExecute "%~s0", "", "", "runas", 1 >> "%temp%\getadmin.vbs"

"%temp%\getadmin.vbs"
exit /B

:gotAdmin
if exist "%temp%\getadmin.vbs" ( del "%temp%\getadmin.vbs" )
pushd "%CD%"
CD /D "%~dp0"

REM starts the Task Scheduler service if it's stopped
sc query schedule | findstr "RUNNING" >nul
if errorlevel 1 net start schedule

c:\Tools\psexec.exe /accepteula -sid cmd /C "sc config schedule start= auto"


For a project at work, I needed a small router box to run inside our vSphere 5 environment. I decided to use ttylinux. However, the stock ttylinux install has a version of busybox in it that does not include dhcprelay, which I needed. Originally, I had also planned to include VMware Tools in the build (as the proper build environment is not present in ttylinux), or, alternatively, open-vm-tools; however, as this went on I discovered that it was a bit out of my league. So, I rolled my own ttylinux, with a modified busybox, but no VMware Tools.

The recommendation is to build on Debian Lenny 5.0.7 (this comes from the doc included in the ttylinux build package, How_To_Debian-Lenny.txt in the ttylinux-build directory). So I snagged a copy of the 5.0.7 iso (here: http://cdimage.debian.org/mirror/cdimage/archive/5.0.7/amd64/iso-cd/debian-507-amd64-netinst.iso) and installed it onto a new VM purpose-built for this.

Because this is an older release, we need to fix apt’s sources.list file to point to the archives. My /etc/apt/sources.list has a single line in it:

deb http://ftp3.nrc.ca/debian-archive/debian lenny main non-free contrib

Check the Debian archive page for a list of mirrors close to you that include the archived packages, and update your line accordingly.


Update the package lists and bring the system up to date:

apt-get update
apt-get upgrade

We need the build tools:

apt-get install build-essential autoconf automake bzip2 mkisofs bin86 \
gawk flex bison ncurses-dev docbook-utils pkg-config gettext libglib2.0-dev \
libfuse-dev libpam-dev

Grab the build package for ttylinux:

cd /tmp
wget https://github.com/djerome/ttylinux/tarball/master -O ttylinux.tar.gz --no-check-certificate

Note that this creates a specific set of files; it’s worth investigating exactly what version of those files is being downloaded.

Unzip and untar:

tar zxf ttylinux.tar.gz

Modify Busybox Config

Busybox is the single binary that provides most of the utilities you’re used to seeing in a Linux system. In the install that ttylinux comes with on the x86_64 iso (here), a particular utility that DOES come with Busybox, dhcprelay, is not enabled. So, we will hack one of the configuration files so that it is built into the Busybox that gets built for ttylinux.

nano /tmp/djerome-ttylinux-9db9595/ttylinux-build/config/_bbox-stnd.cfg

Modify the following sets of lines so the relevant applets get built in; in my case that meant changing the "is not set" comments for UDHCPD and DHCPRELAY to "=y":

# CONFIG_UDHCPD is not set
# CONFIG_DHCPRELAY is not set

become:

CONFIG_UDHCPD=y
CONFIG_DHCPRELAY=y

Compile the cross-tool chain

First, we need to compile the cross-tool chain.

cd /tmp/djerome-ttylinux-9db9595/xbuildtool
make setup
make x86_64-2.14-gnu

This downloads what’s needed for the crossbuild tool, then compiles it. It’s over 100mb, and the compile time totaled around 40 minutes on my VM.

Compile and build ttylinux

First, build our ttylinux config:

cd /tmp/djerome-ttylinux-9db9595/ttylinux-build
make getcfg

The system will ask you what type of system you are building; select 5 for pc_x86_64_defconfig.

As we don’t need all the tools that ttylinux compiles by default, we’ll modify the now-created ttylinux-config.sh file to exclude building those packages.

NOTE: For some reason, on my VM, compiling the gmp package fails. I have submitted a report to the ttylinux developer about it, but there’s a reason that package in particular is commented out (turns out it has something to do with compiling an x86_64 distro on x86_64 hardware; comically, if you did this with a 32-bit distribution, gmp and all the rest would compile fine. If you’re curious about any further results, my report to the ttylinux dev is here.) The rest I did simply because I knew I didn’t need them and could deal with a smaller installation.

I commented out the following packages (at the end of the file):

#TTYLINUX_PACKAGE[i++]="mpfr-2.4.2" - depends on gmp, so comment them both out
#TTYLINUX_PACKAGE[i++]="alsa-utils-1.0.25" - depends on alsa-lib, so comment them both out

With that done, we can proceed:

make dload

This downloads all the required source packages. It takes a while, so be patient.

Now build the whole thing:

make dist

This compile will take a while; probably close to an hour.

Once this is done, an ISO should be present in the img directory. Using a method of your choice (e.g. WinSCP or scp), grab that ISO and move it wherever you like. You can now run/install ttylinux, including the modified busybox with dhcprelay, just as you would the distribution downloaded from the site.


I find myself seriously questioning the relevance of the iPhone 5. Android devices have been larger, faster, more powerful, and can DO MORE (video calls while NOT on wifi, anyone?) for some time. Certainly for those wanting an upgrade from an older device (and not wanting to admit that Apple is an innovation-killing company, with all the ridiculous lawsuits against companies like Samsung for such things as “rounded corners”), it will be welcome.

However, for the rest of us, Apple’s iPhone is no longer (actually, it hasn’t been for a while) a particularly great option given others out there that are far better, at least looking strictly at the (admittedly geeky) numbers.

On the other hand, Apple has a few things going for them:

1) a devoted fanbase. Whatever you call them (hipsters, fanboys, baristas), they ARE devoted, usually to a fault. People with no money to their names WILL line up starting approximately yesterday at Apple stores the world over for a chance at the first iPhone 5, even if it was nothing more than an aluminum brick with an Apple logo and an iPhone 5 stamped on it. Apple depends heavily on this, whether they admit it or not. (sidenote: in case you can’t tell, this is one of my BIGGEST pet peeves with Apple, or rather, their fanbase; that’s why I get so sarcastic (some may say caustic) about it. I have no problem with people choosing an iPhone over another device; what I DO have a problem with is people blindly choosing a phone because it’s “clearly the best” or just because it’s a certain brand, and/or comes with a certain brand stigma (hence the hipster comment). Choose what works for you, but be able to back it up with something other than “because it’s an iPhone”, “because it just works”, “because unlike brand X you can fix an Apple”, or “because it’s cool”, all of which are utter BS).

2) their UI. I use an Android, and I have also used iPhones, and while every OS (mobile or otherwise) has its quirks and learning curves, iOS is arguably the best out there for stability, usability, and "getting it done". No, I'm not being sarcastic.

3) integration with other Apple devices. Arguably the LEAST understood (by the masses) and least used of their best features, iCloud and other integration means that a photo shot on your iPhone will show up on every Apple device you own (obviously as long as it’s configured). You can control your Apple TV from your iPhone, show a movie on your iPhone on your Apple TV. If your Macbook crashes, you can buy a new one (or an iMac, or Air, or whatever) and restore your Time Machined old system to your new one and have it look, feel, and operate nearly identically to your old one, including all Apps restored. That’s pretty cool. However, most people don’t take advantage of this, for a number of reasons.

4) App ecosystem. Obviously, Apple’s App Store with apps for iOS devices far surpasses the Android Play Store. Though, I would argue that 99% of the popular apps on iOS have an Android counterpart; also, a larger percentage of those same apps on the Android Play Store are free where they aren’t in the App Store (Angry Birds and WhatsApp messenger to name a few). So this is a quickly eroding advantage at best, assuming it even still is one.

The original iPhone, released approximately 5 years ago, was 5 years ahead of its time. Other devices have only just caught up. Apple changed the game back then, but if they expect to do the same again and stay relevant, they needed to pull something MAJOR out for this release.

This wasn’t it. This is just barely catching up to other devices, which was true of the iPhone 4 launch as well. This brings the iPhone’s continued relevance into question, because killing innovation by other companies by patent trolling to stay on top basically says “we know we have nothing new, so we’ll just prevent you from doing anything new too.”

Will the iPhone 5 sell? Yep, you bet. Millions of units. Apple will not be disappointed, and this may arguably be the best selling iPhone of all time, as more and more people move into smartphones (as well as people who didn’t upgrade to the 4 or 4S). But Apple’s current reign is, in my opinion, clearly at an end.

Then again, I thought the iPad was a complete joke that missed the mark, so what do I know?