Tuesday, 2 July 2013

NodeConf 2013

This past weekend I was fortunate enough to be able to attend NodeConf 2013. The event was held at the fantastic Walker Creek Ranch in Marin County. The Ranch certainly knows how to look after conference-goers — an amazing variety of good quality food, beautiful grounds and friendly staff made for a most enjoyable four days. It's also somewhat breathtaking to see how many stars are visible just a bit more than an hour north of the San Francisco city lights.

I had the privilege of assisting Max Bruning (along with TJ Fontaine and Emily Tanaka-Delgado) as he put on eight consecutive introductory DTrace-and-Node sessions for classes of around 40 people each. It is of the utmost importance for us as Software Engineers to walk in the world from time to time, gathering feedback from the folks who use the tools we build. I had a number of very positive and productive discussions with fellow conference-goers about Node, SmartOS and our new object storage and compute product, Manta, throughout my time at the ranch.

It was both encouraging and gratifying to see — with I think few, if any, exceptions! — everybody who attended the DTrace session completing at least one of the hands-on Node debugging labs. If you missed out on attending, the slides and lab materials are available on Github. I look forward to continuing to engage with community and customer alike as we strive to get out the Good News on DTrace, and to make it more accessible to developers of all skillsets.

Hooray for NodeConf, and I'll certainly aim to be back next year!

Tuesday, 25 June 2013

Interactive Manta Jobs with mlogin(1)

It is 6am on Tuesday the 25th of June, 2013 — at least, it is in US/Pacific — which means we at Joyent are finally lifting the covers off of our new product: Manta. Manta is a brand new system that spans the twin pillars of Object Storage and Compute to provide a revolutionary new way of operating on data in the cloud. You can read more about it in a write-up from Mark Cavage on the Joyent blog.

In this post I'm going to cover one of the pieces I recently added to Manta — namely: the Manta Interactive Session Engine. While Manta was primarily designed to run batch-style jobs across a large number of input objects, this subsystem allows you to run an interactive UNIX program in the environment of a Manta compute job and control it from your terminal.

The client utility that allows you to run interactive jobs is mlogin(1). To crib, briefly, from its manual page:

mlogin  allows you  to spawn an  interactive job  in
Manta.  Once running, your terminal will be attached
to the remote process running in the job via a shell
session tunneled  through HTTPS, similar  in concept
to SSH.

Interactive  sessions are a great way to debug a new
job  script in-situ,  or to  experience and  explore
the  compute zone environment hands on.  It can also
become  part   of  a  workflow   using   interactive
terminal utilities  on large Manta  objects  without
the need  to download  or transfer  the data -- e.g.
the use of mdb (the Modular Debugger) on crash dumps
and core files.

To get started with mlogin, you'll need to sign up for a Joyent Public Cloud account, which gets you access to the Manta Storage service as well as our IaaS cloud offering. Once you sign in to the portal with your new account, you'll find personalised instructions for adding your SSH key and for downloading and installing the Manta command-line tools (the source for which is available at joyent/node-manta on Github), which include mlogin.

Once you have the command-line tools installed and you've been able to use mput to upload a file into Manta, you can take a swing at mlogin. The simplest possible interactive job is what we call a Reduce task with no input objects. You can spawn one of these by just typing mlogin! Let's do that now, and while we're there we can take a look at the environment inside a Manta compute job.
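Roughly, the session goes like this (the prompts and comments are illustrative, and the exact output mlogin prints will differ):

$ mlogin                  # spawn a no-input reduce job; attach to a bash shell in its zone
$ env | grep '^MANTA_'    # (inside the session) list the MANTA_-prefixed variables
$ exit                    # leave the shell, which ends the job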

As you can see, the default mlogin invocation attaches you to a bash shell running in the Manta compute environment — a regular SmartOS zone. Some environment variables (starting with MANTA_) are provided that you can use in the job programs you write.

Let's try something a bit more complicated: an interactive job that uses an input object, also known as a Map task. Here at Joyent we write a lot of software in Javascript using the node.js platform. As part of debugging this software we often take core dumps of the running program, which we are then able to analyse post-mortem with mdb(1): the illumos Modular Debugger. We have recently taken to uploading those core dumps into Manta, where we can now (via the magic of mlogin) run mdb(1) in-situ on the dump without copying it to another system!

In this screenshot I have started a Manta job on an uploaded core file, and invoked mdb(1) on it. In a Map task, the MANTA_INPUT_FILE environment variable contains the path to where the system has mounted (read-only) your input object. I have then used ::findjsobjects, as described in Bryan Cantrill's blog post on finding node.js memory leaks.
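The commands involved are roughly these (the storage path and core file name are made up for illustration):

$ mput -f core.node.12345 /$MANTA_USER/stor/cores/core.node.12345   # upload the core dump
$ mlogin /$MANTA_USER/stor/cores/core.node.12345                    # start a map task on it
$ mdb $MANTA_INPUT_FILE      # inside the session: open the read-only input object in mdb
> ::findjsobjects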

One of the more gratifying parts of building mlogin(1) was the realisation that apart from the WebSockets engine for forwarding the live shell traffic to and from the client, the rest of the subsystem was implemented in terms of existing Manta primitives. I was able to pass configuration into the interactive job through the use of a regular Manta object, the job itself is a regular Manta compute job, and the forwarding agent that runs in the job is delivered into the Manta compute zone as a regular Asset. You can see more details of the implementation in the source, and I have high hopes that our user community will start producing their own innovative tools on top of Manta that I haven't even begun to imagine!

We at Joyent Engineering encourage you to check out Manta today, and we look forward to hearing your feedback!

Saturday, 25 May 2013

Sending Postfix Mail Via Comcast SMTP

I recently had need of mail output from a cron job in a zone on my SmartOS server at home. My connection is via Comcast cable, and unfortunately they seem to block outbound SMTP (port 25). As it happens, they have an SMTP relay host that you can use from Comcast IPs, but that service requires authentication.

First you must discover your Comcast username; something you have probably used at most once, ever. You can do this with the Comcast UID Lookup Tool. They list a few fictional street addresses, and hopefully your actual address. You should get something back of the form username@comcast.net. You will also need your password, which is obviously beyond the scope of this post.

Configuring Postfix is relatively straightforward. I'm using a SmartOS machine, with a base 13.1.0 zone dataset. This is the newest base dataset, but the instructions will likely apply with only minor variations to older (and hopefully future) datasets.

Create the password file for Postfix to use:

# touch /opt/local/etc/postfix/smtp_passwd
# chmod 0600 /opt/local/etc/postfix/smtp_passwd
# echo "smtp.comcast.net     username@comcast.net:password" \
          > /opt/local/etc/postfix/smtp_passwd
# postmap hash:/opt/local/etc/postfix/smtp_passwd

Create a canonical sender map file to rewrite all From: addresses to a valid e-mail address. This must be done, or else the Comcast SMTP server will reject your mail as coming from an invalid domain.

# echo '/^([^@]*)@.*$/    $1@yourdomain.com' \
    > /opt/local/etc/postfix/sender_rewrite

Edit the Postfix configuration file, /opt/local/etc/postfix/main.cf, and add these lines:

## -- Comcast SMTP Relay
relayhost = [smtp.comcast.net]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps =
        hash:/opt/local/etc/postfix/smtp_passwd
smtp_sasl_security_options = noanonymous
smtp_sasl_tls_security_options = noanonymous

## -- Rewrite all sender addresses:
sender_canonical_maps =
        regexp:/opt/local/etc/postfix/sender_rewrite

Now, install the SASL Authentication plugins and start up Postfix:

# pkgin -y in cy2-plain cy2-login cy2-digestmd5
# svcadm enable postfix

If you send mail at this point, you should see evidence of success (or failure) in /var/log/maillog (or, in older zone datasets, /var/log/postfix.log). You should also receive the mail! If you want an easy way to send a test e-mail, try mailx:

# date | mailx -s "test email #1" your@email.com
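While that message is in flight, watching the mail log is the easiest way to see the relay handoff (or any authentication errors):

# tail -f /var/log/maillog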

Thursday, 29 November 2012

DTrace and JSON: Together at last!

In August of this year I jumped on a plane and moved from Australia to San Francisco to work for Joyent. It's been busy and exciting, from learning about a new city to finding my stride in a new role. I thought I'd take a moment to talk about a new DTrace feature that I just added to illumos-joyent, the core of our SmartOS operating system!

At Joyent we have been implementing JSON-formatted log files in some of our systems programming work, mostly through the node-bunyan logging framework for node.js applications. Formatting our log messages as JSON payloads affords us a substantially easier time of analysing and processing log records programmatically. Extending log files with more fields no longer requires rewriting the scripts that understand how to parse them — we can simply add new properties to the log records!

Bunyan, like every other modern logging framework, provides for a notion of 'log levels'. Trace- or debug-level log messages generally provide information helpful during development — or later troubleshooting — but are usually too large in volume, and too much of a performance hit, to be left enabled in production at all times. Enter DTrace USDT probes for logging, added recently to Bunyan by Bryan Cantrill. Now, the developer or operator can dynamically enable logging at any level in any running process and receive the generated log records in the form of arguments to DTrace probes.

The only argument passed to the Bunyan log probes when they fire is a JSON-formatted string payload, containing the object passed to Bunyan for logging by the application. While this new feature is a definite boon for developers and operators alike in being able to better understand the operation of the system, it becomes clear reasonably quickly that the string manipulation subroutines available for use within DTrace actions are not the friendliest way to filter on or format for output the contents of a JSON-formatted payload.

When Bryan suggested that we should provide a subroutine for JSON parsing in DTrace, my interest was immediately piqued. The challenge of writing a compact, bespoke JSON parser to operate in the tightly restricted environment that is DTrace probe context was a rewarding one. The strict, well-defined nature of the JSON specification was an immediate and welcome asset in completing this work, which I put back yesterday.

The operation of the new json() subroutine is relatively straightforward. It accepts two arguments: a string containing a JSON payload and an element selector string describing which value to pull out of the JSON payload. Element selectors support:
  • simple string keys for objects, e.g. "key"
  • dot-separated keys for nested objects, e.g. "nested.object.key"
  • array indexing, e.g. "nested.object.array[5].key"

As a practical example, consider adding a http.ServerRequest object as the "req" property on an object that you then logged through Bunyan. First, we have to increase the DTrace strsize option to be large enough for the JSON payloads we'll be inspecting — in this example I've used four kilobytes. This D script would then filter on Bunyan log entries pertaining only to GET requests:

#pragma D option strsize=4k

bunyan*:::log-*
{
    this->j = copyinstr(arg0);
}

bunyan*:::log-*
/json(this->j, "req.method") == "GET"/
{
    printf("%s %s %s\n",
        json(this->j, "req.method"),
        json(this->j, "req.url"),
        json(this->j, "req.httpVersion"));
}
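Saved to a file (the name bunyan-get.d below is arbitrary), the script can then be run as root; add the -Z flag if no bunyan-instrumented process has started yet:

# dtrace -qs bunyan-get.d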

All values extracted by json(), including numeric values, are returned as strings, precisely as they appeared in the original JSON. As part of this work, I also added a strtoll() subroutine to convert strings to integer types in D. This enables you to take a numeric value from a JSON payload representing something like latency or request size and aggregate these values using functions like sum() or quantize(). For example, if you were to add the HTTP response size to the object logged to Bunyan, you could now trivially print out the sum of the sizes of all responses once a second:

#pragma D option strsize=4k

bunyan*:::log-*
{
    this->j = copyinstr(arg0);
    @ = sum(strtoll(json(this->j, "response_size")));
}

tick-1s
{
    printa(@);
    trunc(@);
}

These new features are in the illumos-joyent gate right now, and should make their way into the next SmartOS bi-weekly build. In the (hopefully near!) future I will seek to integrate them into the upstream illumos-gate repository so that other distributions can pick them up. My thanks to Bryan for guidance and code review, and to Joyent for being an awesome place to work!

Tuesday, 1 February 2011

Disable the Swoosh animation for Mac OS X Spaces

So, as it turns out there's a way to disable the motion sickness-inducing Swoosh! animation that happens when you switch between Spaces (virtual desktops) in Mac OS X. This appears to work on my 10.6.5 machine, but I haven't tested it anywhere else.

defaults write com.apple.dock \
workspaces-swoosh-animation-off \
-bool YES &&
killall Dock
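If you ever want the animation back, deleting the key and restarting the Dock should restore the default behaviour:

defaults delete com.apple.dock \
workspaces-swoosh-animation-off &&
killall Dock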

Sunday, 17 October 2010

OpenIndiana Automated Install Server

This is a draft set of steps for getting an automated install server configured on almost any platform using only Apache, DHCP and TFTP. It's very rough at this point but it functions well enough to PXE boot and install a copy of OpenIndiana (OI).

First up, you should make a directory /export/install and:
git clone git://github.com/jclulow/illumos-misc.git /export/install


If you don't have git you can grab a tarball of the repository at github's web interface.

Until an OI bootable Automated Install (AI) ISO is available you can use the distro constructor to create your own. If someone wants to host a copy of a functional ISO that I've built, please let me know! I've made a few modifications to the AI ISO build descriptor that comes with OpenIndiana. You should grab the distro_const/ai_x86_image_JMC.xml file from the github repository and (on an OI 147 host) run:
distro_const build ai_x86_image_JMC.xml


After a while you'll get a usable ISO in /rpool/dc/media that you can use to set up the rest of your environment. You should extract the contents of the ISO into /export/install/ai_image. Assuming your TFTP server is rooted in /tftpboot you'll want to:
cp -r /export/install/ai_image/boot /tftpboot/oi


You'll also want a local IPS repository containing the current OI packages. Fetch this 2GB tarball: oi_147_spin2.tar.bz2. Extract it into /export/install/repo.
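Assuming you've downloaded the tarball into /export/install, the extraction goes something like this (adjust the paths if the archive unpacks into its own top-level directory):
mkdir -p /export/install/repo
bzip2 -dc /export/install/oi_147_spin2.tar.bz2 | (cd /export/install/repo && tar xf -)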

You can use rsync to bring the repo seed files you got from the tarball up to date, thus:
rsync -a pkg-origin.openindiana.org::pkgdepot-dev /export/install/repo/


In order to simulate parts of the automated install server that ships with OI, I'm using a few CGI shell scripts. The git repository contains two ksh scripts and a list of packages to be installed (cgi-bin/PACKAGES_LIST), which you can customise to your liking. I've also prepared some responses to the /versions/0 and /publisher/[01] methods of a real IPS repository server. As these responses are essentially static, I'm just using regular text files.

Configure Apache (I used version 2.2 from pkgsrc) on your system. You'll need two virtual hosts, each listening on a different port (e.g. 5555 and 10000). These vhosts will map the various service URLs onto local repository content and the cgi scripts. They should be configured as per the sample in the git repo: doc/apache_vhost_config.txt.

Make sure you set the correct URL for the IPS repository vhost in the environment variable $REPO_URL_MAIN in cgi-bin/ai-manifest.ksh. This tells the AI client to use your new local repository instead of the one on the Internet. Unlike the public URL, yours will not end in /dev if you've used the exact vhost configuration I've provided. Note that the additional /legacy repository is, by all accounts, incredibly large, and you don't need many packages from it, so I'm just using the public remote copy.
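Using the example addresses from this post, and assuming the vhost listening on port 5555 is the one serving the repository, the assignment in cgi-bin/ai-manifest.ksh ends up looking something like:
REPO_URL_MAIN="http://10.1.1.10:5555"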

You should also create a GRUB menu.lst from the example in the git repository using the IP address and port numbers of your Apache vhosts and put it in /tftpboot/oi.

Finally, configure DHCP (I use ISC dhcpd) to answer your host's PXE requests. If you're also using ISC then something like this snippet should suffice:
...
option grubmenu code 150 = text;
...
# Force grubmenu to appear in the request list...
if exists dhcp-parameter-request-list {
    option dhcp-parameter-request-list = concat(option dhcp-parameter-request-list,96);
}
...
host odin {
    hardware ethernet 00:13:72:17:39:d2;
    fixed-address 10.1.1.30;
    next-server 10.1.1.10;
    filename "oi/grub/pxegrub";
    option grubmenu "oi/menu.lst";
}


With all this together you should be able to PXE boot a host with OI 147! Feedback and corrections welcome.

NB: Credit where it's due, I started with this page on the OpenIndiana Wiki.

Saturday, 19 June 2010

Forcing PXE Clients not to broadcast for extra DHCP options

We have a site-wide PXE boot setup that manages workstations everywhere (using Altiris). Occasionally I want to netboot a specific non-managed boot loader from a specific TFTP server just by configuring the DHCP options for that host.

By default the PXE client will request (via broadcast) an address. Our primary ISC DHCP server answers this request. The PXE specification, however, allows for additional DHCP servers that don't provide addresses but do provide boot options (i.e. TFTP server and filename) to clients. These additional options (coming from Altiris) override those provided in the original DHCP response by our primary ISC DHCP server.

After a quick read through the PXE specification I discovered a workaround that forces a specific client to use the TFTP server/filename provided in the original DHCP response. PXE clients will accept a discovery control setting in that response; you can use it to disable the secondary broadcast behaviour and force the PXE client to do as the primary DHCP server instructed. This option is an encapsulated vendor option, so we need to configure it in dhcpd.conf, thus:

# PXE Vendor Option Space:
option space PXE;
option PXE.discovery-control code 6 = unsigned integer 8;


Then, when defining a client you toggle on the appropriate bits:

host jmcdesk {
    hardware ethernet 00:23:ae:61:13:d6;
    next-server 10.10.10.10;
    filename "pxelinux.0";
    vendor-option-space PXE;
    option PXE.discovery-control 11;
}


The option in question is PXE_DISCOVERY_CONTROL from the Preboot Execution Environment (PXE) Specification Version 2.1. The value eleven (11) informs the client to skip broadcast/multicast discovery and to use the boot filename from the original DHCP request.
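For the curious, eleven is just the sum of the relevant discovery control bits defined in the spec:

# PXE_DISCOVERY_CONTROL = 11 = 1 + 2 + 8
#   bit 0 (1): disable broadcast discovery
#   bit 1 (2): disable multicast discovery
#   bit 3 (8): just use the boot filename from the initial DHCP/ProxyDHCP reply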