Ataraxia Consulting

Peace of mind for consulting

Binary addon design

When writing your application in Node you may need to communicate with an external library or device in a way that is impractical or impossible from pure JavaScript. It may be that the overhead added by ffi is also too much for your application. Or -- the least likely scenario -- you have demonstrable proof that your compiler can generate a faster version of your algorithm than the V8 JIT can (while this might be easy to confirm in the micro case, it's quite unlikely to be true for everyone given the varying CPU and compiler combinations in use). If you find yourself in one of these rare cases, it is necessary to write a binary addon.

If you need to write a binary addon, here are some basic guidelines to keep in mind:

(When "native layer" is used, it's meant to refer to actions performed in the C/C++ side, vs what happens in JavaScript)

Opt Out

Just to reinforce what I said before, by writing this module in C/C++ you're explicitly opting out of any runtime benefit the JIT may provide you. Remember the JIT will watch execution of your functions and optimize and inline those functions on the fly.

You will need to strike a good balance between what is done in JavaScript and what is done in C/C++. I would vote to do as little as possible on the C/C++ side, just enough to unpack and pack the values you are passing back and forth between the library and JavaScript.

JavaScript for consumers

To help enforce the previous rule, it's best to wrap a method defined in C/C++ that's meant to be consumed by other users in a JavaScript shim. That is, if you NODE_SET_METHOD(target, "foo", Foo) you shouldn't then do exports.foo = binding.foo, since that puts the burden of argument checking and validation on the native layer. Instead consider:

exports.foo = function(bar, baz) {
  if (isNaN(baz) || baz > 10)
    throw new Error("u did it wrong");
  binding.foo(''+bar, +baz);
};

Since we're passing bar coerced as a string and validating baz's input, we can make more assumptions about the safety of certain actions in the native layer.

We're here to write JavaScript, so let's actually do that whenever possible.

Cheap boundary

It is pretty darn cheap to cross the JavaScript and C/C++ boundary; crossing it is unlikely to be the bottleneck in your application. Write and design what feels comfortable and is easy for you to understand and maintain, with the following caveat:

Primitives Please

You will get the best bang for your buck if you interact only with "primitives", like:

  • v8::Number or v8::Integer
  • v8::String
  • v8::Function
  • node::Buffer
  • v8::External
  • v8::Array

The methods you export from your binding should avoid (like the plague) creating, inspecting, or mutating complex objects from the native layer. It's extraordinarily slow. It's fast in everyday JavaScript usage because the JIT gets to do all sorts of fancy caching and inlining, which you won't get because you explicitly opted out.

Just in case there's any confusion: if the method defined in the native layer will be called often, make sure it does not call ->Get and especially not ->Set on an object passed to it. Doing so will only make you sad. It is, however, ok to call these methods on relatively small v8::Arrays, because indexed access is just normal pointer arithmetic to get at the values. It's the v8::String based lookups and sets you need to be wary of.
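
One way to honor this in practice is to unpack any option object on the JavaScript side and hand the native layer nothing but primitives. A minimal sketch -- binding.configure here is a hypothetical native method, not from any real library:

exports.configure = function(opts) {
  // Coerce each field to a primitive in JavaScript so the native layer
  // never has to call ->Get on the options object
  binding.configure('' + opts.host, opts.port | 0, !!opts.secure);
};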

Remember, it's cheap to cross the boundary. So if you'd like a method to return a complex object, instead pass the method a factory function that takes the representative pieces, creates the object in JavaScript, and then returns it to the native layer, which is then free to do whatever it needs to with that object.

function createObj(a, b, c) {
  return {
    foo: a,
    bar: b,
    baz: c,
  };
}

var myObj = binding.do_something('foobarbaz', createObj);
// myObj.foo
// myObj.bar
// myObj.baz

Don't throw

First and foremost, do NOT use C++ exceptions, ever, at all, in your code. Node is compiled with -fno-exceptions, which doesn't prevent you from using exceptions in your own addon, but does change how things are cleaned up when C++ exceptions are thrown. Do NOT use them. Just don't.

Try to avoid throwing exceptions from the native layer. While it's certainly possible to do so (and Node does so in places), it won't be as helpful as you might want it to be; mostly you won't really know the what and where of something thrown on the native side, just that it threw. (You can always use very explicit messages and things like __func__, __FILE__, and __LINE__ to help in that regard ... eww)

Chances are, if something that is wrong for the native side slips through the JavaScript side, you want to assert and die a horrible death instead of soldiering on only to hit arbitrary memory corruption later.

Remember, we're here to write JavaScript, don't be afraid of it.

Handle wrapping

As an extension of the "JavaScript for consumers" section, consider the following pattern:

var assert = require('assert');
var binding = require('bindings')('mylib');

function Wrapper(opts) {
  if (!(this instanceof Wrapper))
    return new Wrapper(opts);

  assert(opts.foo);
  assert(opts.bar);

  this._handle = binding.libraryInit(opts.foo, opts.bar);
}

Wrapper.prototype.doSomething = function(baz) {
  assert(baz);

  binding.doSomething(this._handle, baz);
};

module.exports = Wrapper;

The idea here is that you have some sort of resource handle that needs to be reused in subsequent calls to your binding. Again, this is perfectly doable in C/C++ by following the node::ObjectWrap pattern, or by implementing something similar yourself, but that sacrifices most of what the JIT can provide you.

In this model we can do more of the validation in JavaScript before sending it to C/C++ and potentially crashing.

If you need to handle finalization of the handle after it goes out of scope in JavaScript, you should make the handle a v8::Persistent and then call MakeWeak to define a callback that will be triggered when the GC is about to collect your object.
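
If you'd rather not lean on the GC at all, you can also give consumers deterministic cleanup from the JavaScript side. A sketch along the lines of the Wrapper above, assuming a hypothetical binding.libraryDestroy that frees whatever libraryInit allocated:

Wrapper.prototype.close = function() {
  // Release the native resource explicitly instead of waiting on a
  // weak callback; binding.libraryDestroy is a hypothetical method
  if (this._handle) {
    binding.libraryDestroy(this._handle);
    this._handle = null;
  }
};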

Use bindings

Please, oh please, use bindings. This simple library takes care of the naming and pathing changes that occur across various configurations and versions of Node. Most importantly, if I've compiled a debug version of Node, I'll actually build and run a debug version of your module and may be able to help figure out what might be going wrong.
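
To make it concrete, bindings replaces the fragile hard-coded require path you'd otherwise write (the path below is just one example of node-gyp's output layout):

// Fragile: breaks for Debug builds and when the build layout changes
// var binding = require('./build/Release/mylib.node');
// Robust:
var binding = require('bindings')('mylib');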

Know your tools

Do not be afraid to compile Node with ./configure --debug and get your hands dirty with mdb or gdb to figure out just what is causing your module to crash. There are lots of resources out there to help you with that, but often just getting the stack trace from a core file will tell you quite a lot about what you've done wrong.

TL;DR

Write more JavaScript and less C/C++

TL;DR P2

  • This code can't be optimized any more than the compiler you used already has -- no JIT for you
  • Make sure consumers are getting JavaScript functions
  • It's cheaper than you think to call between JavaScript and C/C++
  • Do NOT mutate objects in C/C++
  • Avoid exceptions in C/C++ whenever possible
  • Define your wrapper classes in JavaScript
  • Use the bindings module
  • Don't be afraid to debug


You're going to have to rewrite it anyway

Node v1.0 is approaching, and v0.12 is imminent (as far as that goes for FOSS projects). As we work towards getting v0.12 out the door, there have been a lot of changes happening to node's primary dependency, v8. Ben is working on moving us to the 3.20 branch; follow his progress here.

As you can tell, this is a significant change to the API, one that requires touching virtually every file in our src/. It has been a huge headache for him, and will ultimately cause a huge headache for developers of binary addons.

You're going to have to #ifdef around significant portions of the API to keep your module working across different versions of node. This is going to cause endless amounts of pain and issues for node and for developers, who have for the most part been accepting of the churn in our underspecified addon API.

This one is going to hurt.

A lot.

TL;DR -- A modest proposal

Since you're going to have to rewrite your module anyway, it's time for node to specify and export the API we are going to "bless" for addons. That is: just what API we are going to support and make sure continues to work across minor and major releases, along with a deprecation policy.

More specifically, I think we should be exporting a separate (and not equal) wrapper around, at the very least, javascript object creation, get/set, and function calling.

Additionally, we should package and distribute (if possible in npm) a transitional library and headers which module authors can target today, allowing their module to compile and work from v0.8 through v1.0.

The Platform Problem

We currently allow platforms/distributors to build against shared (their own) versions of many of our dependencies, including but not limited to:

  • v8
    • Holy crap, we're about as tightly coupled to the version of v8 we ship as chromium itself is.
  • libuv
    • Even if we weren't strictly coupled to v8, we certainly are to libuv; there would be no (useful) node without libuv.
  • openssl
    • This is a must for linux distributions, who like to break DSA keys and then make every dependency vulnerable as a result (sorry Debian, I keed I keed).
    • This actually allows distributors who know specific things about their platform to enable/disable the features that allow it to run best.
  • zlib
    • Meh, this isn't such a big deal, it doesn't really change all that often.
  • http_parser
    • Really? People ship this as a separate library?

This functionality was added to appease platform builders, the likes of Debian, Fedora, and even SmartOS. However, doing so has complicated and muddled the scenario of building and linking binary addons.

Currently node-gyp downloads the sourceball, extracts the headers from it, and makes some assumptions from process.config about how to build your addon. In practice this has been working reasonably well.
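
You can poke at those assumptions yourself from the REPL. The exact keys vary by node version and build configuration, so treat these names as illustrative:

// Print a couple of the build-time knobs node-gyp consults
console.log(process.config.variables.node_shared_v8);
console.log(process.config.variables.node_shared_openssl);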

However, I'm very concerned about this as a long term strategy. It's possible for someone to have tweaked or twisted the node (or one of its dependencies) builds, which could lead to some unintended consequences. In the "best" case, you'll get a compiler error from a changed API or clashing symbol. In the worst case they have modified the ABI which will manifest itself in unexpected and often subtle ways.

Not to mention that we have no good answer for how to build and link addon modules against the proper version of a shared dependency (what if the system has multiple openssls, or what if a module compiled against it in one place but now runs against it in another?).

And last but not least, how do modules consume symbols from our dependencies that node itself doesn't consume? Consider a specific crypto routine from openssl that you want to provide as an addon module because node doesn't currently have an interface for it.

Enemies without, and enemies within

As if it weren't bad enough that platforms may ship against a version of v8 that we haven't blessed, we (and addon developers) have to fight against the beast that is the v8 API churn.

I don't really fault Google and the chromium or v8 team for how they are handling this; more often than not we just end up with ugly compile-time deprecation warnings, letting us know the world is about to break.

However, there have been times -- like right now -- where node can't paper over the drastic change in the v8 API for module developers. And as a result we begrudgingly pass the API change to module authors.

To paraphrase: don't forget that excrement will inevitably lose its battle with gravity.

So what are we going to do?

Meat and Potatoes

This is where I don't particularly have everything fleshed out, and I'm sure I will take a considerable amount of heat from people on API decisions that haven't been made.

I want to export the following interfaces:

  • node/js.h
    • Object creation and manipulation.
    • Function calling and Error throwing.
  • node/platform.h
    • IO and event loop abstraction.
  • node/ssl.h
  • node/zlib.h
  • node/http.h

While I am not particularly attached to the names of these headers, each represents an interface that I think module authors would opt to target. I only feel strongly that we export js and platform as soon as possible, as they are the primary interactions for every module.

Basic Principles

There are only a few principles:

  • Avoid (like the plague) any scenario where we expose an ABI to module authors.
    • Where possible use opaque handles and getter/setter functions.
  • The exported API should be a reliable interface which authors can depend on working across releases.
  • While a dependency may change its API, we have committed to our external API and need to provide a transitional interface in accordance with our deprecation policy.
  • The API should never expose an implementation detail to module authors (A spidermonkey backed node one day?).

Platform

The platform interface is the easiest to discuss, but the pattern would follow for ssl, zlib, and http.

This would just re-export the existing uv API, but with a C-style namespace of node_. Any struct passing should be avoided, and libuv would need to be updated to reflect that.

JS

I expect the js interface to be the most contentious, and also fraught with peril.

The interface for addon authors should be C. I don't want to forsake the C++ folks, but I think the C++ binding should be based on our C interface.

I was going to describe my ideal interface, and frame it in context of my ruby and python experience. However, after a brief investigation, the JSAPI for spidermonkey exports almost exactly the API I had in mind. So read about that here.

Would it make sense, and would it be worth the effort, for node to export a JSAPI compatible interface?

Would it make more sense to export a JSAPI-influenced API currently targeted at v8 which could be trivially extended to also support spidermonkey?

UPDATE 2013-07-08:

It's interesting and worthwhile to have a conversation about being able to provide a backend-neutral object model, though our current coupling to v8 and its usage in existing addons may not make it possible to entirely hide away the eccentricities of the v8 API. But what we can provide is an interface that is viable to target from release to release, regardless of how the public v8 API changes.

Prior Art

A lot of these ideas came from a discussion I had with Joshua Clulow while en route to NodeConf.

Part of that conversation was about v8+ which was written by a particularly talented coworker, who had a rather nasty experience writing for the existing C++ API (such as it is).

There's some overlap in how it works and how I envisioned the new API. However, I'm not sure I'm particularly fond of automatically converting objects into nvlists, though that does solve some of the release and retain issues.

In general I would advocate opaque handles and getter and setter functions, with a helper API which could do that wholesale conversion for you.

Really though this matters less in a world where addon authors are following some defined "Best Practices".

  • Only pass and return "primitives" to/from the javascript/C boundary
    • Primitives would be things like: String, Number, Buffer.
  • Only perform object manipulation in javascript, where the JIT can work its magic

Dessert

Work on this needs to begin as soon as possible. We should be able to distribute it in npm, and authors should be able to target it by including a few headers in their source and adding a dependency stanza to their binding.gyp; by doing so, their module will work from v0.8 through v1.0.

I mean, you're going to have to rewrite it anyway.

Discussion should happen on the mailing list on thread: https://groups.google.com/d/msg/nodejs/VlUJ68n6QBg/fPsuArtR0roJ



Limit access to only Google Maps using Squid

Recently I needed a small kiosk for some truck drivers to easily use google maps to verify their routes. But I wanted to make sure that's all they were using the kiosk for. I had considered writing my own google maps portal, and I may still yet, but for now I implemented the limitation as an acl in squid.

I can't say this will always work, as it's at google's discretion to change urls and hostnames anytime, but it works for me as of now. I hope someone else finds this information useful.

These are the domains I've allowed so far:


# Primary domains for most traffic
acl GMAPS dstdomain maps.google.com maps.gstatic.com

# Some stock google images come from here
acl GMAPS dstdomain ssl.gstatic.com

# These aren't strictly necessary, but I didn't think it would be harmful to add
acl GMAPS dstdomain safebrowsing.clients.google.com
acl GMAPS dstdomain cache.pack.google.com

# Nearly every query hits this, I couldn't find good information about it
# Some suggest it's related to ads, things work without it but I couldn't
# find a good reason not to include it
acl GMAPS dstdomain id.google.com

# Map Images
acl GMAPSREG dstdom_regex -i ^mt[0-9]+\.google\.com$
# Earth/Satellite images
acl GMAPSREG dstdom_regex -i ^khm[0-9]+\.google\.com$
# Street view
acl GMAPSREG dstdom_regex -i ^cbk[0-9]+\.google\.com$
# Location Images
acl GMAPSREG dstdom_regex -i ^t[0-9]+\.gstatic\.com$

# Printing a map calls the chart api
acl GMAPSURL url_regex -i ^http://www\.google\.com/chart\?

#... further down near the end of the http_access stanzas

http_access allow GMAPS localnet
http_access allow GMAPSREG localnet
http_access allow GMAPSURL localnet

# And finally deny all other access to this proxy
http_access deny all


Query specific DNS server in nodejs with native-dns

The DNS options available to node.js are a little slim: it only provides a wrapper around c-ares capable of doing the simplest of record type lookups. There's no interface for customizing your queries more granular than modifying your platform's equivalent of /etc/resolv.conf.

With the idea of customization in mind I created native-dns, an implementation of a DNS stack in pure javascript. Below you'll find a quick example of how to query the google public DNS servers for the A records for "www.google.com". To install native-dns you simply need to run npm install native-dns.

var dns = require('native-dns'),
  util = require('util');

var question = dns.Question({
  name: 'www.google.com',
  type: 'A', // could also be the numerical representation
});

var start = new Date().getTime();

var req = dns.Request({
  question: question,
  server: '8.8.8.8',
  /*
  // Optionally you can define an object with these properties,
  // only address is required
  server: { address: '8.8.8.8', port: 53, type: 'udp' },
  */
  timeout: 1000, /* Optional -- default 4000 (4 seconds) */
});

req.on('timeout', function () {
  console.log('Timeout in making request');
});

req.on('message', function (err, res) {
  /* answer, authority, additional are all arrays with ResourceRecords */
  res.answer.forEach(function (a) {
    /* promote goes from a generic ResourceRecord to A, AAAA, CNAME etc */
    console.log(a.promote().address);
  });
});

req.on('end', function () {
  /* Always fired at the end */
  var delta = (new Date().getTime()) - start;
  console.log('Finished processing request: ' + delta.toString() + 'ms');
});

req.send();

/* You could also req.cancel() which will emit 'cancelled' */

You can of course use native-dns as a drop-in replacement for the builtin 'dns' module. And there is even a very basic DNS server which you can use to respond to DNS requests; the semantics of the responses are up to you and beyond the scope of the module itself.
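
The server follows the same event pattern as the client request above. Here's a toy sketch that answers every question with a static A record; consider it illustrative of the shape of the API rather than a definitive reference:

var dns = require('native-dns');

var server = dns.createServer();

server.on('request', function (request, response) {
  // A real server would dispatch on request.question[0].name and type
  response.answer.push(dns.A({
    name: request.question[0].name,
    address: '127.0.0.1',
    ttl: 600,
  }));
  response.send();
});

server.serve(15353); // listen for queries on port 15353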

You can find the source and more information and examples at the github repository, which is named node-dns because I created it before I realized that name was taken in npm.



Giganews VyprVpn on Linux with IPSEC and L2TP

I'm not a fan of PPTP, but unfortunately that's the only configuration option listed for giganews' VyprVPN service (http://www.giganews.com/vyprvpn/setup/linux/pptp.html). So the following are a few configuration files you can use to connect to VyprVPN using IPSEC and L2TP. I tested with Ubuntu 10.04, OpenSWAN, and xl2tpd.

The /etc/ipsec.conf stanza

conn giganews
        authby=secret
        pfs=no
        rekey=yes
        keyingtries=3
        type=transport
        left=%defaultroute
        leftprotoport=17/1701
        right=us1.vpn.giganews.com
        rightid=@us1.vpn.giganews.com
        rightprotoport=17/1701
        auto=add

The /etc/ipsec.secrets stanza

%any us1.vpn.giganews.com: PSK "thisisourkey"

The /etc/xl2tpd/xl2tpd.conf stanza -- be sure to replace giganews_username with your username

[lac giganews]
lns = us1.vpn.giganews.com
require chap = yes
refuse pap = yes
require authentication = yes
; Name should be your giganews username
name = giganews_username
ppp debug = no
pppoptfile = /etc/ppp/options.l2tpd.client
length bit = yes

The /etc/ppp/chap-secrets stanza -- be sure to replace giganews_username and giganews_password with your username and password respectively

giganews_username us1.vpn.giganews.com "giganews_password" *

The /etc/ppp/options.l2tpd.client file

ipcp-accept-local
ipcp-accept-remote
refuse-eap
noccp
noauth
crtscts
idle 1800
mtu 1410
mru 1410
defaultroute
debug
lock
#proxyarp
connect-delay 5000

You can replace us1.vpn.giganews.com with any of the following endpoints; just make sure you replace all instances in the preceding configuration files:

  • us1.vpn.giganews.com for Los Angeles, CA
  • us2.vpn.giganews.com for Washington, DC
  • eu1.vpn.giganews.com for Amsterdam
  • hk1.vpn.giganews.com for Hong Kong

To connect, first bring up the IPSEC connection:

ipsec auto --up giganews

When that's successful, connect l2tp:

echo "c giganews" > /var/run/xl2tpd/l2tp-control

If that's successful, ppp will have replaced your default route to go out over ppp0, which represents your vpn connection.

Most of the instructions were adapted from http://www.jacco2.dds.nl/networking/linux-l2tp.html



Steve Earle -- Every Part Of Me

Steve Earle recently released a new album entitled "I'll Never Get Out of This World Alive" (the title of an old Hank Williams song). I'm particularly infatuated with this album, mostly for its simplicity. All the songs were recorded live, with little overdubbing, reinforcing that music doesn't have to be complicated and layered to be enjoyable and moving. The song that best represents this ethos is "Every Part Of Me"; the following are the lyrics and chords as I interpret them. Here's a video of him performing it and a brief audio clip of me playing the two major themes.

Every Part Of Me
Steve Earle
I'll Never Get Out of This World Alive
C
 
[C] [G] [Am] [G] [C]
[C]I love you with all my heart
[G]all my soul [Am]every [G]part of me
[C]it's all I can do to mark
[G]where you end and [Am]where I [G]start you see
 
[Am]living long in [G/B]my travails
I [C]left a trail of [G/B]tears behind me
[Am]been in love so [G]many times
didn't [F]think this kind would [Am]ever [G]find me
 
[C]I love you with everything
[G]all my weakness [Am]all my [G]strength
[C]I can't promise anything
[G]except that my last [Am]breath will [G]bear your name
 
[Am]and when I'm gone they'll [G/B]sing a song
a[C]bout a lonely [G/B]fool who wandered
[Am]around the world and [G]back again
[F]but in the end he [Am]finally [G]found her
 
[C]I love you with all my heart
[G]all my soul and [Am]every [G]part of me
[C] [G] [Am] [G]
[C] [G] [Am] [G]
 
[Am]cross the univer[G/B]se I'll spin
un[C]til the ending and [G/B]then I wonder
[Am]if we should get a[G]nother chance
could [F]I have that dance
for[Am]ever [G]under
 
[C]a double moon and sky lit stars
[G]shining down on [Am]where you [G]are
[C]and I love you with all my heart
[G]all my soul and [Am]every [G]part of me
[C] [G] [Am] [G] [C]


Linsides - LinCached - 3rd Party Linode Service

Prior to today I had considered this site, while not terribly popular (or frequently updated), to be relatively quick with little effort on my part. I may not host many high traffic sites, but I have my personal sites and a handful of others. I have a server specifically to handle HTTP traffic, and a server to handle the RDBMS. They use the private network afforded to me by Linode.com (my VPS provider of choice), so communication between the two servers is quick and doesn't count against my monthly bandwidth quota. It's worked for years, so I've had little desire to muck with the formula.

That is until I learned about Linsides.com -- a company that offers services specifically for Linodes over the private network.

The offering is young, but the service is delivered with slick ease. They currently offer NTP, APT caching, and LinCached (a memcached frontend). The services are only available over the private network, which means you have to be a Linode.com customer before you can take advantage. NTP and APT caching are offered for free and are conveniences to provide fast responses and to keep load on public mirrors low. That is, you get the same quality as if you were connecting to them publicly, but they're generally delivered faster and don't count against your monthly bandwidth quota.

LinCached is a private network memcached instance, which you can configure to be one of the following sizes: 32MB, 64MB, 96MB, or 128MB. Linsides uses a prepaid credit system for managing payments. Each size of memcache instance costs a certain number of credits per day: a 32MB instance is 1 credit, 64MB is 2 credits, and so on. When you sign up you get 5 free credits, so you can get a free trial of a 32MB instance for 5 days. There's a dashboard that lets you see how many instances you have, what their sizes are, and your current usage on each instance. If you drill down further you can even see a snazzy progress bar that gives you a visual way to identify how much of your instance you're using. You can even quickly flush a specific instance.

After you've created your LinCached instance, you simply need to add the private IP address of the node that you want to grant access to and boom, you're done. All in all it took about 10 or 15 seconds of simple data entry on Linsides.com's clean site to add a new instance, and it was instantly available to me. I made the necessary changes so this site would take advantage of memcache, and just as instantly I started to see the usage appear on the Linsides dashboard.

Simple, dead easy. A perfect no hassle way for me to increase performance of my site in under 15 minutes (realistically under a minute).

Now, I'm perfectly capable of running my own memcached instances. But what's key here is that it's not using memory on my web servers or my database server, and I didn't have to spin up yet another node to achieve that. Pricing is affordable as well: credits come in packages that range from $10 for 100 to $300 for 6K. They are also planning to offer more services in the future. I'm excited!
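
Assuming the per-day pricing scales linearly, at the $10-for-100 bundle a credit is about $0.10, so a 32MB instance (1 credit a day) works out to roughly $3 a month, and a 128MB instance (4 credits a day) to roughly $12.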

Linode.com and Linsides.com -- A match made in heaven!



Automatically generate AirPrint Avahi service files for CUPS printers

Last weekend I read Ryan Finnie's excellent article about getting CUPS printers to work with AirPrint (http://www.finnie.org/2010/11/13/airprint-and-linux/). I got a bit angry at Ubuntu/avahi/CUPS/Apple regarding some silliness involving the APIs used internally by CUPS for DNSSD announcement (like the fact that it's broken in 10.04 and 10.10 because Apple changed APIs and the new API calls weren't packaged (yet?)). So after I finished ranting to myself I created the service file and boom, I could print from my iPhone.

Neat.

It sucks to have to create these .service files manually, though; if only something could just talk to CUPS and spit out these files for me. So this weekend I decided to whip up a small script to do just that.

https://github.com/tjfontaine/airprint-generate

It's a small python script that can talk to a CUPS server (by default over the local socket) and write out some xml suitable for use with avahi. It doesn't do much special: it just grabs all configured printers that are marked shared and creates files that, when avahi exports them, will make the printers visible from an iOS device. You are responsible for making sure your printer is configured properly in CUPS. You should make sure CUPS can send and print a test page; if it can't do that, it's unlikely that the print you send from your iOS device will work either. Your CUPS server should also have a working PDF filter, since most of the time that's what the iOS device will send.

Without any options, it will communicate with your local CUPS instance (or, that is to say, it will do whatever the cups client library does by default; there may be environment variables at play here). After it learns about each printer, it will generate an xml file named AirPrint-[name of printer in CUPS].service; putting this file somewhere avahi knows to load it (in my experience /etc/avahi/services) will automagically make the printer available to iOS devices. You can also specify -p [PREFIX] if you aren't a fan of AirPrint-. There is also -d [TARGET DIRECTORY] if you want to specify the avahi services directory; if you supply this parameter all the xml files will be generated in that directory, otherwise they will be generated in the current working directory.

DNSSD has a limit of 255 characters for a txt-record entry, and currently not all fields are verified against this. The one place where it is checked is the entry that generates the "pdl=" record, the hint record that specifies what content-types the printer will accept. There is an internal priority list (in the future you will be able to influence this) that keeps important content-types at the head and experimental/unnecessary ones out altogether. The resulting entry is truncated to fit into 255 characters (without creating malformed entries). If you're curious to see what will be truncated, make sure to run the script with the rather ugly --verbose option.

In the future (with proper motivation) I will add the other hint fields that include things like duplexing, but it wasn't immediately obvious to me which CUPS printer attributes store that information in a consistent way.

It would also be trivial to take this script and, instead of generating avahi service files directly, use a python binding for dnssd/avahi/bonjour to do the announcements directly (at least as a stop gap until CUPS >= 1.4 and Debian (and derivatives) get the packaging solidified [and add airprint announcements]).



Linode API Python3 and GitHub

Josh Wright today contributed a Python3 branch of the API. I've pushed this to the repo and cherry-picked a few of the commits that apply to master as well. I've also created a github repo that will serve as the main access point for the code from here on out. Please, if you have issues, file them. Also, you will hopefully find the examples useful; I'll be moving those over to the wiki as well. If you want to contribute please don't hesitate -- Josh already identified the need for unit tests. Thanks to Linode for being a great resource, and thanks to everyone who has used and/or contributed to the bindings!



Updated Linode API

I've updated the python bindings to support the new Linode StackScripts method calls -- an excellent feature to aid in the deployment of your new nodes. The documentation is also up to date, albeit in need of some verbosity. You can browse the source at my gitweb, or as usual you can clone the source directly with git:

git clone git://github.com/tjfontaine/linode-python.git
