04 Sep 2013
Node v1.0 is approaching, and v0.12 is imminent (as far as that goes for FOSS projects). As we work towards getting v0.12 out the door, there have been a lot of changes happening to node’s primary dependency, v8. Ben is working on moving us to the 3.20 branch; follow his progress here.
As you can tell, this is a significant change to the API, one that touches virtually every file in our src/. It has been a huge headache for him, and it will ultimately cause a huge headache for developers of binary addons. You’re going to have to #ifdef around significant portions of the API to keep your module working across different versions of node, which is going to cause endless amounts of pain and issues for node and for developers who have, for the most part, been accepting of the churn in our underspecified addon API.
This one is going to hurt.
TL;DR – A modest proposal
Since you’re going to have to rewrite your module anyway, it’s time for node to specify and export the API we are going to “bless” for addons. That is, exactly which API we will support and guarantee continues to work across minor and major releases, along with a deprecation policy.
Additionally, we should package and distribute (if possible, in npm) a transitional library and headers which module authors can target today, allowing their modules to compile and work from v0.8 through v1.0.
The Platform Problem
We currently allow platforms/distributors to build against shared (their own) versions of many of our dependencies, including but not limited to:
- Holy crap, we’re about as tightly coupled to the version of v8 we ship as chromium itself is.
- If we aren’t strictly coupled to v8, we certainly are to libuv; there would be no (useful) node without it.
- This is a must for linux distributions, who like to break DSA keys and then make every dependency vulnerable as a result (sorry Debian, I keed I keed).
- This actually allows distributors who know specific things about their platform to enable/disable the features that allow it to run best.
- Meh, this isn’t such a big deal, it doesn’t really change all that often.
- Really? People ship this as a separate library?
This functionality was added to appease platform builders, the likes of Debian, Fedora, and even SmartOS. However, doing so has complicated and muddled the scenario of building and linking binary addons.
Currently node-gyp downloads the sourceball, extracts the headers from it, and makes some assumptions from process.config about how to build your addon. In practice this has been working reasonably well.
However, I’m very concerned about this as a long term strategy. It’s possible for someone to have tweaked or twisted the node (or one of its dependencies) builds, which could lead to some unintended consequences. In the “best” case, you’ll get a compiler error from a changed API or clashing symbol. In the worst case they have modified the ABI which will manifest itself in unexpected and often subtle ways.
Not to mention that we have no good answer on how to build and link addon modules against the proper version of a shared dependency (what if the system has multiple openssl’s, what if they compiled against it in one place, but now run against it in another).
And last but not least, how do modules consume symbols from our dependencies that node itself doesn’t consume? Consider a specific crypto routine from openssl that you want to provide as an addon module because node doesn’t currently have an interface for it.
Enemies without, and enemies within
As if it weren’t bad enough that platforms may ship against a version of v8 that we haven’t blessed, we (and addon developers) have to fight against the beast that is the v8 API churn.
I don’t really fault Google and the chromium or v8 team for how they are handling this; more often than not we just end up with ugly compile time deprecation warnings, letting us know the world is about to break.
However, there have been times – like right now – where node can’t paper over the drastic change in the v8 API for module developers. And as a result we begrudgingly pass the API change to module authors.
To paraphrase, don’t forget that excrement will inevitably lose its battle with gravity.
So what are we going to do?
Meat and Potatoes
This is where I don’t particularly have everything fleshed out, and I’m sure I will take a considerable amount of heat from people on API decisions that haven’t been made.
I want to export the following interfaces:
- Object creation and manipulation.
- Function calling and Error throwing.
- IO and event loop abstraction.
While I am not particularly attached to the names of these headers, each represents an interface that I think module authors would opt to target. I only feel strongly that we export platform as soon as possible, as those are the primary interactions for every module.
There are only a few principles:
- Avoid (like the plague) any scenario where we expose an ABI to module authors.
- Where possible use opaque handles and getter/setter functions.
- The exported API should be a reliable interface which authors can depend on working across releases.
- While a dependency may change its API, we have committed to our external API and need to provide a transitional interface in accordance with our deprecation policy.
- The API should never expose an implementation detail to module authors (A spidermonkey backed node one day?).
The platform interface is the easiest to discuss, but the pattern would be similar for the rest. It would simply re-export the existing uv API, but with a C-style namespace of node_. Any struct passing should be avoided, and libuv would need to be updated to reflect that.
I expect the js interface to be the most contentious, and also the most fraught. The interface for addon authors should be C. I don’t want to forsake the C++ folks, but I think the C++ binding should be built on top of our C interface.
I was going to describe my ideal interface, and frame it in context of my ruby and python experience. However, after a brief investigation, the JSAPI for spidermonkey exports almost exactly the API I had in mind. So read about that here.
Would it make sense, and would it be worth the effort, for node to export a JSAPI compatible interface?
Would it make more sense to export a JSAPI-influenced API currently targeted at v8, which could be trivially extended to also support spidermonkey?
It’s an interesting and worthwhile conversation to have, about being able to provide a backend neutral object model, though our current coupling to v8 and its usage in existing addons may not make it possible to entirely hide away the eccentricities of the v8 API. But what we can provide is an interface that is viable to target against from release to release regardless of how the public v8 API changes.
Part of that conversation was about v8+ which was written by a particularly talented coworker, who had a rather nasty experience writing for the existing C++ API (such as it is).
There’s some overlap in how it works and how I envisioned the new API. However, I’m not sure I’m particularly fond of automatically converting objects into nvlists, though that does solve some of the release and retain issues.
In general I would advocate opaque handles and getter and setter functions, with a helper API which could do that wholesale conversion for you.
Really though this matters less in a world where addon authors are following some defined “Best Practices”.
- Primitives would be things like:
Work on this needs to begin as soon as possible. We should be able to distribute it in npm, and authors should be able to target it by including a few headers in their source and adding a dependency stanza to their binding.gyp; by doing so, their module will work from v0.8 through v1.0.
I mean, you’re going to have to rewrite it anyway.
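The dependency stanza might look something like the following; the “node-compat” package name, target name, and paths are all invented here to illustrate the shape, not a real package (binding.gyp files use gyp’s Python-dict syntax).

```python
{
  'targets': [{
    'target_name': 'mymodule',
    'sources': ['src/mymodule.c'],
    # Hypothetical transitional-library dependency distributed via npm.
    'dependencies': ['node_modules/node-compat/compat.gyp:compat'],
    'include_dirs': ['node_modules/node-compat/include'],
  }]
}
```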
Discussion should happen on the mailing list on thread: https://groups.google.com/d/msg/nodejs/VlUJ68n6QBg/fPsuArtR0roJ
29 Apr 2011
Prior to today I had considered this site, while not terribly popular (or frequently updated), to be relatively quick with little effort on my part. I may not host many high traffic sites, but I have my personal sites and a handful of others. I have a server specifically to handle the HTTP traffic, and a server to handle RDBMS. They use the private network afforded to me by Linode.com (my VPS provider of choice) so communication between the two servers is quick and doesn’t count against my monthly bandwidth quota. It’s worked for years, so I’ve had little desire to muck with the formula.
That is until I learned about Linsides.com – a company that offers services specifically for Linodes over the private network.
The offering is young, but the service is delivered with slick ease. They currently offer NTP, APT Caching, and LinCached (a memcached frontend). The services are only available over the private network, so that means you have to be a Linode.com customer before you can take advantage. NTP and APT caching are offered for free and are conveniences to provide fast responses and to keep load on public mirrors low. That is, you get the same quality as if you were connecting to them publicly, but they’re generally delivered faster and don’t count against your monthly bandwidth quota.
LinCached is a private network memcached instance, which you can configure to be one of the following sizes: 32MB, 64MB, 96MB, or 128MB. Linsides uses a prepaid credit system for managing payments. Each size of memcache instance costs a certain amount of credits per day: a 32MB instance is 1 credit, a 64MB instance is 2 credits, and so on. When you sign up you get 5 free credits, so you can get a free 5-day trial of a 32MB instance. There’s a dashboard that lets you see how many instances you have, what their sizes are, and your current usage on each instance. If you drill down further you can even see a snazzy progress bar that gives you a visual way to identify how much of your instance you’re using. You can even quickly flush a specific instance.
After you’ve created your LinCached instance, you simply need to add the private IP address of the node that you want to grant access to and boom, you’re done. All in all it took about 10 or 15 seconds of simple data entry on Linsides.com’s clean site to add a new instance, and it was instantly available to me. I made the necessary changes so this site would take advantage of memcache, and just as instantly I started to see the usage appear on the Linsides dashboard.
Simple, dead easy. A perfect no hassle way for me to increase performance of my site in under 15 minutes (realistically under a minute).
Now I’m perfectly capable of running my own memcached instances. But what’s key here is that it’s not using memory on my web servers or my database server, and I didn’t have to spin up yet another node to achieve that. Pricing is affordable as well; credits come in packages that range from $10 for 100, to $300 for 6K. They are also planning on offering more services in the future. I’m excited!