17 January, 2016
Eliot Miranda writes:
.. I expect this is the year [the threaded FFI] will be [production ready]. Spur provides pinning, so the VM infrastructure is there. The Pharo community plus some commercial relationships that have developed are providing funding. Esteban Lorenzano and I want to collaborate on this and I hope to get help from some other people, such as Ronie Salgado. And Mariano is working on an important part of the problem. So I feel there’s sufficient momentum for us to realize the threaded FFI this year.
.. and when Craig Latta tried to use it late last year it worked up to a point. The thing that didn’t work was callbacks from foreign threads. So it looks like the core threading code is not too far away from working.
Another really important part, bigger than threading, is marshaling. Being able to handle the full x86_64 ABI requires a better approach than interpreting type signatures. Igor’s NativeBoost gave an example of how to generate marshaling machine code, but alas only for x86. But Sista includes an extensible bytecode set for arbitrary instructions. Sista is close to production, and we know the bytecode set works. So the plan is to use these bytecodes to do the marshaling. That neatly solves the problems of a) associating marshaling machine code with a method and b) marshaling in an interpreted stack VM, since the bytecode set works in any Cog VM. So the plan is to write an ABI compiler from C signatures to marshaling code, replacing the interpreted FFI plugin.
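To give a flavor of why x86_64 marshaling is harder than interpreting type signatures one call at a time, here is a minimal Python sketch of the argument classification an ABI compiler has to perform under the System V AMD64 convention. This is only illustrative, not the planned Sista-bytecode approach: the function and register tables are assumptions for the example, and struct classification, varargs, and return values are omitted.

```python
# Scalar argument classification under the System V x86_64 calling
# convention: integer/pointer arguments go in the first six integer
# registers, floating-point arguments in the first eight SSE registers,
# and any overflow spills to 8-byte stack slots.

INT_REGS = ["rdi", "rsi", "rdx", "rcx", "r8", "r9"]
SSE_REGS = ["xmm0", "xmm1", "xmm2", "xmm3",
            "xmm4", "xmm5", "xmm6", "xmm7"]

def classify_args(c_types):
    """Map each scalar C parameter type to a register or stack slot."""
    locations = []
    next_int, next_sse, stack_offset = 0, 0, 0
    for t in c_types:
        if t in ("float", "double"):
            if next_sse < len(SSE_REGS):
                locations.append(SSE_REGS[next_sse])
                next_sse += 1
            else:
                locations.append(f"stack+{stack_offset}")
                stack_offset += 8
        else:  # integer and pointer types
            if next_int < len(INT_REGS):
                locations.append(INT_REGS[next_int])
                next_int += 1
            else:
                locations.append(f"stack+{stack_offset}")
                stack_offset += 8
    return locations

# classify_args(["int", "double", "char *", "float"])
# -> ["rdi", "xmm0", "rsi", "xmm1"]
```

Note how integer and floating-point arguments advance independent register counters; a marshaler that simply walks a signature left to right, assigning slots in order, gets this wrong, which is why compiled per-signature marshaling code beats a generic interpreter here.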
So this year I hope we will have an excellent high-performance FFI.
19 January, 2014
Clément Béra just posted an excellent article explaining the new Spur Object format. Definitely worth a read!
Find out more about the Squeak VM called Cog and the new memory manager called Spur at Eliot’s Cog Blog.
5 October, 2013
From Eliot: Clément Béra is, amongst other things, working on Cog performance, looking at adaptive optimization/speculative inlining (Sista, for Speculative Inlining Smalltalk Architecture, in Cog).
I just wanted to welcome Clément Béra and to say thank you to everyone working on the VM for both communities, you know who you are (and we do too), and especially Eliot for working on Cog and keeping the advancements coming! Hip Hip … and all that.
14 September, 2013
I’ve just published a blog post on lazy become and the partial read barrier in Spur. I’d really appreciate criticism, preferably as comments on the blog page (Eliot’s blog not this one, please follow the link to comment). This is one of the riskier parts of the design so it really does need to be pounded on.
5 September, 2013
Moving the Squeak GC forward in Cog.
8 July, 2010
If you’ve been following Eliot’s blog, you’ll know that he’s been working on this new VM for quite a few months now; well, it’s now ready for public consumption, and it’s blisteringly fast: up to three times faster than the existing VMs.
The VM selectively recompiles code to native (Intel) machine code, based on the size and complexity of the methods and how often they’re called. This means that the benefits of the new VM vary from task to task, but Andreas Raab estimates that you should expect a 2-3x performance improvement generally, “more towards 2x when running primitive and I/O-bound stuff; more towards 3x when running ‘pure’ Smalltalk code”.
Eliot is interested in hearing from developers on other platforms who want to port the new VM to those platforms. In the meantime, he has also released the “Stack VM”, a cross-platform interpreter that uses context-to-stack mapping to achieve more modest performance gains.
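The idea behind context-to-stack mapping can be sketched roughly as follows: method activations live on a fast native-style stack, and full Context objects are only materialized when the program actually asks for them (for example via thisContext or the debugger). This is a minimal illustrative sketch, not the real Stack VM (which is written in Slang/C); the class and field names here are invented for the example.

```python
# Illustrative sketch: a stack frame that allocates a heap Context
# object lazily. Most message sends never request their context, so
# most activations never pay for a heap allocation -- that is the
# source of the Stack VM's speedup over the classic interpreter.

class StackFrame:
    def __init__(self, method, receiver, args):
        self.method = method
        self.receiver = receiver
        self.args = args
        self._context = None        # no heap Context allocated yet

    def context(self):
        """Materialize a heap Context for this frame on demand."""
        if self._context is None:
            self._context = {
                "method": self.method,
                "receiver": self.receiver,
                "args": list(self.args),
            }
        return self._context

frame = StackFrame("printString", receiver=42, args=[])
# frame._context stays None until someone calls frame.context();
# repeated calls return the same materialized object.
```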
See Eliot’s original post and the following discussion for more details of the new VM, some notes of caution, and how to get your hands on it and use it.