From Chris Muller:

I am pleased to announce version 1.5 of Magma for Squeak 5, now available on SqueakMap.  Magma allows multiple Squeak images to collaborate on a single, large object model, with the robustness and control expected from a database.  It offers the most transparent db access possible for Smalltalk, affording the user the ability to develop complex, performant designs, iteratively, on-the-fly.

It has been designed for “continuous flow” development, the way Smalltalkers like and expect to work.  For example, I could have connections open to three separate databases, transactions open in any of them, a restructured class hierarchy in the model, and be stepping through the debugger when that “final boarding call” for my flight is announced.

Thanks to the image, this scenario has never been a problem for Smalltalkers, and Magma is deliberate about ensuring this flow is maintained.  Once at 10K feet, I can resume stepping through that same debugger within 5 seconds of restarting the image, DB connections intact, and commit my transactions when I’m ready, done.  Magma handles every aspect of that use case correctly even in multi-user environments, and has so many safety and integrity features that it is the safest way to develop and keep a model in Squeak.
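
For flavour, here is a minimal sketch of a Magma session, adapted from memory of the Magma tutorial material; the selector names used (MagmaRepositoryController create:root:, MagmaSession openLocal:, connectAs:, commit:) are an assumption on my part and should be checked against the 1.5 package:

  "Minimal sketch only; verify selectors against the Magma 1.5 documentation."
  | session |
  MagmaRepositoryController create: 'myModel' root: Dictionary new.
  session := MagmaSession openLocal: 'myModel'.
  session connectAs: 'chris'.
  session commit: [ session root at: #note put: 'My first Magma object' ].
  session disconnect.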

This release coincides with the release of Squeak 5, and has many improvements and fixes over Magma 1.4.  Detailed notes about these improvements are available at http://wiki.squeak.org/squeak/6209.

– Chris

Squeak 5 is out!

12 August, 2015

From Chris Muller:

In the 17 months since Squeak 4.5 was released, a huge development effort took place to create the next-generation virtual machine for the Squeak / Pharo / Newspeak family of programming systems.  Squeak is the modern incarnation of the Smalltalk-80 programming environment originally developed at Xerox PARC.

“Squeak 5” introduces this new VM and associated new memory model, collectively referred to as “Spur”.  Presented [1] by Eliot Miranda and Clément Béra at the 2015 International Symposium on Memory Management, this new VM affords Squeak applications a significant boost in performance and memory management.  Among other optimizations, the #become operation no longer requires a memory scan.
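
For readers who have not used it, become: swaps the identities of two objects so that every existing reference to one afterwards refers to the other; a tiny workspace illustration (standard Squeak, nothing Spur-specific in the code itself):

  | a b |
  a := 'first' copy.
  b := 'second' copy.
  a become: b.
  "All references that pointed at the old a now point at the old b and
   vice versa; under Spur this no longer forces a scan of the whole heap."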

Object pinning and ephemerons are also now supported.  The release notes [2] provide more details.

The new memory model requires a new image file format.  Although the new format increases the memory requirement by about 15% for the same set of 4.x objects, a new segmented heap allows memory to be given back to the OS when it’s no longer needed, a great benefit for application servers.

As forward compatibility is as important to the Squeak community as backward compatibility, Squeak 5 delivers an image whose content is identical to the recent 4.6 release.  Although the new Squeak 5 VM cannot open images saved in the prior 4.x Cog format, objects and code can be easily exported from a 4.x image and then imported into Squeak 5.  Applications whose code runs strictly above the Smalltalk meta layer will prove remarkably compatible with the new format; most will require no changes whatsoever.
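
One possible route, sketched here as an assumption rather than an official migration recipe, is to move code with Monticello packages and object graphs with a ReferenceStream, which serializes objects independently of the image snapshot format (myRootObject below is a placeholder for your own data):

  "In the 4.x image: serialize an object graph to a file."
  | out |
  out := ReferenceStream fileNamed: 'model.obj'.
  out nextPut: myRootObject.
  out close.

  "In the Squeak 5 image: read it back."
  | in model |
  in := ReferenceStream fileNamed: 'model.obj'.
  model := in next.
  in close.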

Squeak 5 is the result of a monumental effort by a tiny group of very talented people, but it’s also just the beginning of a new effort; Spur is a stepping stone toward more ambitious goals planned over the next five years.

[1] A Partial Read Barrier for Efficient Support of Live Object-oriented Programming
http://conf.researchr.org/event/ismm-2015/ismm-2015-papers-a-partial-read-barrier-for-efficient-support-of-live-object-oriented-programming

[2] Squeak 5 Release Notes
http://wiki.squeak.org/squeak/6207

http://www.dynamic-languages-symposium.org/

—————————–

C A L L   F O R   P A P E R S

—————————–

======== DLS 2015 ===========

11th Dynamic Languages Symposium 2015

October, 2015

Pittsburgh, Pennsylvania, United States

http://DLS2015.inria.fr

Co-located with SPLASH 2015

In association with ACM SIGPLAN

The 11th Dynamic Languages Symposium (DLS) at SPLASH 2015 is the premier forum for researchers and practitioners to share knowledge and research on dynamic languages, their implementation, and applications. The influence of dynamic languages — from Lisp to Smalltalk to Python to JavaScript — on real-world practice and research continues to grow.

DLS 2015 invites high quality papers reporting original research, innovative contributions, or experience related to dynamic languages, their implementation, and applications. Accepted papers will be published in the ACM Digital Library, and freely available for 2 weeks before and after the event itself.  Areas of interest include but are not limited to:

Innovative language features and implementation techniques

Development and platform support, tools

Interesting applications

Domain-oriented programming

Very late binding, dynamic composition, and run-time adaptation

Reflection and meta-programming

Software evolution

Language symbiosis and multi-paradigm languages

Dynamic optimization

Hardware support

Experience reports and case studies

Educational approaches and perspectives

Semantics of dynamic languages

== Invited Speaker ==

DLS is pleased to announce a talk by the following invited speaker:

Eelco Visser: Declare your Language.

== Submissions and proceedings ==

Submissions should not have been published previously nor be under review at other events. Research papers should describe work that advances the current state of the art. Experience papers should be of broad interest and should describe insights gained from substantive practical applications. The program committee will evaluate each contributed paper based on its relevance, significance, clarity, length, and originality.

Papers are to be submitted electronically at http://www.easychair.org/conferences?conf=dls15 in PDF format. Submissions must be in the ACM format (see http://www.sigplan.org/authorInformation.htm) and not exceed 12 pages. Authors are reminded that brevity is a virtue.

DLS 2015 will run a two-phase reviewing process to help authors make their final papers the best that they can be. After the first round of reviews, papers will be rejected, conditionally accepted, or unconditionally accepted. Conditionally accepted papers will be given a list of issues raised by reviewers. Authors will then submit a revised version of the paper with a cover letter explaining how they have or why they have not addressed these issues. The reviewers will then consider the cover letter and revised paper and recommend final acceptance or rejection.

Accepted papers will be published in the ACM Digital Library.

Important dates

Abstract Submissions: Sun 7 Jun 2015

Full Submissions: Sun 15 Jun 2015

First phase notification: Mon 27 Jul

Revisions due: Mon 3 Aug

Final notification: Mon 17 Aug

Camera ready: Fri 21 Aug

Program chair

Manuel Serrano, Inria Sophia-Antipolis,

dls15@easychair.org

Program committee

Carl Friedrich Bolz, DE

William R. Cook, UTexas, USA

Jonathan Edwards, MIT, USA

John Field, Google, USA

Matt Flatt, USA

Elisa Gonzalez Boix, Vrije Universiteit, BE

Robert Hirschfeld, Hasso-Plattner-Institut Potsdam, DE

Benjamin Livshits, Microsoft, USA

Crista Lopes, UC Irvine, USA

Kevin Millikin, Google, DK

James Noble, Victoria University of Wellington, NZ

Manuel Serrano, Inria, FR (General chair)

Didier Verna, EPITA, FR

Jan Vitek, Purdue, USA

Joe Politz, Brown University, USA

Olivier Tardieu, IBM, USA

Robert Hirschfeld

hirschfeld@acm.org

www.hirschfeld.org

ESUG 2015

It’s that time again.  For more information see this link: ESUG Conference 2015 in Italy.

Spur in 64!

20 November, 2014

From Eliot Miranda:

Hi All,

I’m pleased to say that today the simulator got as far as redrawing the
entire display and finishing the start-up sequence for a bootstrapped
64-bit Spur image. That means it correctly executed over 26 million
bytecodes. So at least a 64-bit Spur Stack VM is not too far off.

best,
Eliot

Pyonkee (Scratch on iPad)

28 August, 2014

From Masashi-san:

Hi all,

 

I have just released a Scratch clone running on iPad. It is based on Scratch 1.4 from the MIT Media Laboratory.

The app is now called “Pyonkee” – freely available on App Store.

https://itunes.apple.com/us/app/pyonkee/id905012686

 

Pyonkee was originally started as a fork of John M McIntosh’s Scratch Viewer.

https://github.com/johnmci/Scratch.app.for.iOS.

 

While Scratch Viewer works only as a viewer of existing Scratch projects, Pyonkee supports creating and editing projects.

 

Other features:

– User interfaces are optimized for iPad

– Native font support

– Embedded camera support

– IME support

– Auto-saving project

– Sending projects via e-mail

– Project import/export through iTunes (currently disabled)

 

Moreover, the source code is open on GitHub. Feel free to fork it.

https://github.com/SoftUmeYa/Pyonkee

 

Enjoy!

[:masashi | ^umezawa]

First steps

 

(baby steps and giant leaps!)

From Eliot Miranda:

Hi All,

it gives me great pleasure to let you know that a spur-format trunk
Squeak image is finally available at
http://www.mirandabanda.org/files/Cog/SpurImages/. Spur VMs are available
at http://www.mirandabanda.org/files/Cog/VM/VM.r2987/.

Spur is a new object representation and garbage collector for
Squeak/Pharo/Croquet/Newspeak.

Features
The object representation is significantly simpler than the existing one,
and hence permits a lot of JIT optimizations, in particular allocating
objects in machine code. This speeds up new, new: et al, but also speeds
up blocks because contexts and closures are now allocated in machine code.
It also provides immediate characters, so for example accessing wide
strings is much faster in Spur, since characters do not have to be
instantiated to represent characters with codes greater than 255.
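
A quick way to see this from a workspace is to time repeated access to a WideString; with the pre-Spur representation each at: had to instantiate a Character for codes above 255, whereas under Spur the character is an immediate value (timings are illustrative only):

  | ws |
  ws := WideString with: (Character value: 955) with: (Character value: 960).
  Time millisecondsToRun: [ 1 to: 1000000 do: [ :i | ws at: 1 ] ].
  "955 and 960 are Greek lambda and pi; under Spur, at: answers an
   immediate Character, so the loop allocates nothing."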

The garbage collector has a scavenger and a global scan-mark-compact
collector. The scavenger is significantly faster than the existing
pointer-reversal scan-mark-compact, hence GC performance is much improved.

The memory manager manages old space as a sequence of segments, as opposed
to the single contiguous space provided by the existing memory manager.
The memory manager grows the heap a segment at a time, and can and will
release empty segments back to the host OS after a full GC. Hence Spur is
able to grow the heap to the limit of available memory without one having
to specify the VM’s memory size at start-up.
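
Nothing new is needed at the image level to benefit from this; an explicit full collection remains the familiar one-liner:

  Smalltalk garbageCollect.
  "Runs a full GC; under Spur this is also the point at which old-space
   segments that have become empty can be returned to the operating system."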

The object representation uses “lazy forwarding” to implement become:,
creating copies of objects that are becommed, and forwarding the existing
objects to the copies. While Spur still scans the stack zone on become to
ensure no forwarding pointers to the receiver exist in stack frames (for
check-free push and store instance variable operations), it does not scan
the entire heap, catching sends to forwarded objects as part of the normal
message send class checks, hence following forwarding pointers lazily, and
eliminating forwarders during GC. The existing memory manager does a full
memory sweep and compact to implement become. Hence Spur provides the
performance advantages of direct pointers while providing a significantly
faster become.
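
The one-way variant, becomeForward:, benefits in the same way: instead of a full-heap sweep, Spur just installs a forwarder. A minimal workspace illustration:

  | old new |
  old := OrderedCollection new.
  new := OrderedCollection withAll: #(1 2 3).
  old becomeForward: new.
  "Every reference to the old collection now reaches the new one; under Spur
   the forwarder is followed lazily on message send and removed by the GC."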

While Spur uses moving GC (scavenging and compaction on full GC), just like
the existing memory manager, Spur supports pinning, the ability to stop an
object from moving. Old space objects will not be moved if pinned.
Attempting to pin a new space object causes a become, forwarding the new
space object to a pinned copy in old space. This allows simpler
interfacing with foreign code through the FFI, since one can hand out
references to pinned objects in the knowledge that they will not be moved
by the GC.
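
As a sketch of how this might look from the image side; the selectors used below (pin and unpin on Object) are an assumption on my part and may not match the final image-level API:

  "Assumed selectors; the pinning primitives are what Spur provides,
   the exact image-level messages may differ."
  | buffer |
  buffer := ByteArray new: 1024.
  buffer pin.    "ask the VM not to move this object; a new-space object
                  would first be forwarded to a pinned old-space copy"
  "... pass buffer to foreign code via the FFI ..."
  buffer unpin.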

Finally Spur supports ephemerons in a simple and direct way, providing
pre-mortem per-instance finalization. Although the image-level support
needs to be written, it should soon be possible to improve the finalization
of entities such as buffered files (ensuring they are flushed before being
GCed), etc.

Future Work
Spur is as yet a work in progress. The 32-bit implementation is usable and
appears stable. The major missing component is an incremental scan-mark GC
that should eliminate long pauses due to the global scan-mark-compact GC
(which is still invoked at snapshot time). I hope to start on this soon.
But another key facet of Spur is that the object header format and the
sizes of objects are common between 32- and 64-bits. In 32-bit and 64-bit
Spur, object bodies are multiples of 8 bytes, so there may be an unused
slot at the end of a 32-bit object with an odd number of slots. Hence Spur
is close to providing a “true” 64-bit system, one with 61-bit
SmallIntegers, and 61-bit SmallFloats (objects with the same precision, but
less range than 64-bit Float, done by stealing bits from the exponent
field). I look forward to collaborating with Esteban Lorenzano on 64-bit
Spur and hope that it will be available early next year.
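
Concretely, with three tag bits reserved, a 64-bit Spur SmallInteger carries 61 bits of signed value, so one would expect (assuming the class-side constants are maintained as today):

  SmallInteger maxVal = ((2 raisedTo: 60) - 1).
  "expected to answer true on 64-bit Spur: 1152921504606846975"
  SmallInteger minVal = (2 raisedTo: 60) negated.
  "expected to answer true on 64-bit Spur: -1152921504606846976"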

Experience
I am of course interested in reports of performance effects. Under
certain, hopefully rare circumstances, Spur may actually be slower (one is
when the number of processes involved in process switching exceeds the
number of stack pages in the stack zone). But my limited experience is
that Spur is significantly faster than the existing VM. Please post
experiences, both positive and negative.

Finally, caveat emptor! This is alpha code. Bugs may result in image
corruption. If you do use Spur, please try and back up your work just in
case. And if anything does go wrong please let me know, preferably
providing a reproducible case.

Enjoy!
Eliot Miranda