The holidays may be over, but the presents are still arriving from the UK Smalltalk User Group! Videos of previous presentations have been released over the past month, covering a variety of interesting topics – a total of 57 as of this writing! Be sure to check out their new YouTube channel at https://www.youtube.com/@UKSTUG. Also, be sure to visit their homepage at https://www.uksmalltalk.org/, and if you would like to attend any meetings, their Meetup site can be found at https://www.meetup.com/ukstug/.
Have a great time with Smalltalk and keep on Squeaking!
This article kicks off a series designed to introduce programming to beginners through the creation of a simplified space shooter game using Squeak, an open-source implementation of the Smalltalk programming language. By breaking down the process into a series of manageable lessons, the aim is to provide an accessible and interactive entry point into the world of programming. This series will guide learners through the process of building a game from the ground up while introducing them to essential programming concepts along the way. Upon completion, and with the accompanying resources such as source code, images, and a sound file, students will have everything they need to recreate or enhance the game.
Who This Series is For
The intended audience for this article and the accompanying series includes both young learners and adults who are new to coding or to Smalltalk. Whether you are a student, educator, or hobbyist, these lessons are tailored to make programming approachable and enjoyable. Throughout the series, we will dive into key programming principles – such as object-oriented design, the concept of “Morphs” in Squeak, and more – at a beginner-friendly pace. Each lesson is supplemented with practical examples. If you are interested in starting with something fun and educational, and you are curious about how games are made or how Smalltalk can be used in a modern development environment, this series is for you.
Why a Game, and Why Squeak/Smalltalk?
So why a game, and why in Squeak/Smalltalk? Why not? Everybody understands the domain of games, or gaming. That is, the mechanics, interactions, and goals – such as managing player input, controlling game characters, defining win/loss conditions, and creating interactive environments. Games are a universal medium that spans ages and cultures, from simple board games to complex video games. This familiarity makes games an excellent starting point for teaching programming concepts, as the mechanics and goals are intuitive to most people. Moreover, games require a variety of programming elements to function, such as managing user input, handling graphics, implementing game rules, designing levels, and even creating sound and music. These components provide a rich environment for introducing and practicing key programming skills – like decision-making, loops, object-oriented design, and event handling – while keeping learners engaged with a fun, practical outcome.
Why Squeak/Smalltalk is the Right Tool
Squeak makes all of this very easy, and you do not need to learn anything more than Smalltalk to get started. The language is intuitive and enjoyable to use, and Squeak provides a powerful and engaging development environment. While games may not balance your bank account, they can serve as an excellent resource for learning programming or a new language. Squeak/Smalltalk, in particular, makes this process both accessible and enjoyable.
How This Series Came to Be
This article introduces a series of lessons based on a simplified version of a space shooter game I developed using Squeak. The project, which was both fun to create and my first full Squeak program, turned out to be an ideal way to teach my middle school-aged son Smalltalk, as well as the fundamentals of object-oriented programming (OOP). Although he had prior programming experience, he quickly grasped the language due to its simplicity and the interactive nature of the environment. The natural syntax of Smalltalk, combined with the user-friendly environment, not only made it easier for him to solve problems but also allowed him to be more creative, without needing to alter his natural problem-solving approach. Both the language and the environment “got out of his way,” enabling him to express himself more easily and effectively.
What Will Be Covered in the Series
The lessons in this series are designed to introduce key programming concepts – such as classes, methods, debugging, and user interface design – through the process of building and modifying a game. You can find the complete list of lessons on the Shooter Game site. Each lesson was written daily, and I discussed topics both before and during the lesson to provide additional detail and ensure a deeper understanding. This approach allowed me to offer relevant explanations exactly when they were needed, based on my son’s progress and evolving needs.
For each article, I will aim to provide additional helpful information that may not be included in the lessons themselves. If you feel any details are missing or could be useful, please let me know, and I would be happy to provide further information.
Interactive, Live Coding Approach
The hands-on, live coding approach in these lessons encourages learners to experiment and learn in real time, making the process both educational and enjoyable. Each lesson also comes with a downloadable PDF version of the lesson page for easy offline reference or printing. The entire series progressively builds upon itself, allowing learners to gradually develop their programming skills as they move through the lessons.
Lesson 1: Creating and Positioning Morphs
For today’s lesson, we will dive into the first step in building our game: “Creating and Positioning Morphs.” In this lesson, we will introduce the concept of a Morph and explore how to create and manipulate these visual objects within the Squeak environment. Understanding Morphs is a crucial part of game development in Squeak, as they serve as the foundation for all the interactive elements of the game.
What is a Morph?
In Squeak, a Morph is an interactive graphical object. Like everything in Smalltalk, a Morph can be interacted with through messages. It is not a static, lifeless image on the screen, but rather a lively object waiting to interact with its environment. You can send a message to a Morph to receive information about it or to perform an operation. Everything you see when running Squeak is a Morph object. This opens up some very exciting capabilities: you can create graphical objects that interact with the world (the entire Squeak environment – its display screen, in this case), and the world can interact with them.
The Squeak world works using a coordinate system. Coordinate values can be absolute or relative. Each location is represented as a point, which has an X coordinate and a Y coordinate. For example, in a Squeak world (remember, this will be the full size of the display screen in the Squeak environment) with a display size of 1024×768:
The point 0@0 is the top-left corner of the screen.
The point 0@768 is the lower-left corner of the screen.
The point 1024@768 is the lower-right corner of the screen.
The point 1024@0 is the top-right corner of the screen.
Every point in between represents a location within the world. Points can exist outside of that world too; however, they would not be visible.
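To make this concrete, here is a small example you can evaluate line by line in a Workspace (“do it” each line, or “print it” on the last two); all of the messages used are standard Morph protocol:

```smalltalk
"Create a morph, place it in the world using points, and query it."
| box |
box := Morph new.
box color: Color red.
box extent: 80@40.      "width @ height, in pixels"
box position: 100@50.   "top-left corner at x=100, y=50"
box openInWorld.
box position.           "answers 100@50"
box center.             "answers 140@70, the middle of the morph"
```

Note that the morph answers questions about itself (#position, #center) with points, the same objects you used to place it.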
In this game, everything is a subclass of Morph, so you will be using Morphs a lot.
Before beginning Lesson 1, it would be very helpful to read Chapter 1 of the Squeak By Example book (available as a free PDF, SBE-6.0.pdf), an excellent resource for learning Squeak and understanding its environment. Afterward, you can go straight to the lesson here.
Additional Resources
To access the full series of lessons and resources for the space shooter game, Shooter Game, visit the lesson site at https://scottgibson.site/ShooterGame/. There, you will find the complete set of lessons and their associated PDFs, along with the source code, images, the sound file, and other useful resources. You can even play the game directly in your browser (using the awesome SqueakJS!). Whether you are a beginner or looking to learn more about Squeak/Smalltalk, its environment, and its tools, these resources will guide you through each step and provide everything you need to recreate or enhance the game.
In recent months, Craig Latta has been exploring the integration of AI language models, particularly GPT-4, into Smalltalk development environments, with the goal of enhancing workflows and improving conversational interactions with the language model. Rather than relying on traditional coding methods, Craig has focused on fine-tuning the model using English system prompts, with promising results. This approach aims to guide the AI’s behavior through prompt evolution, rather than direct coding, leading to surprisingly effective outcomes.
One of the key insights from this experimentation is that GPT-4’s pre-existing knowledge of Smalltalk and Squeak provides a solid foundation for further fine-tuning. By applying constraints to system prompts – such as instructing the model to avoid sending messages to access objects when direct access via instance variables is possible – Craig has steadily refined the AI’s responses.
Conversing with the Language Model in Squeak
A central goal of the project was to enable conversations with the AI model from within any text pane in Squeak, utilizing Smalltalk’s classic “do it,” “print it,” and “inspect it” functionalities. To achieve this, Craig modified the Compiler>>evaluateCue:ifFail: method to handle UndeclaredVariable exceptions. When an undeclared variable (e.g., “What” in “What went wrong?”) triggers an exception, the model object underlying the text pane takes over, interpreting the next chat completion from the language model.
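To make the mechanism easier to picture, here is a rough, hypothetical sketch of the idea. Apart from Compiler>>evaluateCue:ifFail: and the UndeclaredVariable exception mentioned above, every selector in it is invented for illustration; this is not Craig’s actual code:

```smalltalk
"Hypothetical sketch only: when evaluation hits an undeclared variable,
treat the text being evaluated as an English prompt and hand it to the
language model. #basicEvaluateCue:ifFail:, #model, and
#completeChatFrom: are made-up selectors."
evaluateCue: aCue ifFail: failBlock
	^ [self basicEvaluateCue: aCue ifFail: failBlock]
		on: UndeclaredVariable
		do: [:ex | aCue requestor model completeChatFrom: aCue source]
```

The point of the pattern is that ordinary Smalltalk expressions still compile and run as before; only text that fails as Smalltalk falls through to the model.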
The model objects Craig has focused on are instances of Smalltalk’s Debugger and Inspector. A notable feature of this approach is that it logs all interactions – whether English prompts or Smalltalk code – into the changes log, just as it would with traditional code. Each model object maintains a reference to the most recent chat completion, allowing successive prompts to be interpreted in the context of the entire conversation, fostering dynamic and evolving dialogues.
Practical Results and Application
The system has already shown practical value in a live development environment. For example, when evaluating a prompt like “What went wrong?” in the debugger, the language model provides surprisingly accurate and detailed responses, as if debugging the code itself. Similarly, when tasked with generating Smalltalk code – such as instructions for selecting the most recent context with a BlockClosure receiver – the debugger manipulates the code correctly, demonstrating the viability of using the AI in real-time development workflows.
Future Exploration
Currently, Craig is expanding the scope of the AI’s functionality. He is experimenting with prompts that describe the application’s domain, purpose, and user interface, eager to see how the model can assist in these areas. This exploration has the potential to significantly improve Smalltalk development workflows by integrating AI-driven conversational interactions into the process.
As AI models continue to evolve, Craig anticipates further advancements in integrating these models with development environments, offering new opportunities to enhance productivity, troubleshooting, and even code generation. The future of AI-enhanced Smalltalk development looks promising, and this project represents an exciting first step. For more details on Craig’s exploration of AI in Smalltalk development, be sure to check out his full blog post at thisContext.
Growing Interest in AI for Squeak Development
There has been growing interest in applying both semantic understanding and generative AI to the Squeak environment, as these technologies hold great promise for improving development workflows. By combining the ability to generate code with deeper understanding and reasoning, developers are beginning to explore new ways AI can assist with both writing and debugging Smalltalk code. This integration could help streamline development, enhance debugging efficiency, and even assist in explaining complex code patterns, all within the context of Squeak’s unique environment. Or, it could take over the world. You decide. As interest continues to grow, a future article and video from a recent demonstration will explore these advancements in more detail, offering insights into how AI-driven approaches can enhance Smalltalk programming and provide practical demonstrations of these evolving tools.
Have a great time with Smalltalk and keep on Squeaking!
When developing complex systems, encountering errors that leave your project in a frozen state can bring much sadness and consternation. For developers working with Squeak/Smalltalk, a frozen image refers to a situation where the system becomes unresponsive. This can happen due to various issues, including deadlocks, infinite loops, or recursive operations in Smalltalk code, which can cause the VM to freeze, hang, or lock up.
A recent discussion in the Squeak community highlighted how one such incident, caused by a stuck semaphore during an image load, turned into an opportunity for both recovery and valuable practices in safeguarding against future mishaps. You can explore the full conversation in the Squeak-dev mailing list thread (here).
The Problem: Semaphore Deadlock in the Image File
In this case, a developer encountered an issue while attempting to draw in multiple background threads, using a chain of semaphores to manage the UI thread. After the developer resolved an error in one of the background threads, the drawing process was resumed. On the next launch, however, the image failed to load. Instead of opening, the image became unresponsive and effectively “frozen.” It had likely become stuck due to a semaphore waiting for a signal that would never come. This left the image unresponsive and effectively unusable – or so it would have seemed.
Recovery Options: DoItFirst and Debugging
The good news is that there are several options available for recovering from such failures.
For developers dealing with frozen images at startup, Squeak provides a mechanism called DoItFirst. This is a class that is processed at the very beginning of the image startup sequence, allowing developers to inject custom code or actions before the main system fully loads. By using command-line arguments like --debug, it is possible to force the image to enter the debugger at the earliest point in the startup process, allowing issues to be diagnosed and fixed before the image fully loads.
Additionally, for situations where the issue is well-understood, the --doit option allows Smalltalk expressions to be passed and executed before any problematic behavior occurs, giving developers a chance to resolve issues before they impact the image during loading.
Another option is to open the Virtual Machine (VM) in a debugger. In the specific case of a stuck semaphore, you can set a breakpoint in interpret. Run the VM until it reaches interpret a second time (the first invocation does some initialization). At this point, you can use printAllProcesses to locate the stuck process. This should display the semaphore the process is waiting on. Once identified, you can invoke synchronousSignal with that semaphore to unstick it and restore functionality without losing significant progress. While this technique can specifically handle a stuck semaphore, it could also be helpful for other cases of system hangs or unresponsiveness.
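When the stuck semaphore can be identified, the fix itself is a single message send. Here is a hedged sketch of the kind of expression one might pass with --doit; the detection is deliberately crude and purely illustrative, and in the VM-debugger route described above, synchronousSignal plays the same role:

```smalltalk
"Hypothetical recovery sketch: signal every semaphore that has
processes waiting on it. A Semaphore is a LinkedList of its waiting
processes, so #isEmpty answers whether anything is blocked on it.
Use with care: signaling the wrong semaphore can break invariants
the code relying on it expects."
Semaphore allInstances do: [:sem |
	sem isEmpty ifFalse: [sem signal]]
```

In a real incident you would first narrow the search to the specific semaphore involved (for example, by inspecting the waiting processes) rather than signaling them all.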
The Importance of Backups and Versioning
While debugging and recovery tools are invaluable, it is clear that prevention is equally important. Routine backups and effective versioning are essential practices that can mitigate the risks associated with frozen images and other failures. One practice that proved helpful in this case was maintaining regular system backups, which made it possible to quickly restore a previous version of the image with minimal data loss. However, more granular versioning, especially in environments with frequent risky changes, would provide an additional layer of safety.
Specifically, incremental versioning of the image can be a lifesaver when things go wrong. By saving a new version of the image each time a meaningful step in development is reached, developers can create a clear history of their work. This allows them to revert to earlier versions of the image if issues arise, minimizing the impact of failed changes and speeding up the recovery process. Incremental image saves help ensure that even if an image becomes frozen or corrupted, a previous, stable version can be restored with minimal effort.
The Robustness of Squeak: Multiple Recovery Options
One of the key takeaways from this experience is how robust Squeak is in terms of recovery options. Even when a frozen image occurs or the system becomes unresponsive, Squeak offers multiple layers of recovery mechanisms. Whether using DoItFirst to inject custom code during startup or interacting with the VM through a debugger to manually resolve issues with semaphores, there are a variety of ways to recover from errors and get back on track.
Additionally, Squeak’s development environment is designed to be highly flexible, encouraging experimentation while ensuring that recovery is always possible. One of the key advantages of Squeak is that all the code executed or saved during development is stored in the .changes file. This makes it possible to recover from almost any issue, as the system is built to retain all your code. Even if an image becomes frozen or corrupted, the code itself is never truly lost. With the resources mentioned in this article, developers can restore their environment and continue their work. In Squeak, code is safe; no matter what happens to the image, your code can always be recovered.
Conclusion
In conclusion, while Squeak’s development environment offers tools to recover from serious errors, the most effective defense is good preventive practice: maintain regular backups, version your work consistently, and always be mindful of the unexpected.
For more detailed information on using DoItFirst to debug and recover your Squeak image, visit the DoItFirst Swiki page (here). If you encounter issues like crashes or freezes in Squeak, there are various levels of potential failure to consider, and helpful guidance is available in the What to do if Squeak crashes or freezes resource (here), which offers insight into diagnosing and resolving different types of system freezes or crashes.
Have a great time with Smalltalk and keep on Squeaking!
In a recent email to the Squeak developers mailing list (here), Tim Rowledge shared insights from the Raspberry Pi team regarding beneficial tweaks to memory configuration in the firmware, specifically focusing on NUMA (Non-Uniform Memory Access). These updates are part of ongoing efforts to enhance SDRAM performance for both Raspberry Pi 4 and 5 models.
Performance Enhancements Explained
Recent testing revealed that the 8GB models sometimes performed worse than the 4GB models due to SDRAM self-refresh consuming bandwidth, especially since larger sizes require longer refresh times. Investigations showed that adjusting the SDRAM refresh interval could yield better results. Monitoring temperature indicated that a faster refresh rate was feasible, which helped reduce overhead. Micron confirmed that 8GB SDRAM could safely operate with 4GB refresh timings.
Ongoing tweaks to SDRAM and ARM settings have led to small but cumulative performance improvements, typically around 1% per update. A significant issue noted is the competition among multiple ARM cores accessing SDRAM, leading to inefficiencies when multiple pages in the same bank are accessed. Implementing NUMA can help manage this by splitting SDRAM into regions, allowing for more efficient allocation and improved performance in multi-core tasks.
Benchmarking Squeak Smalltalk
Tim tested these configurations on a Raspberry Pi 5 equipped with NVMe storage, running benchmarks from the Benchmark Shootout suite. The tests included:
nbody
binary trees
chameneos redux
thread ring
Results of the NUMA Configuration
The performance comparisons between a NUMA-configured Raspberry Pi 5 and a standard setup yielded notable results:
nbody: Improved from 5.157 seconds to 5.095 seconds (1.2% improvement)
binary trees: Improved from 3.398 seconds to 3.096 seconds (8.9% improvement)
chameneos redux: Improved from 7.274 seconds to 5.239 seconds (28% improvement)
thread ring: Improved from 8.347 seconds to 7.783 seconds (6.7% improvement)
These findings indicate that even minor adjustments in RAM timings can lead to substantial performance gains, particularly highlighted in the chameneos redux benchmark.
Conclusion
Tim Rowledge’s testing of the Raspberry Pi team’s memory configuration tweaks has revealed valuable performance gains. By implementing NUMA and optimizing SDRAM settings, Raspberry Pi users can unlock significant benefits. These small adjustments can lead to meaningful improvements, making the Raspberry Pi an even more effective tool for development and education. You can find the Raspberry Pi team’s recommended tweaks here. Additionally, you can explore the bug report thread detailing the testing and findings here.
Have a great time with Smalltalk and keep on Squeaking!
Sandblocks is a projectional, block-based programming environment written in Squeak/Smalltalk.
Projectional editors are promising for tasks like language composition and domain-specific projections. Effective user interaction requires clear communication of program structure and supported editing operations. While making the abstract syntax tree visible can enhance clarity, it often leads to increased space usage and potential usability issues. Sandblocks is an early prototype of a tree-oriented projectional editor for Squeak/Smalltalk, which aims to minimize space while clearly visualizing the tree structure.
For more information on projectional editing, you can start by reading Martin Fowlerâs explanation (here), in which he describes it as an alternative to source editing.
Give it a Try!
You can find the simple installation instructions on the project page here. The page specifically mentions Squeak 5.3, but it seems to work similarly well with Squeak 6.0. Sandblocks is a research prototype, so be sure to save often while working with it.
Have a great time with Smalltalk and keep on Squeaking!
A “Roguelike” game is a sub-genre of RPGs, named after the classic 1980 game “Rogue.” It is defined by features such as dungeon crawling through procedurally generated levels, turn-based gameplay, grid-based movement, and the permanent death of the player character. Roguelikes have evolved over time, inspiring numerous variations and modern interpretations, often referred to as “roguelites,” which may incorporate elements like permanent upgrades or less punishing death mechanics.
What led you to use Squeak to develop a game? How is Roguerrants different from something you would have created using another programming language?
I have been working with Squeak for the last twenty years. I just could not work with anything else. I’ve been spoiled.
I first came to Squeak to port GeoMaestro, a system for musical composition based on geometrical structures that I had made in the KeyKit MIDI environment. KeyKit has classes, and it was there that I first met object-oriented programming.
Someone from the csound-dev list, I think, told me Squeak would be a good fit for me, and this is the best piece of advice I have ever been given.
So I first used Squeak for music. GeoMaestro became muO, which is a huge system that eventually allowed me to compose my own pieces, although I have no musical education and no playing talent whatsoever.
In muO I did a lot of graphical stuff, and notably a family of interactive editors that evolved into the ones I use for Roguerrants maps and geometrical structures (navigation meshes for example).
muO taught me Morphic, which I believe is an incredibly underestimated pearl. It’s a beautiful framework. It’s amazing. I know a lot of people in the Squeak community (and even more in the Pharo one) think of it as a pile of cruft that needs to be reconsidered completely, but to me it’s just a wonderful framework.
Roguerrants is 100% a Morphic application. Without Morphic, I could not have done it at all. And without the tools I made for muO, I would not have considered building a system that ambitious.
Regarding graphics and sound, how do you implement these elements in Squeak? What advantages does the environment offer?
So, graphics are pure Morphic and BitBlt. I just tweaked a few things to make them faster, and made a few fixes. I had a hard time with composition of alpha channels, notably.
The advantage of Morphic is the natural division of tasks, where each morph draws itself. Graphics are naturally structured; more about this below.
Sound is also supported natively in Squeak. In muO I did mostly MIDI, and some Csound, but also a little audio synthesis, so I know the sound framework quite well. I fixed a couple of bugs there too. And I made editors for sound waves and spectra.
In Roguerrants, each monster class uses its own synthesizer and actually plays musical notes. Its utterances are composed with muO. I can generate adaptive music, although this is still in an early stage.
The concept of free motion and an organic grid is intriguing. What motivated you to incorporate these elements in Roguerrants, and did you encounter any challenges during their implementation?
I like things to be free from grids, in general. But grids are useful, so the main point is to be able to treat them as a game component just like any other, instead of having them be the paradigm everything happens within.
In Roguerrants everything happens in real time and is located via the plain Morphic coordinate system. That’s the base. The grid comes second. The turn-based structuring of time also comes second. In fact, the whole of Roguerrants comes second to Morphic itself. The game playground is just a single morph. The time evolution is the Morphic stepping system, no more, no less.
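[ed. For readers who have not used Morphic stepping: a morph opts into the World’s clock by implementing the standard #step and #stepTime hooks. A minimal sketch, using a hypothetical class of our own rather than anything from Roguerrants:]

```smalltalk
"Hypothetical example of the Morphic stepping system: once opened in
the world, this morph receives #step roughly every #stepTime
milliseconds, which is all the time evolution a simple game loop needs."
Morph subclass: #DriftingMorph
	instanceVariableNames: ''
	classVariableNames: ''
	poolDictionaries: ''
	category: 'Example'.

DriftingMorph >> stepTime
	"Milliseconds between steps."
	^ 50

DriftingMorph >> step
	"Drift two pixels to the right on every tick."
	self position: self position + (2@0)
```

Evaluating `DriftingMorph new openInWorld` then shows the morph sliding across the screen with no explicit loop anywhere in the code.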
Organic grids are relaxed Voronoi tessellations that take into account the surroundings of the controlled character. The challenge there is to make them seem natural to the player.
For example, the grid should not feature cells at places the player does not see (because it may give the player hints about what’s there), but this is a subtle issue, because some of these places have been seen recently, so why not allow access?
There are also different ways the grid adapts to what the player does.
For example, not all cells in the grid are reached at the same speed. If the player makes a small move, it will also be a slow move. This is to prevent the player from abusing the turn-based system by being too cautious. On the other hand, a long move is faster: the player is running. This makes sense if you remember that once the move is chosen, it cannot be interrupted; if a source of danger is encountered during a move, too bad.
How does the grid adapt to that? Well, the base navigation grid is generated within a specific radius around the player. If the player runs close to its border, the grid for the next turn will have a smaller radius: the player will not be able to run twice in a row. One turn is needed to rest, in a way. This creates a nice ebb and flow in dangerous situations.
Another example: when the player character is stunned, its navigation grid has larger cells. The stunned condition has several effects, and one of them is to make the player more clumsy in the way it moves.
So a lot can go on when one thinks of what it means to be given a navigation grid generated differently for each turn. I am still exploring what can be done there, and as I said, the challenge is to make all the mechanics invisible to the player, yet there in an intuitively meaningful way.
Generating graphics without a tile-based system is a unique challenge. How did you tackle this issue in Roguerrants?
Let’s see this from the point of view of Morphic, again. A morph can have any shape and size. You just put it on the World, and that’s it. It can draw itself, so it handles its own part of the overall display.
So in that sense it is not a challenge at all, just the morphic way.
Now there is a little more to it.
As I said above, the game playground is a morph (a subclass of PasteUpMorph, the class of the World). It has a very specific way of drawing itself, unique in the world of morphs. For one thing, it draws its submorphs layer by layer, allowing the 2.5D parallaxed display, and it also allows any of its submorphs to paint anywhere.
So in addition to drawing itself, a morph in Roguerrants can decorate part or all of the game world in its own way. That’s how the ground is displayed, for example.
High-level components like activities and missions can significantly affect gameplay. How do these elements drive character behavior in Roguerrants, and what distinguishes your approach?
This is one of the most involved technical points.
First there is ModularAgency. This is a design for giving any object an arbitrary complexity, in a dynamic way. I do not have the room to discuss this further here, but there is a lot to say; it is the core design of Roguerrants, and definitely one of the things I am the most proud of. It is a kind of ECS (entity component system), but a very unique one.
Via its #agency, a SpriteWithAgency (the subclass of Morph that all game actors are #kindOf:) has a dynamic library of components, each attributed a specific responsibility. There are really a lot of them. At the time of writing, there are 165 implementors of #nominalResponsibility, which means there are that many distinct, well-identified aspects of the game with a dedicated component. An NPC has around 25 to 30 components.
Among them are the ones responsible for its #activity and #mission.
The #activity component directly drives the #deepLoop component, which is the one that handles the #step method of a Sprite.
For example, if the #activity of a goblin is a journey, it will ultimately add to the goblin #deepLoop, at each morphic step, a command for changing its position and its orientation.
Now this is just the end point of a complex computation, because to do so it needs to know where to go, and so it consults the goblin #destination component, it asks the game #cartographer to produce a navigation mesh and do some A* magic there [ed. A* is a popular algorithm used to find the shortest path from a starting point to a goal point on a graph or grid], it asks its #collisionEngine if there is any obstacle in the way, and if there is one that hinders the journey it delegates the problem to the #journeyMonitor component. You get the idea.
But the journey may need to be interrupted, or even discarded entirely. An activity is a moment-by-moment thing; it does not have a broad scope in terms of the overall behavior of the agent.
When an activity signals that its job is done, the #mission component gives the agent another activity. It is the #mission that implements what the agent behavior is about. Two agents can have a similar activity, like going from point A to point B, but very different missions: one can be heading to a chest to fetch something, while the other one is actively hunting the hero. Their activities at a given time are what their respective missions told them to do; they will be given very different activities after they arrive at their destinations.
When a mission is accomplished, the #mission component removes itself, and in the process it installs a specific kind of activity, an idle activity. The idle activity gives the agent a new mission.
So there is an interplay between mission and activities. Both components taken together make an agent do something in a well-defined context.
Then there are quests. Quests are components that give an agent a set of goals. They push the narrative forward. They can give missions. At the level of quests, we deal with the "why?" of an actor's behavior. That's the level of the story, of the game scenario.
Implementing original systems often comes with its own set of difficulties. What challenges did you face while creating your geometry-based combat and magic systems, alongside a high-level architecture for actor behaviors?
It's not exactly a challenge, but computational geometry is tricky and it takes some time to get it right. Roguerrants uses convex polygons a lot, so I had to implement all related algorithms. The most complex one was Fortune's algorithm for Voronoi partition. It took a lot of revisiting to make it stable and understand its domain of usability.
So why polygons?
In roguelikes, combat happens upon collision: you move your character towards a monster, there is an exchange of damage according to your stats and the monster stats, and life points are lost.
Collision in a grid system is based on the grid connectivity: you collide with neighboring grid cells.
When moving freely, with an arbitrary shape, collision is more a geometry test: are polygons intersecting? So at this point, it made sense to me to have weapons, armor and hurt boxes also collide, individually.
When a character wields a sword, that sword attaches an impacter to the agent. The impacter is a polygon covering the area where the sword deals damage.
A creature has one or more hurt boxes (also polygons). If a weapon impacter overlaps one of these boxes, damage is dealt. And then, the impacter enters a cooldown period during which it becomes inactive. Armor works similarly.
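The "are polygons intersecting?" test at the heart of this combat model is typically done with the Separating Axis Theorem for convex polygons. A small Python sketch (not the engine's actual Smalltalk code) of the idea:

```python
def overlaps(poly_a, poly_b):
    """Separating Axis Theorem for convex polygons, given as lists of
    (x, y) vertices in order: the polygons intersect iff no edge normal
    of either polygon separates their projections."""
    for poly in (poly_a, poly_b):
        n = len(poly)
        for i in range(n):
            # normal of edge i (winding direction does not matter here)
            ex = poly[(i + 1) % n][0] - poly[i][0]
            ey = poly[(i + 1) % n][1] - poly[i][1]
            axis = (-ey, ex)
            # project both polygons onto the axis
            proj_a = [axis[0] * x + axis[1] * y for x, y in poly_a]
            proj_b = [axis[0] * x + axis[1] * y for x, y in poly_b]
            if max(proj_a) < min(proj_b) or max(proj_b) < min(proj_a):
                return False        # found a separating axis: no overlap
    return True                     # no axis separates them: they overlap
```

An impacter/hurt-box check is then just `overlaps(impacter, hurt_box)`, run while the impacter is not in cooldown.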
The magic system uses geometry in another way.
Let's take, for example, the Ring of Blinking. When equipped, the player character can teleport itself to a nearby location. What are its choices? It could be a grid, like the one used for navigation. But blinking is a powerful ability, so it's better to give it some limits, and even make it dangerous; that's much more fun. We can do that with geometry.
The places the player can blink into are a set of polygonal areas arranged in a mandala. When the blinking ability is not in cooldown, these places are small. Each time blinking is used, they grow. As time passes, they tend to get smaller again. If the player blinks too often, the mandala will feature very large regions. Blinking into a region only guarantees that you will land inside it, not where. And so the more often you blink, the more you risk teleporting to a bad place, possibly even inside a wall or a rock (and then you die horribly).
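A toy Python model of this risk mechanic, with a single disc standing in for one mandala region (all numbers and names here are invented; the real regions are polygons shaped by the surroundings):

```python
import math
import random

class BlinkRing:
    """Toy blink-risk model: the landing disc grows each time you blink
    and shrinks back while you rest; you land at a random point inside it."""
    def __init__(self):
        self.radius = 1.0              # small disc = precise, safe blinks

    def blink(self, center):
        # uniform random point in the current disc around 'center'
        r = self.radius * math.sqrt(random.random())
        a = random.uniform(0.0, 2.0 * math.pi)
        self.radius += 2.0             # overuse makes the next blink riskier
        return (center[0] + r * math.cos(a), center[1] + r * math.sin(a))

    def rest(self, seconds):
        # regions shrink again as time passes
        self.radius = max(1.0, self.radius - 0.5 * seconds)
```

The key property is that the randomness is bounded by geometry the player can see, so the gamble is legible: blink often and the bound grows.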
Different abilities have different mandalas and different uses of their polygons. The exact mandala you get also depends on where you are, because magic is also a negotiation between an actor and its surroundings. Some places forbid, enhance, or degrade magic. This dimension of the game will be expanded a lot in the future, because it informs its tactical aspects.
The inclusion of biomes as first-class objects is a compelling design choice. How does this decision enhance the logic and functionality of your game?
This is a natural consequence of the way spatial partition is implemented.
Game maps in Roguerrants can be limited, or unlimited. Even when limited, they may be large. For this reason, they usually are not completely spawned. Parts of a map are suspended, waiting to be populated and made alive when the player, or another important actor of the game, approaches them. When they are not useful anymore, they get suspended again.
This means maps are modular. There is usually a top tessellation of large convex polygons, which may be hexagonal. Often each polygon is itself subdivided into regions, and this goes on down to the desired granularity.
Each region or subregion has an associated modular agency, called a local power. Local powers have many components, notably the component responsible for spawning and despawning game objects living in the corresponding region.
Local powers are very important. They are actors, invisible to the player, that inform much of what happens in the game; anything, actually, that is related to location. Is it dark there? Who lives there? What is the nature of this place? Etc.
And so it makes sense for a biome to be a component of a local power. Imagine a forest surrounded by fields, a forest that gets denser at its core. Let's say the whole map is a hexagonal tessellation. We give one biome to the hexagonal cells for the fields, another biome to the forest cells, plus yet another biome, probably a child of the previous one, for the forest core. We then ask each cell to generate trees; that's one line of code. The component(s) responsible for spawning trees are looked up via the biomes. Fields will not generate trees, forest cells will generate them, and dense forest cells will generate a lot of them. Rocks will be different in fields and forest, etc. The different creatures that appear in the game will also be looked up via the biomes: snakes in the fields, giant spiders in the forest core, etc.
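The "looked up via the biomes" idea, with a child biome falling back to its parent, can be sketched in Python (class names, spawn tables, and numbers are all invented for illustration):

```python
class Biome:
    """Spawn tables looked up through a biome chain: a child biome
    overrides what it defines and inherits the rest from its parent."""
    def __init__(self, name, parent=None, **tables):
        self.name = name
        self.parent = parent
        self.tables = tables

    def lookup(self, kind):
        if kind in self.tables:
            return self.tables[kind]            # defined here: override
        if self.parent is not None:
            return self.parent.lookup(kind)     # otherwise ask the parent
        return None

# fields / forest / dense forest core, as in the interview's example
fields = Biome('fields', trees=0, creatures=['snake'], rocks='granite')
forest = Biome('forest', trees=30, creatures=['wolf'], rocks='mossy boulder')
core = Biome('forest core', parent=forest, trees=120,
             creatures=['giant spider'])
```

A local power holding the `core` biome gets dense trees and giant spiders, while anything it does not redefine (rocks, say) comes from the plain forest biome.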
How did your design philosophy for Roguerrants shape the features you chose to implement in the game?
The design philosophy can be summarized in a few principles:
Each notion, each concept, each dimension identified as orthogonal to the others in the game design must be reified into an object (often a component) responsible for its implementation.
It should always be possible to go deeper in said implementation.
It is nice to preserve the variety of ways it can be implemented.
For example, collision. What objects in the environment should we consider as potential obstacles? How do we test for actual collision?
The answer is not to look for The One Way To Collide, but instead to provide the tools (including the conceptual ones) that allow expressing the problem effectively, and then use them to build the different answers adapted to different contexts.
So, for example, a large group of gnomes, or an army of goblins, will bypass a lot of collision tests so that they do not lock themselves into some ugly traffic jam. They will interpenetrate a bit.
A projectile, which is fast and small, will not test its collision in the same way as a big and slow monster. The projectile will consider itself as sweeping along a line and look at the intersection of that line with surrounding objects. The monster will look for the intersection of unswept polygons. Also the projectile has a target, of which it is aware, so it will take special care of it.
When riding a monster, a goblin will delegate the collision responsibility to the monster. No need to do anything, it's just the rider.
A character moving along a path computed from a navigation mesh does not need to test for collision against walls; the mesh already took them into account.
But a character driven in real time, via the mouse, by the player, does need to consider walls. It has a different #collisionEngine component.
Now if this mouse-driven character is blocked, let's say when attempting to move between two trees, it is maybe because the path is narrow and the player did not find it (sometimes this is a matter of pixels). At this point the collision engine interacts with the #cartographer (the component responsible for computing navigation meshes) and checks whether a path indeed exists. If it does, the character follows that path and succeeds in moving between the trees. The player does not notice anything. Computer-assisted driving! That's point 2 above: it is always possible to go deeper in the implementation.
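The computer-assisted driving fallback reduces to a small decision procedure. A hedged Python sketch, where `collides` and `find_path` stand in for the collision engine and the cartographer (all names invented; the real components negotiate far more):

```python
def assisted_move(agent, target, collides, find_path):
    """Try the direct move; on collision, consult the pathfinder in case
    a narrow passage exists that the player simply missed."""
    if not collides(agent.position, target):
        agent.position = target            # direct move succeeded
        return True
    path = find_path(agent.position, target)   # ask the cartographer
    if path:                               # a passage does exist
        agent.position = path[-1]          # follow it (condensed to one hop)
        return True
    return False                           # genuinely blocked
```

The player only ever sees the character slipping neatly between the trees, never the mesh query behind it.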
So when implementing a new feature, the first task is to express what I want to do in terms of the notions already reified in the game engine. If a new notion is introduced by the new feature, I create the corresponding components, maybe refactoring and refining the existing ones.
Then I come up with a lousy implementation of the feature, and live with it for a while. When I'm fed up with the ways it does not work well, I go deeper and do it better. I am constantly revisiting existing features and the way all the components interact together, which is only possible because refactoring in Smalltalk is so painless and easy.
Looking ahead, what enhancements or new features do you envision for Roguerrants?
First of all, I want to expose all the features that are already there. That's why I released two projects on itch.io:
One is Roguerrants, the game engine.
The other one is a game. It is called, tongue in cheek, The Tavern of Adventures, and at the moment it is very primitive. I intend to grow it into something fun that will illustrate a lot of the systems that are hidden in the engine at the moment. For example, you can fly. You can also control a party. You can play board games. There are rivers and lakes, lava pools, bottomless pits, basilisks, dolmens, villages… You can trade and exchange intel with NPCs. You can have procedurally generated, evolving scenarios. Victory conditions that are not known in advance.
Then, for the future, I can see two main features coming for the game engine.
One is adaptive music. I would like the game to generate its own music. This is a long-term goal, and where I will go back full muO.
The second is a declarative API. A very simple format, even usable by non-programmers, to create custom games. I have already begun this, and the little I implemented already gives me a huge boost in the speed of game contents generation.
Player experience is a crucial aspect of game design. What do you hope players take away from Roguerrants, and how do you see their experience evolving as you continue to develop the game?
Well, at the moment I do not have a game for them. I only have a game engine. I first need to upgrade The Tavern of Adventures to a proper gaming experience, with tactical situations, exploration, meaningful decisions, and a bit of strategizing. We'll see how it goes.
Dear Squeakers, Smalltalkers, and friends of object-oriented, interactive programming environments!
We cordially invite you again to our annual meeting of German-speaking Squeakers (Squeak Deutschland e.V). It will be filled with exciting lectures, demos, and discussions led by both us and you, while also serving as the general assembly of the association. If you want, you can end the meeting with us in the evening in a cozy atmosphere in a nearby inn.
Should I register?
Do you want to participate? Do you have any questions, or perhaps an idea for a demo, lecture, or discussion? Get in touch with us so we can plan better. We would like to stay flexible with the schedule and, as a guideline, plan about 15 to 45 minutes per speaker.
For participation on site, please register by informal email to: marcel.taeumel@hpi.de
When do we meet?
Saturday, November 2, 2024, from 1 p.m.
Where do we meet?
We will hold the event in a hybrid form. After registration you can find us on site:
Hasso Plattner Institute for Digital Engineering gGmbH Prof.-Dr.-Helmert-Str. 2-3 14482 Potsdam
Alternatively, we will provide a virtual connection to the meeting via Zoom. Check again on the day of the event: https://squeak.de/news/.
How will it proceed?
The exact schedule is still being finalized. You may also choose to spontaneously give a brief demo or share an experience report during the meeting.
It was 28 years ago today when an email (Swiki copy here) was sent to comp.lang.smalltalk by Dan Ingalls with the subject "Squeak – A Usable Smalltalk Written in Itself," marking the first announcement of Squeak to the world. In his email, Dan announced the release of Squeak with the intent to promote collaboration with academia and industry. Squeak began on a reconditioned Smalltalk-80 system from Apple's ST-80. Notably, the implementation was written almost entirely in Smalltalk. Beginning with the Blue Book spec, a 32-bit direct-pointer Object Memory and incremental garbage collection were added. Additionally, it included a color BitBlt, a portable file system, and basic support for polyphonic music synthesis. Contributions were welcome, and the rest, well, is ongoing history.
We would like to acknowledge the invaluable contributions of all who have supported the project. In celebration of this milestone, we reached out to several current contributors, who then also shared this request with others to gather their thoughts on this occasion.
For this celebration, Alan Kay emphasized several noteworthy contributions that the creation of Squeak and its achievements brought to the community, which he believed deserved recognition. The highlighted contributions were:
Self-Bootstrapped – One of the most remarkable features was Squeak's bootstrapping process, which allowed the environment to be constructed and run entirely on its own code. This self-hosting capability not only facilitated rapid development but also ensured flexibility and adaptability without reliance on external systems.
Freed from Xerox Ownership – The transition from Xerox ownership to independence was made possible by a special arrangement for the original developers of Smalltalk-80. This agreement enabled them to retain rights and forge a new path for the language, ultimately leading to the creation of Squeak. The developers were determined to continue the Smalltalk tradition while expanding its capabilities.
MIT Licensing – An important moment for Squeak came when design expert Don Norman helped secure a licensing deal that positioned Squeak under the MIT License. This permissive license encouraged a broad community of developers to contribute and collaborate, making Squeak accessible for educational and commercial purposes alike.
Rapid Multi-Platform Deployment – The combination of its innovative bootstrapping approach and strong community support allowed Squeak to achieve impressive cross-platform compatibility. Within just a few months of its initial release, Squeak was operational on all major operating systems, including macOS, Windows, and various Unix distributions. This rapid deployment not only expanded its user base but also solidified its status as a versatile tool for developers and educators.
We extend our heartfelt gratitude to Alan Kay for highlighting these significant achievements of Squeak and recognizing everyone who played a role in its development.
Göran Krampe â âI haven’t used Squeak or Smalltalk for many years. But I fondly remember the time I was heavily involved in the Squeak community, it was fun and warm and I got to know a lot of very sharp and interesting people. I got smitten by Smalltalk around 1993 when I studied Computer Science at KTH in Stockholm. I still consider Smalltalk to be my âhome languageâ which I feel the most at ease in. Eventually I got interested in other languages like Nim and Dart and I ended up trying to implement a simple Smalltalk-like language myself. Last time I really worked with Squeak I worked together with Ron Teitelbaum on the system called Teleplace (renamed many times!) originating from the Croquet project.
"Now, looking at the programming future and daring to make some observations or predictions, there are some obvious trends I think I see…there are so many different areas of software development where Smalltalk still can have an edge, for example in complicated simulation or interactive data exploration scenarios. I think that is where interesting options can be found…Squeak and Smalltalk with the image model and the highly flexible and malleable development environment has a great opportunity right now in tapping into AI…if I would be looking at improving the Squeak development environment – this is clearly where I would put the effort. The programming landscape is changing…I am very sure programming will not look the same in 5 years time. It is already happening."
Chris Muller – "The rise of cloud apps made our lives 'easier,' but also married us to their limitations. As dependence on them grows, so does the importance of having access to open Personal Computing. 'Personal,' meaning an unfettered interaction between an individual and their computer. No sign-ups. No subscriptions. No data-collections. Just a computer, executable, open canvas, and human imagination. Beautiful. Happy 28th!"
Marcel Taeumel – "Happy birthday to the Squeak project! As we celebrate this milestone, I want to express my appreciation for its commitment to backwards compatibility and modularity, which ensures extensibility and readability. Over the past decade of using Squeak for teaching and research in object-oriented design, I have come to value its powerful tools for exploration and debugging. Looking ahead, I envision meaningful improvements that respect and build upon the content we have created. Smalltalk is a beautiful language, and Squeak enables us to learn and grow in ways that truly enhance our productivity. Here is to many more years of success for Squeak and its community!"
Ron Teitelbaum – "I joined the community 19 years ago. It seems like only yesterday. In so many ways the community has changed but in many essential ways it has stayed the same. In that time we relicensed Squeak, went through a few major splits, with Eliot Miranda we helped to develop the new VM and support for ARM, helped push forward Cryptography, and with Ronie Salgado helped to contribute to 3D graphics. We have lost some amazing Squeak people. We also gained some very talented developers that have stepped up in amazing ways to help keep the community functioning. I'd like to say a big thank you to all of you that contribute in so many ways to keeping Squeak alive. It has been amazing. Feels like there is so much more we could do! Happy Birthday Squeak!"
Feel free to share your thoughts or wishes in the comments below!
On the Squeak developers mailing list, Lauren Pullen shared her experience (found here) using Squeak while working on a rendering engine for a first-person maze game, similar to the technique used to render the original Wolfenstein 3D game. Wolfenstein 3D used a rendering technique known as ray casting. Her project captured our attention, prompting us to seek additional information from her about it.
What is Ray Casting?
Ray casting is an early rendering technique used in computer graphics and video games, particularly in 2.5D and 3D environments. It is a simplified form of ray tracing, where a ray is cast from the playerâs or cameraâs perspective into the environment, and the distance to the nearest object along that ray is calculated. This process is repeated for each column on the screen, creating a 3D representation of the 2D world. Ray casting was widely used in the early days of 3D gaming, particularly in games like Wolfenstein 3D (1992).
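To make the per-column idea concrete, here is a minimal Python sketch of a ray caster over a 2-D tile map (this is an illustration of the general technique, not Lauren's Squeak code; a real engine would use DDA stepping for exact wall hits rather than the small fixed step used here):

```python
import math

def cast_rays(grid, px, py, angle, fov=math.pi / 3, columns=8):
    """Wolfenstein-style ray casting over a map of 0 (empty) / 1 (wall):
    one ray per screen column, stepped forward until it hits a wall.
    The returned distance per column determines that column's wall height."""
    distances = []
    for col in range(columns):
        ray = angle - fov / 2 + fov * col / (columns - 1)
        dx, dy = math.cos(ray), math.sin(ray)
        x, y, dist = px, py, 0.0
        while grid[int(y)][int(x)] == 0:      # march until a wall cell
            x += dx * 0.01
            y += dy * 0.01
            dist += 0.01
        # multiply by cos of the ray's offset from the view direction
        # to correct the fish-eye distortion
        distances.append(dist * math.cos(ray - angle))
    return distances
```

Drawing each column as a vertical strip whose height is inversely proportional to its distance yields the familiar 2.5D corridor look from a purely 2-D map.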
From Lisp to Squeak
Lauren, with her extensive experience in Common Lisp, initially chose it to create a GUI application. However, she faced significant challenges. The graphics library was unreliable, often failing to start, and the outdated documentation made it difficult to work with. She studied MVC while designing her application but struggled with basic functionality, such as displaying a simple window.
She decided to switch to Smalltalk, specifically Squeak, which had an immediate impact. The graphical elements worked seamlessly from the start, and although she missed some features from Common Lisp, like restarts and method combinations, Squeak provided a development environment that allowed her to focus on development without drastically changing her mental approach to problems.
Comparing Development Tools
For game development, she initially relied on a much earlier version of Game Maker Pro. When she explored Godot, she found its complexity overwhelming compared to Smalltalk. The disorganization in Godot's tutorials made her question how to create a basic viable product efficiently. In contrast, though seemingly minimal, Squeak's classes Form and UserInputEvent provided all that was needed.
Advantages of Squeak
In Squeak, she found it easy to work with graphical elements. Drawing interface components and importing graphics were straightforward, thanks to the source code access for built-in drawing functions. This simplicity was crucial for her development process. While working on the game, she realized that making changes and seeing immediate results was invaluable for debugging. Her experience with Forth taught her the value of functions that do one thing well, and Smalltalk’s debugging tools like Inspect-It and Debug-It further streamlined the process.
Challenges and Solutions
However, she encountered challenges. While most errors in Smalltalk were easy to handle (usually, closing the Debugger was all that was needed), some issues could freeze the image, making recovery a bit more manual than desired. She found herself needing to use the Recover Changes window to restore unsaved changes more often than she would have liked.
In terms of rendering, she faced performance limits with BitBlt when texturing the floor and ceiling. To overcome this, she turned to the AbstractGPU library, leveraging the graphics card for drawing. She continued to use the ray caster to determine what the player could see to speed up the game, but this introduced edge pop-in: objects at the screen edges would suddenly appear while turning the camera, due to differences between the ray caster's projection and the GPU's projection. Increasing the field of view used by the ray caster resolved this issue.
Testing Using Morphic Dungeon
Morphic Dungeon is what Lauren developed and uses to test the movement and texturing code. She wants to work with textures that are not symmetrical, which requires mapping the top-left corner of the texture to different positions on each face of the 3D objects. This approach also allowed her to test back-face culling (a technique that improves performance by not drawing faces of a 3D object that are not visible to the camera) in the GPU mode. In this mode, the "back faces" are flipped horizontally and appear further away, as if looking at the inside of a transparent painted box instead of the outside. Back-face culling will be essential for rendering the "walls" of tiles that the player can enter or see through, such as grass or support beams along the grid edges.
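The standard back-face test mentioned above boils down to a single cross product on the projected vertices. A small Python sketch of the general technique (again illustrative, not code from Morphic Dungeon):

```python
def is_back_face(v0, v1, v2):
    """2-D screen-space back-face test. With the convention that front
    faces project with counter-clockwise winding, a negative cross
    product means the triangle winds clockwise on screen, so we are
    looking at its back and can skip drawing it."""
    ax, ay = v1[0] - v0[0], v1[1] - v0[1]
    bx, by = v2[0] - v0[0], v2[1] - v0[1]
    return ax * by - ay * bx < 0
```

Because the test uses only the winding of the projected vertices, it costs a couple of multiplications per face, which is why culling is such a cheap way to skip invisible geometry.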
Lauren implemented three movement modes:
Free Movement and Free Turning
Grid-Locked Movement and Free Turning
Grid-Locked Movement and Grid-Locked Turning
Free Movement with Free Turning is similar to Wolfenstein 3D, allowing sub-pixel steps and small-increment camera rotations.
Grid-Locked Movement is useful for first-person dungeon crawlers. Grid-Locked Turning forces camera rotation into 90-degree increments, similar to classic non-raycaster games like Wizardry or modern titles like Etrian Odyssey. Free camera rotation with Grid-Locked Movement is also supported, similar to the modern title Operencia.
While using Morphic Dungeon to test the different movement modes, Lauren encountered an amusing floating point error whereby the player would step repeatedly through walls and out of the play area. This provided a humorous insight into the potential bugs she might encounter.
Additionally, Lauren tested the game with a family member, revealing that the 40×40 maze, though not difficult from an overhead view, proved challenging from a first-person perspective without an overhead view or compass. This feedback helped her adjust the difficulty of the first area to better suit new players.
Future Plans
Looking ahead, she plans to explore non-flat levels and dynamically stitching multiple maps together. This might result in overlaps while rendering, so the ray caster will be in charge of telling the graphics card what to draw. Meanwhile, she will focus on improving floor and ceiling loading performance, although this is currently less critical due to the few vertices involved.
Lauren believes that developing a game is a great way to introduce people to programming. While tools are useful, having something that you can play with is fun. Old tile-based games and raycasters are particularly appealing to her because they are simple to work with, even for beginners.
Overall, Lauren believes that Squeak has proven to be an excellent choice for her project, offering the simplicity and functionality needed for a successful game development experience.
Why Not Give It a Try?
If you would like to experiment with ray casting in Squeak, you can find out more about her project from SqueakSource here. To use the 3D accelerated package, you will also need AbstractGPU by Ronie Salgado, available (here). Ronie is the author of a number of terrific 3D development tools, including Woden (here) and Sysmel (here). Be sure to explore these excellent resources as well!
Have a great time with Smalltalk and keep on Squeaking!