The old Artifact is still kicking; not too long ago I created a Snap for it. Creating the Snap felt a bit like concocting a magical snapcraft.yaml and hoping it works out. For the first few attempts the magic never works out, and the process of figuring things out tends to be tedious at best. This was no exception. Here are some of the problems I ran into; I hope it helps someone else.
Xprop missing
My first Snap attempt immediately spat this out during startup:
java.io.IOException: Cannot run program "/usr/bin/xprop": error=2, No such file or directory
This was fixed by adding these lines to my snapcraft.yaml. The layout is needed since xprop is referred to by an absolute path. See this thread for more information.
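Roughly like this, assuming xprop comes from the x11-utils package as on regular Ubuntu (the bind-file layout maps the absolute path into the snap):
layout:
  /usr/bin/xprop:
    bind-file: $SNAP/usr/bin/xprop
stage-packages:
  - x11-utils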
Xrandr missing
With xprop in place, the next attempt crashed during startup with:
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.newdawn.slick.AppGameContainer$1.run(AppGameContainer.java:39)
at java.security.AccessController.doPrivileged(Native Method)
at org.newdawn.slick.AppGameContainer.<clinit>(AppGameContainer.java:36)
at game.Artifact.main(Unknown Source)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at org.lwjgl.opengl.LinuxDisplay.getAvailableDisplayModes(LinuxDisplay.java:951)
at org.lwjgl.opengl.LinuxDisplay.init(LinuxDisplay.java:738)
at org.lwjgl.opengl.Display.<clinit>(Display.java:138)
... 4 more
This was caused by xrandr not being installed in the snap; LWJGL2 uses it to enumerate the available display modes. This was fixed by adding the x11-xserver-utils package, which provides xrandr.
stage-packages:
- x11-xserver-utils
These commands were great for debugging snaps and finding packages.
# This lets you look at the system as seen from inside the snap
snap run --shell <your-snapname>
# This tells you which package installed an executable, in this case xrandr
dpkg-query -S /bin/xrandr
I hope this helps someone else looking to get their application in a Snap.
I have long wanted to try to observe a planetary nebula, but I think I had the size all wrong. I was looking for something larger.
In my finder scope M57 looked a lot like a star, and if it were not so easy to locate, sitting between two bright stars, I would probably have scanned past it. Once I found it, the ring structure was clearly visible at medium magnification. Really neat!
I think that having calibrated my expectations now, finding other planetary nebulae will be easier.
Earlier this year I ordered a 2x TeleVue Powermate, mainly for planetary observation. To test it I tried using it together with my 2″ Aero on the globular clusters and M57. It worked way beyond my expectations.
I also took a 15-second exposure of M13, shown below (the most my tracking was good for). The two red spots I marked are variable stars, which I found by comparing the chart here to my image.
These stars have a periodic change in brightness, and the period of that change determines their absolute brightness. The relative brightness (the brightness observed from Earth) and the absolute brightness can then be used to find the distance to the star.
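For reference, the standard distance modulus ties these together, with \(m\) the apparent (relative) magnitude, \(M\) the absolute magnitude and \(d\) the distance in parsecs:

\[ m - M = 5 \log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right) \]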
It would be really nice to try and observe M13 with the same magnification over time, and see if I can catch the blinking.
I observed these galaxies visually some time ago, but this time I got to take a series of pictures as well. Out of around 20 exposures of 15 seconds each, 6 were usable. After stacking, this was the result. Noisy, but at least some structure is visible.
Some time ago I added an AppImage for my old game Artifact, as an easy way of installation on Linux. AppImages have a neat architecture, and hopefully will keep Artifact running for a long time on a lot of systems. AppImages can also be run sandboxed using Firejail if you have little trust in me or the source of the AppImage.
Now, AppImages do not really have a nice centralized location where applications can be discovered. There are some initiatives, but it feels a bit crude and lacking some polish.
Snaps and their central registry, on the other hand, feel way more polished. If anything, the content in the store often feels unpolished compared to the store itself. Snaps also run sandboxed by default. While this is good, it sometimes causes unexpected issues, though the sandboxing can be opted out of.
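For example, a snap can be installed with confinement relaxed (hypothetical snap name):
# Install with the security policy in non-enforcing mode
snap install --devmode <your-snapname>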
Making a Snap of a LWJGL 2 application was a bit of a headache, but it worked out in the end. My experience with the Snap documentation in general was good, and most of the work was a breeze after I got the game running as a Snap.
My good friend Feda Curic recently came to me and wanted to make something together during his parental leave. At first we started out with something a bit too ambitious (for the time we had set aside), but then his wife came up with a great idea we could finish in our limited amount of time.
She wanted an app where you could quickly look up names for babies. Here in Norway, Statistisk sentralbyrå (SSB) has statistics for all names used 4 or more times. SSB's user interface leaves something to be desired though. It offers either small lists containing some names, or a name search where you input an exact name and get statistics about that specific name.
The SSB interface is not at all useful for exploratory search for names. In Babynavnvelgeren (the app we made) we have a lightning fast interactive search and filtering options. The data is available locally in the app, so it mostly works offline as well.
Babynavnvelgeren makes searching for names easy and fun. My 8-year-old daughter loves searching while sorting for the least popular names. She found some very interesting ones like:
Sau – Sheep in Norwegian.
Lillebil – I initially thought this was a bug for sure, but it is a name. My daughter finds it hilarious (“Lille” means small and “bil” is car in Norwegian).
More generally, I think interactivity and responsiveness are essential for exploration, and the process of exploration often leads to accidental discoveries and new hypothesis building. UIs which support this kind of interactivity are very useful in my experience, and I think more applications and data providers should keep this in mind when presenting their data.
I also think there is a lot of low hanging fruit in this space, and small improvements in interactivity often cause massive increases in efficiency. For Norwegian names, Babynavnvelgeren has got you covered. Go get it!
As a kid I was always very fascinated by astronomical objects. I vividly remember the first time I observed the Andromeda Galaxy in a pair of binoculars. I think I was around 14, and I had tried to find it several times before. I did not really know what it would look like in binoculars, and I remember freezing my hands off to keep it in view once I found it. Even if it was just a blurry blob it was amazing.
Fast forward 20 years, and I now have the knowledge, time and money to buy and use a telescope. I recently bought a 10 inch Dobsonian. I am still learning to use it well, but it has so far been worth every NOK spent.
Last night was my first observing session with the telescope under skies that were not extremely light polluted. These are the objects I have observed so far.
M35 is huge, and looks great at low magnification. It was also not very hard to find by following the stars in Gemini. An even lower magnification eyepiece would be useful for observing it next time.
Since it is far away from the closest clearly visible star, it was a bit hard to find, even though it showed up pretty clearly in the finder scope once I was in the right region.
This is so far the only nebula I have been able to observe from the city. Sadly Orion was not visible when I observed from the less light polluted location. I have not yet tried filters for contrast, but M42 is still very visible, and it looks great. Looking forward to observing this with filters and less light pollution.
M13 gave M42 a run for its money. M13 was easy to find (it shows up clearly in the finder scope), and at low magnification it looked great. Increasing the magnification made it awesome. Suddenly thousands of stars exploded into view.
With the light pollution at the time, M65 and M66 were visible but with little detail. They were not visible in the finder scope, and I just randomly found them scanning from the stars in Leo. They pretty much both looked like Andromeda in binoculars. I am hoping to see some more structure next time.
I also observed some galaxy in Virgo, but I have no clue which. It seems the area is filled with galaxies, and I found one just scanning around. Pretty neat.
TLDR; I’m writing a coop multiplayer game with my daughter, this is the current result! Works in Firefox and Chrome. Use arrows to move and space to fire. Share a URL to play with a friend.
Some years ago, my daughter figured out I made some computer games, and she even played one of them quite a bit. After a while she wanted something new, and we figured we would make a game together. She would draw concepts and come up with ideas, and I would try to make them happen in the game.
The initial concepts she drew were these:
We then made them into vector graphics together, with some modifications.
Tips on kid-friendly vector drawing programs would be very much appreciated; throw me an email or post a comment. We used Sketch, but Sketch is a bit overwhelming and distracting with all its features. I want a program which only has bezier paths and transformations on them, as well as fill, stroke and possibly opacity settings.
Going from concepts to a prototype
I had been wanting to try compiling to JS with Kotlin for a while, so I started a project in IntelliJ and quickly threw something together using a plain HTML5 Canvas.
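A minimal sketch of that setup (made-up element id, not the actual game code; kotlinx.browser is the package name on recent Kotlin versions):
import kotlinx.browser.document
import kotlinx.browser.window
import org.w3c.dom.CanvasRenderingContext2D
import org.w3c.dom.HTMLCanvasElement

fun main() {
    val canvas = document.getElementById("game") as HTMLCanvasElement
    val ctx = canvas.getContext("2d") as CanvasRenderingContext2D

    // Simple game loop: clear, update, draw, repeat.
    fun frame(timestamp: Double) {
        ctx.clearRect(0.0, 0.0, canvas.width.toDouble(), canvas.height.toDouble())
        // ... update the world and draw princess, giraffes and castle here ...
        window.requestAnimationFrame(::frame)
    }
    window.requestAnimationFrame(::frame)
}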
We drew some more concepts, and after some evenings implementing we had an infinite randomly generated castle, an arrow-firing princess, a hyperactive bow-carrying giraffe, and a bunch of collision detection bugs (yay for rolling your own).
Wriggling out of hard requirements
After a lot of fun triggering bugs, my daughter came up with some new requirements.
I want to play with my friends, and we should all be princesses!
These are sort of hard to implement. Even disregarding networking, it would mean a total rewrite of how the world generation and camera worked. It would also need a solution for avoiding someone getting stuck due to the camera movement of others, and so on.
After some bargaining we made some new concepts, and we agreed to add a player controlled cloud, and a bunch of new giraffes.
Adding networking
For me this meant that I would need to add some kind of networking to the game. For browser games, the choices are:
Communicate with a server using WebSocket and have it relay state, or run the game on the server.
Negotiate a WebRTC datachannel, and send communication directly between the browsers.
Have players install a browser extension like netcode.io, and use it instead of WebSocket.
Since the game is cooperative, there is little reason to run the game on a server. Actually I really, really do not want to run the game on a server, for a bunch of reasons, mostly abuse and scaling troubles.
Using a server as a relay of state or input is also a bit funky, since it introduces a lot of unnecessary latency. Since I am also willing to sacrifice some poor kids behind a symmetric NAT, I went for option 2, and I have not regretted it.
I was cautious about doing this initially, since I had read this Gaffer on Games post which deemed WebRTC too complex, though that was in the context of server based architectures.
Having some more experience with WebRTC now, I agree a bit about the complexity, though I think it has gotten way better, especially with a more stable standard and more complete alternative implementations like rawrtc. I also ♥ how WebRTC abstracts away most of the P2P complications behind a very nice API.
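To make option 2 concrete, here is a rough sketch in Kotlin/JS (my own illustration, not the game's code), using dynamic interop since the browser WebRTC API has no Kotlin stdlib externals. The signaling step, where the offer, answer and ICE candidates are exchanged, is left out, and the STUN server is just an example:
fun hostConnection(signal: (String) -> Unit): dynamic {
    val pc = js("new RTCPeerConnection({iceServers: [{urls: 'stun:stun.l.google.com:19302'}]})")
    // The host creates the channel; the other peer receives it via ondatachannel.
    val channel = pc.createDataChannel("game")
    channel.onmessage = { e: dynamic -> console.log("peer sent", e.data) }
    pc.onicecandidate = { e: dynamic ->
        if (e.candidate != null) signal(JSON.stringify(e.candidate))
    }
    pc.createOffer().then { offer: dynamic ->
        pc.setLocalDescription(offer)
        signal(JSON.stringify(offer))
    }
    return pc
}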
Authoritative peer or GGPO?
To share state in the game, I needed to come up with an architecture for networking. Initially I evaluated using something like GGPO, but in the end I chose to use the princess peer as an authoritative peer, syncing the state to the cloud-playing peer continuously, while the cloud peer only sends input. I chose this mostly for simplicity and time constraints. Since the game is cooperative, the lack of fairness is not really a problem.
For the amount of work I put in, I am very pleased with how the networking worked out. Right now it is not tuned at all, just JSON over the datachannel, but even without tuning and no extra speculative integration, it has worked fairly well.
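A sketch of what that could look like (field and function names are mine, not the actual protocol):
import kotlin.js.json

// The princess peer pushes the full state every tick...
fun sendState(channel: dynamic, princessX: Double, princessY: Double) {
    channel.send(JSON.stringify(json(
        "type" to "state",
        "princess" to json("x" to princessX, "y" to princessY)
    )))
}

// ...while the cloud peer only ever sends its input.
fun sendCloudInput(channel: dynamic, dx: Double, dy: Double) {
    channel.send(JSON.stringify(json("type" to "input", "dx" to dx, "dy" to dy)))
}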
Where to go from here
While the game is in a state of continuous updates, I think it is mostly just going to be small changes from now on. Maybe some sound effects and new graphics when we feel like it.
Rendering is currently also quite slow, and takes a lot of the frame budget. I would like to migrate to a framework with a WebGL based renderer. But sadly that seems like quite a bit of work, mostly due to using SVGs for graphics.
For future game projects, I will for sure start with a WebGL based framework, or possibly Unity Tiny, and raster based images.
That is all for now, go and see how far you can get in our game!
For quite a while, I have wanted to try to create a simple touch-based interface for a 2D spaceship game. I want to allow the player to simply drag anywhere on the screen, and have the spaceship move to that position and direction in an efficient manner. Ideally the most efficient manner.
Spaceships in 2D games usually have one main engine that allows forward thrust, and some means of rotation around the ship's center of mass.
Moving from point A to B efficiently (in minimal time) is not trivial with such constraints, as changes to direction and thrust may have huge consequences for later possible movements due to inertia.
So instead of looking at the full A to B problem immediately, I wanted to look at something simpler first: going from having velocity \(v_0\) and pointing in direction \(\theta_0\) to having 0 velocity as fast as possible.
The idea I use originally came from talking to a colleague, but something very similar is mentioned in Planning Algorithms, though the examples always seem to involve driftless systems. Anyway, my current approach involves these known quantities and assumptions:
\(a\) – Acceleration – The ship can only accelerate by a constant amount, and acceleration turns on and off instantly.
\(s\) – Turn speed – rotating the ship requires no acceleration, and the ship has constant rotation rate.
\(\theta_0\) – Initial orientation.
\(v_0\) – Initial velocity.
These quantities allow me to find a legal, but very suboptimal, way to stop. It simply involves turning the ship to point against its velocity vector, and then accelerating until the ship stops. Both the time \(t_a\) needed to turn the ship and the time \(t_m\) needed to turn and cancel the velocity are easy to calculate.
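With \(\Delta\theta\) denoting the angle the ship must turn through to point against \(v_0\), these times are simply:

\[ t_a = \frac{\Delta\theta}{s}, \qquad t_m = t_a + \frac{\lVert v_0 \rVert}{a} \]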
It is also easy to see that this is suboptimal: it would clearly be faster to start burning some time before the turn is fully completed; the question is when to start the burn.
To allow for this freedom in my model, I therefore introduce a third time variable \(t_s\): the time at which I start accelerating while still turning. \(t_a\) now becomes the time when I stop turning and only accelerate.
Given these intervals, two integrals describe how the velocity will change as \(t_s\), \(t_a\) and \(t_m\) vary.
The most efficient solution to this problem is the \(t_s\), \(t_a\) and \(t_m\) triplet with the lowest value of \(t_m\).
This information allows me to formulate this as an optimization problem.
Since I want to minimise \(t_m\), the objective function simply becomes \({t_m}^2\).
This is subject to the two equality constraints given by requiring the final velocity to be zero.
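Spelled out, assuming the ship keeps turning at rate \(s\) so that \(\theta(t) = \theta_0 + s\,t\) until \(t_a\), a plausible way to write the problem down is:

\[
\begin{aligned}
\min_{t_s,\,t_a,\,t_m} \quad & t_m^2 \\
\text{s.t.} \quad & v_0 + a\int_{t_s}^{t_a} \begin{pmatrix} \cos\theta(t) \\ \sin\theta(t) \end{pmatrix} dt \;+\; a\,(t_m - t_a)\begin{pmatrix} \cos\theta(t_a) \\ \sin\theta(t_a) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\end{aligned}
\]

The vector equality supplies the two equality constraints, one per velocity component (the ordering \(0 \le t_s \le t_a \le t_m\) is implicit).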
Since the objective and constraints are non-linear, I plug it into Optizelle, a framework for solving non-linear optimization problems.
The implementation can be found on GitHub; it uses autograd to calculate derivatives and Hessians. This is an incredible time saver, since calculating 9 combinations of partial derivatives by hand would have been a major pain, not to mention having to recalculate them whenever I did something wrong.
Running the program with inputs \(a=2.0\), \(\theta_0=0\), \(v_0=[2,0]\) and \(s=\frac{\pi}{2}\) returns the optimal point: the values for \(t_s\), \(t_a\) and \(t_m\). This means that for a ship with the given input, it should start turning immediately, start the burn after approximately 1.43 seconds, stop turning and only accelerate at 2.43 seconds, and finally be at rest after 2.57 seconds, approximately 0.43 seconds faster than the naive version.
To test the result, I implemented a quick and dirty JavaScript program that simulates these choices and renders to a canvas:
Sometimes the ships end up drifting a bit after the simulation has finished. This is due to the discrete nature of the simulation not perfectly emulating the continuous solution (I do not integrate rotation analytically in the simulation). This could also have been a problem if I applied this style of planning to a game that did the same; from the simulation above it looks negligible though, which is great!
I am very happy with this result; it seems like it could work for the larger problem as well. The next step I'll try is to tackle some specific cases of moving from point A to B efficiently. For those cases there will be many more time variables involved, and possibly many constellations of safe initial starting points, as well as possible freedoms to introduce in the model. It will be interesting to see how that works out.
My expectiminimax based solution works well for a game with random elements and perfect information, but it is not very useful for a game with imperfect information. Get 1000 is played simultaneously by the players, and the opponent's choices are hidden until the end of the game.
This meant I had to go back to find a new strategy for solving the game. I decided to try and find the correct brute force way first, and then see if that could be made faster in some way.
Exploring brute force
A solution to the game involves finding a Nash equilibrium from all the pure strategies of the game. A brute force solution could be done by creating a matrix where all pure strategies are pitted against all the other pure strategies.
A strategy here refers to a function which, given any game state, gives a Get 1000 placement. Below is an illustration of what I mean by a state. A state could also include the history (order of placement), which would increase the count a lot, but that is hopefully not needed for a solution.
A game state can be represented as the current number (in this case 1), the entries in hundreds (7), tens (5) and ones (12), as well as the number of free positions for hundreds (1), tens (2) and ones (2).
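In code, such a state is small enough to be a simple value type (a sketch with my own field names):
data class State(
    val roll: Int,          // the current die roll to place, 1..6
    val hundredsSum: Int,   // sum of the entries placed as hundreds
    val tensSum: Int,
    val onesSum: Int,
    val freeHundreds: Int,  // remaining open positions per row
    val freeTens: Int,
    val freeOnes: Int
)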
The total number of such states is limited, but in at least some of them the choice is forced. This means there are even fewer relevant states.
Each state has at most 3 choices, so with \(N\) relevant states there is an upper bound of \(3^N\) unique pure strategies.
This is of course not that helpful, since a \(3^N \times 3^N\) matrix is enormous, and for each cell in the matrix all possible games would have to be played to find the payoff for the pure strategy pair. On top of that, the best mixed strategy would then have to be calculated.
Subgame perfection and backwards induction
Modeling this in normal form as above seemed to get me nowhere. I therefore turned to extensive form, and to something called subgame-perfect Nash equilibria and backwards induction. In the normal form solution I need to look at all possible strategies; using subgame perfection, I hoped to get away with only looking at a very small subset.
While this sounds straightforward in theory, I found it quite hard to figure out where my information sets are, and whether I could consider each choice node in Get 1000 a subgame. After struggling for a while, I ended up with an extensive form structure looking like this. Players are P1 and P2, and a “move by nature” is the die roll.
This structure means that only the root of the tree is a proper subgame, since all other nodes are part of larger information sets.
Attempting backwards induction
The above structure means that it is not practical to naively use subgame perfection and backwards induction to solve the game, but taking inspiration from it could still be useful to get a good strategy.
The algorithm for subgame perfection goes like this:
1. Consider the final subgames (those with no further subgames) and pick a Nash equilibrium as the solution there.
2. When considering the next subgames up the tree, use the payoffs from the subgames already considered to create the payoff matrix.
3. Iterate step 2 until the root node of the extensive form tree is reached.
To get something working, I pretend that the other player is always at the same state as me. This means I can focus only on the branches below that state. To keep memory in check I also recalculate payoffs instead of storing the result for each combination of states and games. The final algorithm I ended up with works like this:
1. Consider the final subgames and pick a Nash equilibrium as the solution.
2. When looking at subgames higher up the tree, use the choices (not payoffs) computed in step 1, and use those choices to play out the game. Then compare the end results to get the payoff matrix for that subgame.
3. As before, iterate step 2 until the root node is reached.
This seems intuitively pretty reasonable.
Experimental results
The above method gives me a strategy that partly takes the imperfect information nature of the game into account. At many states it detected mixed strategies with much higher payoffs than the pure versions. The resulting strategy smashes all my previous best strategies, winning 1.75% more games.
At this point, I was not really sure how to approach the game in a better way. In fact I was pretty ready to admit defeat for quite some time. Of course, immediately after I wrote that, I found this thesis, and this report.
Following up on this post, here is a review of the sci-fi books I have read lately, in no particular order. Book explanations are very light on purpose, since I do not want to spoil the books.
The TLDR: the books I enjoyed the most of this batch were Cixin Liu's Three-Body Problem books. The last two books in the series are not an easy read, but they are worth it, and I enjoyed them despite inconsistent pacing and huge differences in style and scope.
I also read The Expanse and Rendezvous with Rama, but they are both great and well known to most; I do not really have anything to add.
The last humans leaving a dying Earth reach a terraformed planet with a spider civilisation which has been helped along by a human scientist. We follow the spiders as their society advances, and the humans as they struggle to survive.
The book was a quick read. I enjoyed the chapters following the spiders quite a bit, and the ones following the humans not quite as much. The book reminded me quite a bit of Vernor Vinge's A Deepness in the Sky.
A security bot (an android overseeing a science expedition) has broken out of the system that constrains its behaviour. As a rogue bot it tries to keep to itself, but that becomes more and more difficult as the expedition makes some unexpected discoveries.
Very quick read. Mostly fine, but I feel like it resorts to breaking into systems as a quick fix for most problems encountered. This gets very predictable and feels too easy a lot of the time.
A sci-fi mystery set in the same universe as Ancillary Justice. It explores inheritance, culture collisions, and planetary and interplanetary power struggles from the perspective of a very small player, Ingray, who has taken a huge gamble to become heir to her adoptive mother.
I enjoyed this. It is a turn to local, small-scale politics compared to Ancillary Justice/Sword/Mercy, and not as memorable as those books.
An interesting exploration of a society where teleportation (jaunting) to anywhere you can visualise, within a certain distance, is possible. It follows a man who was marooned on a wrecked spaceship, and who, once rescued, goes after the crew of the ship that left him behind.
Well pulled off. In general I think abilities like jaunting are a bit too powerful, and typically lead to silly logical problems very easily. The Stars My Destination probably has those too, but they are not very apparent.
Revelation Space is a galactic-scale space opera where we follow the lighthugger ship Nostalgia for Infinity, whose Ultranaut crew is looking for someone to treat their captain's illness. That someone is an archeologist studying the death of a long-dead race called the Amarantin on the planet Resurgam.
Solid space-opera on a galactic scale. Contains an interesting explanation for the Fermi Paradox.
Explores a universe where technology is based on populations following specific patterns. In the society we follow, their technology is based on the populations belief in the imperial calendar and the associated culture. Calendrical rot (heresy) must be avoided at all costs.
Very hard to follow initially, and sometimes very confusing, but I sort of enjoyed it. It was very hard to tell if it is consistent with itself, since the concepts are so foreign. I think I have to read it again to form a better opinion.
Humanity has sent an expedition (a generation ship) to a possibly habitable planet around Tau Ceti. The mission is to colonise this planet after traveling for more than a hundred years. We follow the humans and the ship's AI as they struggle to survive on the way.
Robinson writes in his very (maybe overly) detailed style, with lots of detail on environmental systems and the specific challenges faced by the biomes on the ship. In some cases it works well; in others it feels a bit like the author researched the topic and put it in regardless of whether it fits.
It is hard to write anything about this book without spoiling too much. In the first book, we initially follow a Chinese police officer as he investigates the deaths of several scientists. The deaths are connected to interstellar messages sent by an astrophysicist several years earlier.
The next two books follow up on the events of the first, but they are very different, and the scope increases a lot as the books go on. The trilogy (especially books 2 and 3) is quite critical of human society, and explores how we fail to make good collective decisions as a species. This is mostly well reasoned and well integrated into the story, so it never feels out of place.
The series is great, and I recommend it to everyone. Some parts are a bit slow and feel very obscure at first, but they are well worth the payoff.
For quite some time I have been trying to completely solve the Get 1000 game. More specifically I am trying to find the strategy in a 1 on 1 game of Get 1000 that maximises the chance to win.
Solving for expected value rather than winning
My initial attempt at solving the game failed spectacularly, since I attempted to solve the game by minimising the average distance of the expected value from 1000. This strategy is easy to compute (using a sort of bottom-up dynamic programming, where I start with the easy sub-games near the end and calculate backwards to the top), but it is the wrong goal: it minimises the distance to 1000 on average. Interestingly enough, this differs quite a lot from the goal I actually wanted, which was to maximise the chance to win any 1 on 1 game.
Since Get 1000 is quite small, I can calculate how much the two strategies differ by running all possible games. Above is the result of such a calculation: the two strategies draw 41.2% of the time, while the win-based strategy wins 30.8% and the distance-based strategy wins 28% of the time.
Trouble with situations where choices are equally good
After figuring out I had the wrong goal, I found a way to create a strategy based on wins rather than expected value. (This is much harder to compute, even using the same bottom-up approach, since it is not possible to collapse results to an average; ever-growing lists of results must be compared instead.) I suspect these strategies are very close to optimal, but there was something funky going on.
There are situations in my calculations where two placement choices have an equal number of winning sub-games. Initially I thought I could just pick an order of preference for these, but the resulting strategies beat each other when applied to all possible games. If the order of preference did not matter, this should not happen.
For the longest time I could not figure out why this happened. I started questioning whether the Markov property held in the game (I am still not 100% sure it does).
Enlightenment
At this point I took a few steps back and looked for the correct framework to model this game in. It turns out it can be modelled as a Markov Decision Process. That in itself was not very helpful, but it eventually got me reading about the expectiminimax algorithm, a version of minimax for games with chance involved. While I had to modify it a bit for a simultaneous turn game, I implemented it for some subproblems of Get 1000, which I could calculate to the bottom.
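The core of the algorithm is small; a generic sketch (my own illustration, not the post's implementation), where chance nodes average over the uniform die outcomes and decision nodes pick the best placement:
// Game tree for a single player against chance.
sealed class Node
class Chance(val outcomes: List<Node>) : Node()   // uniform die roll
class Decision(val choices: List<Node>) : Node()  // placement choice
class Terminal(val payoff: Double) : Node()

fun value(node: Node): Double = when (node) {
    is Terminal -> node.payoff
    is Chance -> node.outcomes.sumOf { value(it) } / node.outcomes.size
    is Decision -> node.choices.maxOf { value(it) }
}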
While implementing it I realised that I would again have to code a resolution for when two choices are equally good. While googling a bit about that, I randomly read about Nash equilibria and mixed strategies. I was already aware of most of this, but it suddenly dawned on me that my game might contain mixed strategies which could affect the outcome of my expectiminimax calculation, and which I needed to take into account.
Wrong payoffs propagated in expectiminimax
Indeed, after searching for a bit, I found several cases where a mixed strategy is needed. The example below shows an expectiminimax situation where a mixed strategy is needed to get the best outcome.
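For a 2x2 zero-sum subgame like that, the equilibrium can be computed directly; a sketch (my own, with payoffs seen from the row player):
// Returns the probabilities for the row player's two choices.
fun rowStrategy(a: Double, b: Double, c: Double, d: Double): Pair<Double, Double> {
    val maximin = maxOf(minOf(a, b), minOf(c, d))
    val minimax = minOf(maxOf(a, c), maxOf(b, d))
    if (maximin == minimax) {
        // Saddle point: a pure strategy is already optimal.
        return if (minOf(a, b) >= minOf(c, d)) Pair(1.0, 0.0) else Pair(0.0, 1.0)
    }
    // Mix so the opponent is indifferent between their two choices;
    // the value of the game is then (a*d - b*c) / (a - b - c + d).
    val p = (d - c) / (a - b - c + d)
    return Pair(p, 1.0 - p)
}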
While this exact situation will probably not arise assuming perfect play, the result still might matter since expectiminimax depends on all subgames propagating correct payoffs.
The road ahead
I need to include support enumeration or the Lemke-Howson algorithm for finding Nash equilibria in the placement situations that require it, and then I need to somehow make expectiminimax run for the full game. Currently I can only run expectiminimax (without Lemke-Howson) in reasonable time for a game which has 6 placements left.
From AI: A Modern Approach, it seems alpha-beta pruning can be used, but it is less effective on games with chance. I guess it is worth a shot.