A Dance of Fire and Ice

I’ve never been quite so enamored with a game as I am with A Dance of Fire and Ice, an indie rhythm game whose remastered PC edition just released on Steam. It visualizes music in a simple, interesting way that I don’t think I’ve ever seen a rhythm game explore before (and I’ve played a lot of them!), and it makes tricky rhythms challenging yet immediately readable in a way I haven’t felt about the genre since the Ouendan series on the Nintendo DS.

I don’t know what else I have to say about it right now, but if you’re at all into rhythm games, you really need to watch the trailer:

Scrumwave: An Online Generative Art Exhibition

Here’s the short version: I’ve made a new thing, and it’s a unified home for my bots on the Discord platform. Discord is a chat service similar to IRC or Slack with a nice web-based client and desktop & mobile apps. With this new setup, anyone can log into Scrumwave, the “server” that hosts these bots, and see what they’ve posted over time. There’s a guestbook where you can write your thoughts, just like at a museum or art gallery, and that’s it.

You can check it out at https://scrumwave.com, or stick around for a bit of thinking around all this as well as some technical details.

Why do this?

In the wake of Twitter’s API changes last year, among other technical and political events of various magnitudes, a number of people I know cooled on the idea of putting generative art, and more broadly the automated processes we’ve referred to as “bots”, on Twitter.

I’ve done a minimal amount of work to keep mine running, although a couple have been suspended thanks to increasingly automated tooling that lets copyright holders issue DMCA takedowns against items that are clearly fair use (it’s absurd to think otherwise).

I also tried porting my bots to Mastodon, but honestly, the impact they have on that platform is much smaller and (this might be harsh but) it kind of feels like a waste of my time. I’m glad that some people see my work there, but convincing someone I know to go look at my bots on Mastodon is a whole thing. I don’t think that Mastodon is, or should be, a drop-in replacement for Twitter, so it’s not reasonable to expect that work intended for one platform will translate directly to another.

For a while now I’ve had in mind this idea of unifying my bots into one “place” online, where people could see the work passively or follow it using simple open web standards like RSS or Atom, and where particularly good output could be flagged in some way and archived. Unfortunately, that’s a lot of work for a hobby project, so it’s been languishing in my to-do list application for years.

It occurred to me, though, that a platform like Discord provides just enough features for a sort of low-fidelity version of this. Using Discord’s webhooks, I was able to add posting to a Discord server with five lines of code. In the near future, I plan to add the ability for a channel to consolidate the best posts across all the bots based on the “reaction emoji” people can assign to individual outputs, something else the Discord API should make pretty easy. If someone approached me and said that they wanted my bot to post in their Discord server (I doubt this would happen, but who knows?), it would be a pretty simple change to process an arbitrary number of webhook URLs each time a bot runs. Discord also lets you set permissions on individual channels, so I can have spaces where bots can post but no visiting humans can. It’s like an automatically enforced “quiet please” sign in a museum.
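To give a sense of how little code webhook posting takes, here’s a minimal Ruby sketch of the idea (this is my own illustration, not the actual botter-heaven code; you’d substitute the webhook URL Discord generates for your channel under Server Settings -> Integrations -> Webhooks):

```ruby
require "json"
require "net/http"
require "uri"

# Build the JSON body Discord's webhook endpoint expects;
# "content" is the message text that appears in the channel.
def discord_payload(text)
  { content: text }.to_json
end

# POST a message to a channel via its webhook URL.
def post_to_discord(webhook_url, text)
  Net::HTTP.post(URI(webhook_url), discord_payload(text),
                 "Content-Type" => "application/json")
end

# Example (URL is hypothetical):
# post_to_discord(ENV["DISCORD_WEBHOOK_URL"], "Hello from a bot!")
```

Supporting multiple servers would then just mean calling `post_to_discord` once per webhook URL in a list.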

Putting the bots on Discord does mean that seeing their work requires a Discord login, but I’ve decided I’m okay with that, because Discord uses a single account to manage your access to any number of chat “servers”. A large number of people I know likely already have a Discord account, and if they don’t, the sign-up process is simple and could be useful to them at some point in the future. It’s still a proprietary platform, but there’s some indication that they’re doing at least the bare minimum (1, 2) to address the problem of hate speech hosted on their service, something I have a hard time saying Twitter has done.

How does this work?

In the process of setting this all up, I finally accomplished a task I’ve been meaning to tackle for quite some time: merging my bots into a “monorepo”, a trendy concept in programming that I suspect is overapplied at a lot of companies, but seems fine here. All of my bots to date have been written in Ruby, and I’ve generally relied on a lot of copy-and-pasted code for common tasks like managing command-line parameters, posting to Twitter and Mastodon, and driving A/V manipulation tools like ImageMagick and FFmpeg. Over time that copy/paste has become unmanageable, and coordinating Ruby versions, gem versions, and installed binary dependencies (like ImageMagick) has been the source of a lot of unnecessary downtime for my bots. Since each bot was also its own git repo, I’ve been fairly neglectful about keeping the source code of my bots available on GitHub, even though some people have told me they’ve found that code useful in thinking about how to do similar work.

So with this I’ve started a new project, botter-heaven (with apologies to Hideo Kojima), to manage running all my bots. For the time being, the only things that are really unified, beyond some code to handle command-line parameter parsing, are a single set of Docker and Ruby gem configurations, which makes the whole thing much easier to move from one computer to another. Now that it’s all in one repo, though, I do aim to finally go back and clean up the code a bit, using the same methods for common needs like posting to Discord or adding text on top of images. This approach makes that work far, far easier than my original plan, which involved pulling code out into a Ruby gem and then adding another dependency to all my bots.
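The shared command-line parsing is the kind of thing that’s easy to deduplicate once everything lives in one repo. As a rough sketch of what that might look like (the flag names here are hypothetical, not botter-heaven’s actual interface), each bot could call one common parser built on Ruby’s standard OptionParser:

```ruby
require "optparse"

# A shared parser every bot reuses instead of copy-pasting
# OptionParser boilerplate. Flag names are illustrative only.
def parse_bot_options(argv)
  options = { dry_run: false, webhooks: [] }
  OptionParser.new do |opts|
    opts.on("--dry-run", "Generate output without posting it") do
      options[:dry_run] = true
    end
    opts.on("--webhook URL", "Discord webhook to post to (repeatable)") do |url|
      options[:webhooks] << url
    end
  end.parse!(argv)
  options
end
```

A repeatable `--webhook` flag like this is also how an arbitrary number of destination servers could be handled per run.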

So far botter-heaven is running nicely on my ancient Mac Mini (“urza”), managing nine bots posting multiple times per day, but another benefit of this project is that I’ll soon be able to move the whole operation into the cloud and retire urza, now that it can’t even install new versions of macOS.

Both Scrumwave as a Discord server, and botter-heaven as a piece of software, are absolutely works in progress. I’m not fully satisfied with either, but I’m excited to see where thinking about this work takes me over time.

Streaming Capture Card Input via VLC

I recently bought an Elgato Game Capture HD60 Pro, which gives my PC an HDMI input I can hook things like game consoles up to in order to stream/record their audio and video. Elgato provides a piece of software, also called “Game Capture”, for performing these tasks. It has a control center to mix audio levels between the HDMI input, your microphone, and the rest of your computer audio; ways to manage overlays for webcams and other capture sources; and built-in support for almost every livestreaming platform, so you can stream straight from their suite instead of using something like the more complicated (but excellent) OBS Studio, the primary choice of most Twitch streamers nowadays.

To be honest, though, part of the reason I bought the card is just to be able to play console games in a window on my computer. I’m not sure if or when I’ll stream myself doing so; I just don’t have a separate TV in the same room as my computer, and this seemed like a fun/easy way around that. Elgato’s software suite isn’t ideal for this: it puts a bunch of extra controls on the screen for the functions I mentioned above, and I’d really rather just have a simple resizable window with the game video in it.

Doing a bit of reading, I discovered that on Windows the Elgato cards make their input available as DirectShow sources (one for video and one for audio). Thinking about how I could make use of that, it occurred to me that VLC can actually read those as input streams!

I had to cobble together a few different tutorials to get what I wanted: an icon on my desktop that I can double-click to stream the game console’s video and audio in a resizable window with no other chrome and minimal lag. I did manage to accomplish it, though, so here are the steps it required. All of this assumes that you are on Windows 10, have an Elgato HD60-series capture card, have verified that you can see/hear the input via the Game Capture software or something like OBS, and have VLC installed. You can probably adapt this information for other OSes, capture cards, etc., but I can’t help you with any more than what I’ve written below.

Make a .cmd file

We need to pass a LOT of command-line parameters to VLC, far beyond the character limit of the Windows shortcut-creation dialog, so we need to make an actual script that runs the command. Open a text editor (Notepad is fine). Start by putting the path to VLC into it (this might be different for you!), along with the parameter that hides the play/pause/skip controls:

"C:\Program Files (x86)\VideoLAN\VLC\vlc.exe" --qt-minimal-view

Save the file somewhere with a .cmd extension (I have a “bin” folder in my home directory for miscellaneous files like this) and verify that opening it launches VLC. A terminal window will also appear showing the command, but don’t worry about that; we’ll deal with it later.

Add all the parameters to VLC

Here’s the fun part! VLC’s default parameters for viewing a DirectShow device assume basically nothing, and we need very specific values for a number of them. I’m going to first show you the full contents of my “elgato in vlc.cmd” file, and then I’ll break down each of the non-default parameters and tell you which of them might need to be changed for your setup.

"C:\Program Files (x86)\VideoLAN\VLC\vlc.exe" --qt-minimal-view dshow:// :dshow-vdev="Game Capture HD60 Pro (Video) (#01)" :dshow-adev="Game Capture HD60 Pro (Audio) (#01)" :dshow-aspect-ratio=16\:9 :dshow-chroma= :dshow-fps=60 :no-dshow-config :no-dshow-tuner :dshow-tuner-channel=0 :dshow-tuner-frequency=0 :dshow-tuner-country=0 :dshow-tuner-standard=0 :dshow-tuner-input=0 :dshow-video-input=-1 :dshow-video-output=-1 :dshow-audio-input=-1 :dshow-audio-output=-1 :dshow-amtuner-mode=1 :dshow-audio-channels=2 :dshow-audio-samplerate=48000 :dshow-audio-bitspersample=16 :live-caching=1
  • :dshow-vdev: This is the name of the DirectShow video input stream. There are a couple of ways to find it! The fastest is probably ffmpeg, if you have it installed on the command line: open a terminal and run ffmpeg -list_devices true -f dshow -i dummy . The Game Capture (Video) stream should be in the list. If you don’t have ffmpeg, you can use VLC to figure this out: open VLC, go to Media -> Open Capture Device... , then find it in the Video Device Name dropdown as shown below.
  • :dshow-adev: This is the same thing, but for your audio input. You can use the same methods listed above to find the correct string for this value as well.
  • :dshow-aspect-ratio: this should be 16\:9 for almost any modern device sending a signal over HDMI, but consult its manual and work out the correct aspect ratio from its native resolution. I don’t know whether the : actually needs the backslash escape in front of it, but VLC put it there when I was copying parameters out of it, and leaving it in doesn’t seem to hurt anything.
  • :dshow-fps: you can control this, but you probably want to just lock it at 60, from what little I know about these things.
  • :dshow-audio-channels: I set this to 2 to get stereo sound working. I’m not sure if you could make this work with surround sound or not; I don’t have a surround setup on my computer.
  • :dshow-audio-samplerate: 48 kHz is the frequency at which my card streams audio; if you have a different capture card, double-check whether it’s the same.
  • :dshow-audio-bitspersample: a tutorial I read said to put this at 16. I don’t really know when/why you would change this.
  • :live-caching: okay, this one is actually important. The default buffer for streaming from a capture card is 300ms, which is wayyyy too much video lag to be able to play almost any video game. When I first set this up I set it to 0, because obviously I don’t want any caching, right? Turns out zero is not a valid value, so it silently falls back to the default. If there’s a warning somewhere, it’s in a log file I haven’t looked at. Set this to 1 instead to add 1ms of video lag, which I’m sure is fine for all but the most hardcore of hardcore.

The rest of the values in this file are the defaults, which I copied from VLC; it generates the list of parameters for you in a box under “Show more options” in the dialog screenshotted above.

Make an “invisible.vbs”

So we’ve got everything working, but there’s still that ugly terminal window that opens alongside VLC, and closing it kills VLC. That’s not what I want! There are a few different ways to fix this, but the one I found involves a little bit… of VBScript. Aww yeah.

Open a new file and put this text in it:

CreateObject("Wscript.Shell").Run """" & WScript.Arguments(0) & """", 0, False

Save the file as invisible.vbs somewhere.

Make a desktop shortcut

Go to where you saved your invisible.vbs in Windows Explorer and right-click on the file. Choose Send To -> Desktop (create shortcut), then right-click on that shortcut and choose Properties. In the tab named “Shortcut”, in the box labeled “Target”, after the existing path to the VBScript file, add the path to your .cmd file (quoted, since it contains spaces). In my case the full Target box looks like this:

C:\Users\casey\bin\invisible.vbs "C:\Users\casey\bin\elgato in vlc.cmd"

For the final touch, set whatever application icon you want via the “Change Icon” button below (I went with the VLC cone icon) and give the shortcut whatever name you want (I named it “Elgato”).

That’s it! You should now be able to invoke your desktop shortcut and see the input. One thing I’ve noticed is that the connection VLC makes occasionally has a barely noticeable lag to it; I don’t know enough about this stuff to understand why it only happens sometimes, but closing VLC and trying again usually fixes it.


Here’s a puzzle at the beginning of Hexologic.

Simple, right? Tap the only space until it has two dots in it.

Okay, let’s go a little bit further. Bigger board, throw in some spaces that are locked to a given number of dots:

Nice. This feels a little bit like a Sudoku puzzle, but with a ruleset that’s optimized for a phone interface. It’s just a matter of looking for the spot that has only one possible value and cascading from there. Not too bad.

The board can get pretty big, but the logic of figuring out whether each space should have one, two, or three dots remains straightforward. A good time-killer.

Once you start throwing other mechanics into the game, though, it starts to feel different. The dots in the long row have to total eleven, but the sums of the dots on either side of the > symbol also have to satisfy the inequality. Now we’re getting somewhere.

Hexologic has thrown a handful of different mechanics at me across the 67 levels I’ve played so far, out of just short of 100 in the game. I wish there were more! The difficulty curve is extremely gentle, but the back ~half of the game is enough to get you thinking.

It costs a couple dollars in mobile app stores and apparently came out last year! I stumbled across it on the App Store a couple nights ago and have been pleasantly surprised.

I Drank The Ghost???

Slay the Spire is a deckbuilding game I have become mildly obsessed with. It has roguelike elements, some of which (like the powerful “relic” system) remind me of what I love about NetHack specifically, and which very few roguelikes indulge in (but that’s another post). One of those elements is a potion system: you’re allowed to carry a specific number of potions at a time (a number relics can augment); potions have a one-time effect; and you can buy potions in the shop alongside cards and relics.

Potions generally come in two varieties: those that affect your character and those that do something to your enemies. For example, a strength potion might give you 2 strength for the duration of the current fight, whereas a weak potion might apply the “weakness” debuff to an enemy. When you play a potion that affects your character, there’s a sort of “glug” sound effect implying that your character drinks the potion. When you play a potion that affects your enemies, a sound effect and animation imply that you break the potion’s glass vial, presumably in a way where the contents touch your enemies.


Ghost in a Jar is a potion that affects your hero. It applies the buff called “Intangible”, which reduces any amount of damage you take for a turn to 1. Let’s go back to the start of that explanation: Ghost in a Jar is a potion that affects your hero. As stated before, when you play a potion that affects your character, there’s a sort of “glug” sound effect that implies your character drinks the potion.

When I use the ghost in a jar, do I drink a ghost?

DO I DRINK A GHOST??????????????