Git LFS Sucks the Least: Prototyping and Version Control with Large Binary Assets

Here’s a story of my struggles with version control at Raktor as I push it to the limit for a variety of projects in the Unity engine. Pour a drink and commiserate with me.

I love git. My background is in handling large, complex codebases that go all the way down to the metal, so distributed version control with branching and rebasing is essential. As I juggle many different third party libraries and projects while pumping out MVPs, a robust, well-documented repo history is important to diagnose when bugs appeared and why. I use git-blame, git-cherry-pick and git-bisect regularly.

For our large repo, I’ve found git + git-lfs to be “good enough but still terrible at handling large binary assets”, so here’s my experience over the past year and a half. It’s important to emphasize that this repo is intentionally messy; we’re moving at the speed of prototyping, and I’m not taking the time to worry about whether we’ll use an asset frequently before we add it. I’m also not taking time to cull assets that we haven’t used in a while, as we are often remounting old projects. We’re not worrying about a “shippable” state, we’re worrying about a “runnable” state as we move fast and break things for demos we are running ourselves.

Here’s a look into the repo, a total of 13 GB and 504 commits to date:

VR-Theatre-WinDirStat

Partway through development, as the repo started to get very heavy, I reorganized it so that any asset content that was updated infrequently was moved to the “Dressing Room” folder, which weighs in at 10.7 GB. I fantasized that, at some point, I’d move this content out of git and manage it separately. This is mostly Unity Asset Store downloads.

Multi-platform
This repo is used to ship to 4 separate platforms (macOS, Windows, Android, iOS), and we use third-party libraries whose compatibility with the different versions of Unity we run (at the moment: 5.6.0, 5.4.3xEditorVR-p3, 5.4.2, 5.3.4) is inconsistently and naively documented.

UnityVersions

Since even individual projects need to be compiled on multiple platforms to run, I need to switch back and forth between these quickly to build and test. Switching platform or Unity version often triggers an asset re-import. Unity Cache Server helps a bit with this. However, whether the asset is being imported from scratch or “downloaded” from the cache server (I only ever used localhost), this can take up to 10 minutes on my faster Windows machine, or up to half an hour on my slower Macbook.

Whitespace
I switch back and forth between programming on macOS and Windows,
and MonoDevelop and Visual Studio have different default attitudes toward whitespace. I haven’t dumped enough time into figuring out the smoothest way to do things. I also haven’t been able to get a handle on git’s autocrlf settings in a way that “just works”. One time a bunch of ^M showed up in my .gitignore file and I had no idea why, and didn’t want to touch it.
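For what it’s worth, the combination I keep seeing recommended (hedging: I haven’t rolled it out across this whole repo yet) is to set autocrlf per-OS and let .gitattributes decide which files are text:

$ git config --global core.autocrlf true    # on Windows: LF in the repo, CRLF in the working tree
$ git config --global core.autocrlf input   # on macOS: LF in the repo and in the working tree

plus a single line at the top of .gitattributes, above the LFS rules, so git normalizes anything it detects as text and leaves binaries alone:

* text=auto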

Unity Cache Server
While Cache Server has been great, a different version ships with each version of Unity, and it’s not clear to me how far backwards or forwards a given Cache Server version is compatible. Note that since my machines move around physically, I’m only ever using a localhost cache server, and haven’t shared one between machines.
Scary Anecdote: I once had two copies of the big repo on the same computer, for two separate versions of Unity. (This was to handle another problem I’ll get to later.) Unity Cache Server was running, and both versions of Unity had been linked to it. I opened repo A with Unity version A, then closed it with no changes. Then, I opened repo B with Unity version B, then closed it with no changes. Then, I opened repo A with Unity version A again, and Unity downloaded changes from the cache server! What’s going on there!? I wish it was more transparent what the Cache Server was doing.

Special Characters
Special characters that appeared in an asset downloaded from the Unity Asset Store have been the bane of my existence and will not die.

Here’s the results of git status immediately after a fresh clone on macOS:
Screenshot 2017-05-25 19.14.01
Here’s an asset appearing twice in Unity because it has a special character:
Screenshot 2017-05-25 19.15.52

These files show as modified even when they haven’t been touched, and they re-appear every time I have to git clone, or navigate forwards or backwards over the commit where I made changes to them. I don’t know how to fix this problem, or how much effort I should put into it. I’m guessing it’s a macOS <-> Windows compatibility issue, but to solve it once and for all, I think I’d need to go and edit git history to excise them from ever existing, right? For all I know, the special characters that refuse to die may also persist in the Library or Cache Server cache and resurrect themselves after I naively believe they are gone, like some cyberpunk version of The Thing. I’d love advice on this.
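If it is the macOS <-> Windows issue I suspect, the likeliest culprit is Unicode normalization: macOS’s filesystem stores accented filenames in decomposed form, so the “same” name is different bytes on each OS and git sees a phantom change. A sketch of what I’d try first (the asset path here is made up):

$ git config core.precomposeunicode true          # macOS-only setting: hand git precomposed (NFC) filenames
$ git mv "Assets/Décor.png" "Assets/Decor.png"    # or just rename the offender to plain ASCII and commit

That doesn’t rewrite history, so the file would still flip-flop when checking out older commits, but it should at least stop new damage.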

Git LFS: Large File Storage
Git-lfs is, in principle, a great idea: for big binary files that aren’t going to change often, keep them outside of the regular git tree and only download them as needed. Don’t store the entire binary files’ history in the .git directory. GitHub charges a small premium for Git-lfs bandwidth, and if it worked 100%, it would be totally worth it ($5 per month for 50 GB of bandwidth). Git-lfs is open-source and managed by GitHub themselves, and clearly aimed at keeping git-familiar devs like me using git instead of switching to a more game-tailored version control system.

Installing and running git and git-lfs on Windows is fucked. By way of explanation, I’m used to Unix-based systems, where there seems to be one agreed-on method to install and access programs. On Windows, I had to resort to using the GUI app GitHub for Windows to install git because it sets up GitHub’s 2FA right, and I couldn’t get the keys (via PuTTY, etc.) working without it.
When uploading or downloading large assets, sometimes the network would hang, or the git operation would fail for some other reason. This appeared to leave the repo in a corrupt state. While git status would finish execution, files would show as changed even if they hadn’t been, and git checkout . would hang indefinitely, even on relatively small files like a jpg. Poking around in the git-lfs issues, it appears that this is due to smudge errors (smudging is the checkout-time step where the git-lfs pointer stored in the repo gets replaced with the actual file content). I would end up with a repo that was corrupt due to an unrecoverable smudge error. Hey, take a look at how many corrupt repos I have, each of which is ~13 GB and required me to freshly download all of those hot gigabytes!

vr-theatre-corrupt
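For what it’s worth, the least-painful recovery path I know of is to keep the smudge step out of the critical path entirely: clone the pointer files only, then fetch the actual LFS objects as a separate step that can simply be re-run if the network hiccups. A sketch, with a made-up remote URL:

$ GIT_LFS_SKIP_SMUDGE=1 git clone git@github.com:example/vr-theatre.git   # checks out tiny pointer files only
$ cd vr-theatre
$ git lfs pull                                                            # fetches the binaries and swaps them in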

To avoid having to freshly re-download, I tried “backing up” my repo periodically by zipping it, but this seemed to cause even more problems with OS-specific files getting added on unzip. Zipping itself took ~15 minutes due to the sheer number of files (29,542) and folders (1,460).

On further investigation, git-lfs 2.0 supposedly handled smudge error recovery much better. However, git lfs version showed I was on 1.5.5. I upgraded to git-lfs 2.0 and then continued to diagnose issues, but kept having them. Imagine my gaslight-y horror when git lfs version revealed I’d been reverted to 1.5.5 somehow! Imagine how horrifying it was to discover this when I was also trying to diagnose other reasons why the repo was corrupt, and everything I tried had processing times from 15 minutes to an hour!
Turns out that the shell launched from GitHub for Windows uses the git-lfs installed at %UserProfile%/AppData/Local/GitHub/lfs-amd64_1.5.5/git-lfs, and if you update it to a later version, like I did, it gets reverted! So there’s no way to keep GitHub for Windows on a newer, more stable git-lfs version.
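A sanity check I wish I’d started running much earlier: ask each shell which git-lfs it will actually execute before trusting it.

$ where git-lfs      # Windows’ equivalent of which; lists every git-lfs.exe on the PATH in resolution order
$ git lfs version    # the version this particular shell will actually run
$ git lfs env        # the endpoint and smudge/clean filter config currently in effect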

Next, I installed git-lfs via the terminal offered through Sourcetree. Somehow, first installing Github for Windows, and letting it make 2FA settings, and then installing Sourcetree, and then installing git-lfs 2.0 via Sourcetree’s terminal, made it work. Before, when I’d straight installed Sourcetree, I couldn’t get it to work without GitHub for Windows setting up 2FA right. Yes, I know about GitHub’s auth tokens and I know Sourcetree 1.8 and 1.9 sometimes cached server passwords in a buggy way.

(Let’s take a breath and remind ourselves that my goal in all this is to get to work, not diagnose git issues.)

As a final git-lfs puzzle, git-lfs periodically seems to “discover” files that were already in commit history and should have been added to lfs a long time ago, but somehow never were. Is there some git-lfs-doctor I can run? I’d love to know.
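Short of a real git-lfs-doctor, the closest spot check I know of is comparing what LFS thinks it’s tracking against what’s actually stored as a pointer in history. A sketch (the .fbx path is hypothetical):

$ git lfs ls-files                                      # every file LFS is tracking at the current commit
$ git cat-file -p HEAD:Assets/Props/Chair.fbx | head -c 100
# a healthy LFS-tracked file prints a tiny pointer starting with
#   version https://git-lfs.github.com/spec/v1
# binary garbage here means the file was committed before its pattern landed in .gitattributes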

FYI, here’s my .gitattributes:
$ cat .gitattributes
*.psd filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text
*.tga filter=lfs diff=lfs merge=lfs -text
*.tif filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.fbx filter=lfs diff=lfs merge=lfs -text
*.xcf filter=lfs diff=lfs merge=lfs -text
*.bytes filter=lfs diff=lfs merge=lfs -text
*.dll filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.7z filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*LightingData.asset filter=lfs diff=lfs merge=lfs -text
*.exr filter=lfs diff=lfs merge=lfs -text
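Each of those lines is just what git lfs track writes, so adding a new binary type is one command plus committing the updated .gitattributes (the *.ogg pattern here is only an example; we don’t actually track it):

$ git lfs track "*.ogg"       # quotes matter, so the shell doesn't expand the glob first
$ git add .gitattributes      # track only edits .gitattributes; it still needs to be committed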

Alternatives: Plastic SCM
I know there are game-dev-oriented version control systems like Perforce, but I’ve been resistant because git has been so powerful, and anything I read about the others indicates that they aren’t as capable.

I’ve had Plastic SCM strongly recommended by a developer I trust, so I gave it a shot over a game jam, taking a copy of my existing big repo and making 98 commits over 72 hours, as a solo dev.

Here’s a peek behind the curtains at my commit history:
cm usage

Reactions:
– The output of cm diff is not helpful.
– Files are often labelled as “changed” even if they’ve only been “checked out” and there are no actual changes, not even whitespace.
– I don’t like that commit labels are incrementing numbers, not hashes. I didn’t try branching and merging, but this doesn’t make me optimistic that the results will be easy to read.
– Pre-commit, there’s no git-like concept of staging. While I’m working in git, I use staging to indicate to myself which parts of a current chunk of work are “good to go” versus “still messy/working on it”.
– The Plastic SCM client I used, as far as I can tell, allowed for only one “active workspace”, aka repository, at once. This limitation is pretty insane. While I’m all for big mono-repos, when I’m diagnosing the behaviour of external libraries that have their own git history, I need to be able to examine and operate on multiple histories at once.
– Plastic SCM’s ignore format is not as regex-friendly as git’s .gitignore, so I couldn’t just rename my existing ignore file and get going.

Even Other Alternatives
I refuse to make my own version control system like Jon Blow. My needs as a developer can’t be that insane, right?

Other Question: What shell should I be using on Windows?
Like I said, I’m used to using macOS or *nix systems, which have a one-stop-shopping shell. On Windows, we have: cmd, PowerShell, PowerShell opened via GitHub for Windows (which adds GitHub for Windows’ git to its path), and the MINGW64 terminal launched by SourceTree (which, oddly, is missing fundamentals like man and which). Finally, there’s Bash for Windows, which installs its own Unix environment. However, anecdotally, I’ve found any git operation via Bash for Windows takes about 5x longer than via PowerShell. I’m not sure if this is due to some layer of abstraction, but it makes it pretty unusable. Also, none of these shells support copy-and-paste as elegantly as macOS does, so I automatically feel disdain toward them.

Back to Git-LFS: As I was trying out different Windows shells, I once ran git checkout . on a repo using lfs in a git environment that didn’t have lfs. This corrupted the repo unrecoverably, so I had to download all 13 GB from scratch yet again. Please: I’d love a command like git-lfs-doctor or git-lfs-unbreak that can diagnose and repair repos.
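In lieu of that, the guard I now try to remember before touching a working tree from an unfamiliar Windows shell is to confirm the LFS filters are actually wired up there:

$ git config --get filter.lfs.smudge    # should print something like: git-lfs smudge -- %f
$ git lfs install                       # if it prints nothing, this (re)writes the filter config first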


State of Virtual Reality Venues in Toronto

For the last few years, I’ve been a “VR tech professional”, which means I have, on my desk, various pieces of Virtual/Augmented/Mixed Reality equipment. These will get cheaper, but at the moment are dubious buys for the average person. Despite companies’ best efforts, set-up is still a confusing pain.

Some room-scale Virtual Reality hardware like the Vive or Oculus Touch requires a dedicated, calibrated space. These don’t exist in a lot of homes. When I was making a multiplayer Kinect game a few years ago, one of the limiting problems was that the clear space in the average person’s living room just wasn’t enough.

So let’s talk about ticketed Virtual Reality Venues, both temporary and permanent, using specific examples in Toronto. VR venues can serve a few purposes:
– equipment rental
– a dedicated, professional space setup
– a night out away from your living space

Vivid VR ran a pop-up Virtual Reality Cinema on Dundas Street West in the Summer of 2016. $20, 1 hour. I was really excited to see their approach; having incorporated virtual reality into a couple performances with Raktor so far, I wanted to see what a company that called itself a “VR Cinema” would do. The space was filled with swivel chairs. They handed out Gear VRs preloaded with some 360 films and over-ear headphones. They told us to put the headset on, and then only one of the earpieces, so we could hear their instructions. The organizer then started a countdown and we were all to point at the same movie icon in the Oculus Video app and press the GearVR touchpad to start it at the same time, and then put the last earphone on. So we put the earphones on and watched “together”, but actually isolated.
I was disappointed by this – I was expecting bespoke multi-headset-synced-playback software. I was there with a friend and I wanted us to be able to talk to each other during the movie and call attention when we noticed interesting stuff going on. I can’t recall the movies themselves, but they were guilty of needlessly incorporating movement; in the first scene we were mounted on top of a car.
I had already owned a GearVR at this point for at least 6 months, so the value proposition (for me) of this cinema was negligible. The tickets were totally sold out, even though it felt like merely equipment rental.

The Toronto International Film Festival (TIFF) ran a summer 2016 VR event series called POP which has, so far, been the best-run VR event I’ve ever seen. They based it out of a gallery space in their big King Street East building. Cleverly, each Vive or Oculus station used a ceiling-mounted cable with a spring-loaded dog leash to keep the cable out of the way. Entry price was $23.75 for a few dozen VR experiences, and an attendant at each station ensured visitors did not get disoriented and things ran smoothly. Raktor premiered our asymmetric multiplayer storytelling experience Inverse Dollhouse here, and having an attendant I could train to run the experience and smooth any bumps for new people was great. TIFF POP was very successful ticket-sales-wise, with each event in the series selling out weeks in advance.

TIFF POP

I currently live in Kensington Market, within 500 m of two (TWO!) VR-only arcades.

Toronto VR Games, at 55 Kensington Ave., has a genuinely cyberpunk-y feel. It’s in a former Chinese fruit market, keeps some of the old signage, and has added some sick dragon art.

Toronto VR Games

Inside, it’s lit like a submarine (dark red) and the VR station dividers are black curtains. It feels exactly like the place I’d go if I wanted myself to become emaciated in VR while I did some 96 hour hack to steal a corp’s info in a William Gibson novel. There’s a fridge with sugar drinks I’m sure I could pay the staff to pour in my mouth so I wouldn’t have to take my headset off. This place has mostly Vive stations, and some Oculus stations in the back, but no Oculus Touch yet. Currently $28.25/hour.

To contrast, VRPlayIn at 294 College Street is well-lit and feels like somewhere I could take a risk-averse suit-wearing person, or the kind of person who drives their kids around in a van. VRPlayIn quietly opened a couple of weeks ago.

vrplayin-store-front

VRPlayIn only has Vive stations, and is currently $29-$39/hour depending on day/time of the week, though considering the place is so nice I feel they could charge much more. They even have a large private room that legitimately feels like a private karaoke room. I’ve dreamed of that “bookable holodeck” setup for a few years now and this is the first time I’ve seen it.

Apparently, VRPlayIn is a wing of VNovus, a VR software studio. VNovus has made an in-headset VR app launcher and intro experience. This seems redundant to the work that Steam and Oculus have done, but I suppose everyone has their own ideas of what users’ first contact with VR should be.

Toronto VR Games vs. VRPlayIn: VRPlayIn deserves your money more because they are a genuinely nicer space, but if you want to be confronted by cyberpunk aesthetic realness, Toronto VR Games is for you. They both have a very large selection of experiences, though if you want to play something specific, you can check in advance.

There’s another upcoming VR Venue: House of VR opening May 6th. An article claimed they were Toronto’s first VR lounge; tbd if they’re lounge-y enough to not count as an arcade. They do promise to have at least one Mixed Reality green screen area – though probably not as good as the state of the art: LIV.

These chairs though. 😍 @_houseofvr #comeseethefuture #interiordesign (Instagram post shared by Stephanie Payne, @sppayne)

Check out the @citynewstoronto exclusive on #houseofvr tonight at 6!!!! #citynewstoronto #vr #wefamous (Instagram post shared by House of VR, @_houseofvr)

Raiders’ e-Sports Centre is a surprise: it looks, sounds, smells and feels like a sports bar, but it’s e-sports, not, like, actual sports. Big-screen TVs on almost every wall show mostly League of Legends, but also various Twitch streamers’ channels. A big area with leather booths serves standard beer and pub food. To break from a normal sports bar, there are a few dozen bookable desks with PCs, just like an internet cafe. There are a few more bookable booths with multiplayer game consoles. There are a couple of VR stations with Vives, called the Atomic District. Pricing is $25/hour. Unlike the other current VR arcades in Toronto, you can actually get food and alcohol here, so it’s approaching a real party venue. Here, at least one of the Vive setups is surrounded on 3 sides by open space, unlike being in a booth at VRPlayIn or Toronto VR Games, so if you want to be performative, this is the spot.

Electric Perfume is a “studio and event space” near Pape Station that, full disclosure, I’ve run multiple events out of and taught workshops at. With a projector, wraparound white walls, and a single well-constructed Vive setup it’s the most holodeck-y of any setup I’ve seen so far. If you want to book out a space to exhibit something beautiful, this is the spot. In the land of traditional theatre, “black box theatre” is a space with totally black walls and drapes that you can make look like any environment with lighting. Electric Perfume is a perfect “white box theatre” space, if you bring your own projectors for the other walls. In the future, I hope for “green box theatre” spaces for wraparound mixed reality.

Electric Perfume

Professional VR Developer Post-Note: If I want to run VR events or playtests with custom or pre-release software, I need to be able to install my own executable or bring my own machine(s) and plug it into their VR rigs. So far, I’ve asked VRPlayIn about this, and they were a little resistant about me installing my own software on their machines. I’m hoping that House of VR or another venue is less so – this would enable VR release parties and other special events. I or someone else shouldn’t have to set up an entire temporary exhibit like TIFF POP when we want to show off something non-standard.


Proposing the (Bill) Paxton Number

The Erdős Number measures how far you are from mathematician Paul Erdős via coauthoring academic papers*. The Bacon Number measures how far you are from actor Kevin Bacon via costarring in films.

In honour of the late Bill Paxton, I now propose The Paxton Number.
Bill Paxton has the totally-not-dubious honour of being killed, on screen, by The Terminator, an Alien, and a Predator.

See this compilation:

Paxton Number: Bill Paxton has a Paxton Number of zero. Actors who have been killed on-screen by a monster that has also killed Bill Paxton have a Paxton Number of 1. More generally, an actor’s number is m+1, where m is the lowest Paxton Number among other actors killed by the same monster. Every other actor’s Paxton Number is infinity.

Example:
Bill Paxton‘s Paxton Number is Zero, and he was killed by Predator in Predator 2.
Jesse Ventura‘s Paxton Number is 1, as he was killed by the Predator in Predator. He was also killed by Poison Ivy, as played by Uma Thurman, in Batman & Robin.
Ralf Moeller‘s Paxton Number is 2, as he was also killed by Poison Ivy in Batman & Robin. He was also killed by The Scorpion King, as played by Dwayne “The Rock” Johnson in The Scorpion King.
Randy Couture‘s Paxton Number is 3, as he was also killed by The Scorpion King, this time played by Michael Copon, in The Scorpion King 2: Rise of a Warrior.

Other Notes:
The rule is based on the monster in the story, not the actor. So, we can’t count Paxton Number when actors are killed by Arnold Schwarzenegger when he plays Terminator equivalently to when he plays Conan the Barbarian – those are separate monsters.

Looking For:
There are several murderously prolific villains. Let’s find a finite Paxton Number for one of their victims, and then we can give finite Paxton Numbers to whole swaths of people. They are:
– Magneto
– Godzilla
– Darth Vader
– Dracula


Collab credit: Cian Cruise & Jan Streekska

Possibly useful resource: cinemorgue.wikia.com

*Fact: My Erdős Number is 4, via co-authorship through Ravin Balakrishnan, Michael Chi Hung Wu, Maria M. Klawe, and Paul Erdős (co-authorship graph).


“What’s the most insane technical thing you did that actually worked?”

Or, How I Fixed A Real-Time Image Transmission Protocol For A Live Event By Making A Numbers Station

The evening before the GDC round table panel on Location-Based Stories, I was at a pub with most of the panelists. I ended up there because I’m one-half of Playlines with Rob Morgan, who was on the panel.

As we got deeper in beers, we started telling war stories of the craziest things that went well or poorly with past projects in the wild. Here is one thing I did that went exceptionally well, but shouldn’t have.

~~~

It was autumn of 2011. A team of professors and students at the University of Toronto and OCAD, myself included, was doing a project for the all-night art event Nuit Blanche. Tweetris was a multiplayer game where two players raced to match a shape from the game Tetris, as judged by a Kinect:

tweetris-kinect

The first person to match the shape, and hold it stably for a period of time, had their picture taken and tweeted to our live feed, @TweetrisTO, which you can still see! Then, anyone during the event could go to a now-defunct URL and actually play Tetris with the bodies of players. You can see what this looks like in the middle of this video.

Getting down to the wire, we had the web app pulling the Twitter images correctly. However, over the course of any game of Tetris, blocks have parts removed as lines are cleared. The dev building the web app was also producing the event, and we couldn’t find a JavaScript image crop function that did exactly what we wanted (it was also 2011, so any advice you give now may be out of date). The images captured of the game winners were meant to be split into 3×4 blocks, and we were hoping that could be done on the web app side. However, the web app was not doing what we wanted. The image transfer “backend” was already Twitter, and we had no time to build something else.

So, instead of an image crop web-app-side, we built a numbers station with a secret Twitter account. A numbers station is one of those odd shortwave radio stations that have transmitted numbers, spoken aloud, for decades, presumably to pass intelligence information to spies. You should really read the Wikipedia article.

The Tweetris team had already built the capability to download images from Twitter into a web app, so we changed our backend to use a secondary, secret Twitter account, to which the Kinect app tweeted pre-cropped square images, along with formatted CSV describing which Tetris piece each image corresponded to, including sub-piece coordinate and rotation. Here is what that looked like:

TweetrisTOShh

Unfortunately, all of the twitpic links for this account appear to have been cleared out. But, if you can imagine, these were all random portions of grainy images of human bodies; half of a face, a hand and a sleeve, a shoe, etc. All somewhat off-centre and with small amounts of motion blur. Miraculously, even though this account received 4x the image tweets of the main TweetrisTO account, it did not get flagged as spam during the live 12-hour event. We obviously didn’t announce the secret account or encourage anyone to follow it.

So, there you go. Your backend not working? No worries – just make a numbers station in plain sight!

Tweetris was really successful, leading to a talk I did at Gamercamp, and several subsequent projects and papers, including a Kinect multiplayer game I worked on for a while.


Questions to Ask after an Underwhelming Art Experience

Is this at 90% of being amazing and needs to be pushed/polished just a little or is it actually at 20% and there’s a ton more work?

If it’s a long way from being good, is the path to success clear or unclear?

If the extra work to make it good could be put in, does it then become uneconomical?

Do you think that this experience as it stands is sufficient for most audience members, and thus there’s no need to address your concerns?

If it’s underwhelming there’s 2 options:
1. It’s great but not for you
2. It’s not great for anyone
(#2 means it’s a failure for better or worse)

Could you salvage some parts of the experience into a more contained version that is higher quality?

Were your expectations harmfully different from what you experienced? If so, did these unhelpful expectations come from part of the intended preparation for the experience (e.g. marketing) or from yourself?

Is the essence of the experience (narrative, dynamic) still interesting despite the execution? Could this compelling core idea be implemented in a better way?

Is there a chance that your opinion could be influenced by an ‘off-night’ performance by any of the live actors or implementers?

Was there an arc (beginning, middle, and end) to your experience? Do you feel the experience could have been improved by the addition/sharpening of such?

Did you feel snubbed or irritated by the subject matter? Do you feel like the topic was irrelevant or impersonal to you? Is it possible that other individuals might agree/disagree?

Do you feel distanced from the experience? Could it be made better by making you feel more involved?

What message/effect/phenomenon were they trying to convey? Could it be rephrased in a more effective and articulate way? How many versions do you think they considered?

Did you have fun doing what you did? Did you feel yourself come through in this work? Did you reach any new heights or cover new territory?

What mark were they trying to hit with their (desired?) audience? Does this relate to them well?

Perhaps something about comprehending the material – was it clear what the performance was trying to communicate? Was the plot / meaning / theme ever opaque in a way that seemed unintentional or didn’t add to the work?

Tied to the subject matter – is this a genre / medium that you particularly dislike?

– Please suggest additions –

Contributors: Joy, Randy, Patrick, Dat, Katy


NYC Immersive Theatre Review

I finally set aside time over New Year’s to see all the immersive theatre in New York City that people have been bugging me to see. Here’s a terse listing of them all. NOTE: all of these shows are great and worth seeing. With my comments, I’m not trying to convince people whether or not the shows are worth seeing, or to provide a helpful summary of any of them. These are primarily about reflecting on what I care about and what I felt experiencing them.

Sleep No More

snm

The primary reason I went to New York. I saw it twice. I enjoyed that, after the first show, I felt I had experienced enough content to be worth the ticket price, yet also had the feeling that there was so much more to experience that it was worth going again. That is a difficult balance to strike. The first show, I mostly avoided following performers, as most audience members do, and instead wandered the levels. In the last 10 minutes of the show, I came across a floor I had not seen and frantically tried to explore it as quickly as possible. Very cool to be struck with the immensity of the content. The second time I saw the show, I mostly explored that level, and stayed stationary as scenes flew by me. I even managed to get to a private scene (see cheeky lipstick smudge above). Sleep No More is very sparse when it comes to spoken language, which I suppose helps when you come across scenes in the middle of them. With no spoken language, you don’t feel that you’re missing out on any factual narrative. I don’t think that’s the kind of puzzle-box-y show I’d ever want to make, but it is a clever hack. I don’t particularly care for innovative dance, though watching hotties use their bodies in interesting ways is nice, so the joy of Sleep No More for me was treating it as a clever content choreography puzzle. I would definitely go again.

Then She Fell

I was told that if I didn’t like Sleep No More, then I would like this one better. Then She Fell has gorgeous costume and set design. The density of production quality is so much higher because you’re guided on much more personal, intimate journeys. However, it somehow didn’t feel like theatre for me. All the small scenes were about intimacy of quirk, and I did not find myself caring enough about the characters and their arcs for it to feel like actual theatre – just static representations of well-dressed characters in a pretty time and space. The coordination of the audience moving through the space is interesting; a nice contrast to Sleep No More’s more chaotic approach.

Grand Paradise

Grand Paradise has my favourite structure but my least favourite plot. It’s set in a tropical paradise in the 70s, with several proxy audience members also on vacation. It’s about a decadent time away from the worries of our yuppie lives. I had drinks poured for me several times, had actors invite me to slow dance, and spooned with one in a wooden cabin on a beach set. Which is all nice and interesting, but again the static-ness of the dramatic experience was frustrating to me. It just feels like set and moment design, and a string of interesting moments does not add up to a plot that, you know, should really be smacking me around psychologically. The soft touch of the plot made it feel like I was at a theatre spa.

However, the structure was quite clever. The audience had a free wander that was controlled a little more tightly than in Sleep No More (in Third Rail shows, you aren’t allowed to open doors). However, you’d get pulled aside for more intimate scenes. The leis we were innocuously given at the start served as markers for the actors of whether we had received an intimate scene or not. The intimate-scene-to-audience-member ratio was roughly 4:1, but using the leis as markers ensured that everyone had at least one intimate scene near the beginning, without it coming off as too controlled.

The Encounter

This used a binaural microphone head on stage, so the solo performer could whisper in your ear, and do other interesting effects. Everyone in the audience wore earphones. The effect was amazing, and I’m frustrated that I don’t see this used everywhere. Will probably try to use them in my own project. It’s simply magic.

CVRTAIN

CVRTAIN

This is actually “VR Theatre”. Like, actually! Sort of. You go inside a little “backstage” area, put on a Vive headset, and in front of you in VR a virtual curtain opens onto a large Broadway-like theatre with an audience applauding you. You make grandiose gestures with the Vive controllers, which leads to different types of cheering responses from the adoring audience. As the VR curtain opens, a real red curtain also opens onto the gallery space the installation is set in, exposing you to whoever else is in there, either just hanging out or waiting for their own turn. When I visited midday, there were about a dozen people. However, you don’t see or hear these people while playing the game, yet they cheer for you anyway. Since you are immersed in the game, and the VR headset takes on the properties of a mask, you end up acting more confidently absurd to these real audience members because you think you’re acting for fake AI audience members. A sort of Ender’s Game of acting. The game seemed to be about guessing which gestures led to certain cheers. The screenshot after my run-through shows which gestures I “found”. CVRTAIN definitely follows one of the themes Raktor cares about, which is tricking non-performers into being performers.

The Accomplice

A lovely touring mystery theatre. I’ve done a few of these before, and this felt like one of the first ones I’ve seen with a really good production value. We were hilariously hampered by snow.

The Colour Purple

I saw the closing show. The effects of multiple generations of poverty and abuse are intense. I was reminded of reading Octavia Butler’s Wild Seed from earlier this year. By total surprise, Bill and Hillary Clinton were in the audience like 40 people away from me.

Blue Man Group

Blue Man Group

An absurdly polished, well coordinated funny show. Like seriously the level of polish is insane. The show is 100% on rails, but there’s some very clever audience interaction mindfucks. These folks are absurdly talented.

Ninja Restaurant

This is sort of like a higher-end, more specific Chuck E. Cheese, built on the strong belief that ninjas are what the internet believes ninjas are. There are constant jump scares and yelling, which for the first 15 minutes is eye-rollingly terrible but then rotates around to being hilarious. I highly recommend it for the absurdity, unless you’re, like, too cool for fun or something.

Top Secret International – Rimini Protokoll

I saw this as it’s a higher-budget, less narrative-driven version of Playlines‘ work. They made some quite clever choices and have obviously spent a lot more engineering time on debugging tools, but it’s nice to see that the finickiness of Bluetooth beacons is just as hard for them as it is for us. They also implemented some ideas that we were thinking of doing, but having seen them done in person I don’t think they make sense for us, which is perfect. Good field trip.


Books Read 2016

The best part about books this year was discovering Octavia Butler. But the most important book I read was definitely Hillbilly Elegy.
________

Aurora – Kim Stanley Robinson – Jan 2, 2016

Dawn – Octavia E. Butler – Jan 26, 2016

Her Smoke Rose Up Forever – James Tiptree, Jr. – Mar 20, 2016

Matter – Iain M. Banks – Mar 22, 2016

Queers Destroy Science Fiction – Apr 12, 2016

The Drowning Eyes – Emily Foster – Apr 13, 2016

Harry Potter and the Methods of Rationality – Eliezer Yudkowsky – May 18, 2016

Escape the Game – Adam Clare – May 20, 2016

We Stand On Guard – Brian K. Vaughan – May 26, 2016

Superman: Red Son – May 29, 2016

Tuf Voyaging – George R. R. Martin – June 10, 2016

The Strange Case of Ambrose Small – Fred McClement – June 11, 2016

The Hero With A Thousand Faces – Joseph Campbell – June 12, 2016

The Compass Rose – Ursula K. Le Guin – June 25, 2016

The Clockwork Rocket – Greg Egan – July 15, 2016

Dust – Elizabeth Bear – July 23, 2016

Snow Crash – Neal Stephenson – July 28, 2016 (re-read)

A Fire Upon the Deep – Vernor Vinge – August 27, 2016

Blackmoor, the First Fantasy Campaign – Dave Arneson – Sept 4, 2016

How Not to Die – Michael Greger and Gene Stone – Sept 15, 2016

Alibaba’s World – Porter Erisman – Sept 19, 2016

Death’s End – Cixin Liu – Sept 27, 2016

Sirens of Titan – Kurt Vonnegut – Oct 9, 2016

Tiki Mugs: Cult Artifacts of Polynesian Pop – Jay Strongman – Oct 22, 2016

Diamond Age – Neal Stephenson – Oct 28, 2016 (re-read)

Wild Seed – Octavia Butler – Nov 6, 2016

The Year Of The Flood – Margaret Atwood – Dec 6, 2016

Bulk Food – Peter Watts & Laurie Channer – Dec 8, 2016

Axiomatic – Greg Egan – Dec 20, 2016

Hillbilly Elegy – J.D. Vance – Dec 31, 2016


IRL Deviations from Snow Crash and The Diamond Age

ADDED POST-US-ELECTION UPDATE AT BOTTOM

I first read Neal Stephenson’s duology of future cyber/*punk novels Snow Crash and The Diamond Age a decade ago. As my personal aspirations increasingly resemble some of the elements in the novels, I’ve given them a re-read. I especially want to look at how the future in the novels resembles ours, or doesn’t, which like most applied futurology can lead to emotions of excitement, relief or disappointment.

2016-11-07-08-07-18

I shouldn’t have to do this and you should maybe skip this paragraph, but I want to add a disclaimer that the point of futurological fiction (or non-fiction) is not to attempt to accurately predict the future, and we shouldn’t judge fiction for how accurate it is, not because it’s unfair, but because it misses the point. Also, just because an author writes a piece of fiction doesn’t mean that they believe or want it to be real.

Some terms used in the books:
Metaverse – a shared 3D world that anyone can go to. When you are there, you have a virtual representation of yourself called an avatar. This is like if Second Life had achieved the same widespread popular domination as Facebook.
Phyle/Tribe/Distributed Republic – the continental United States, and seemingly the rest of the world, has fractured out of its formerly-defined geographical nations into loosely-defined city-states which are more like members’ clubs you apply to. Some of these own neighbourhoods or areas, but the jurisdiction is hazy. You can easily emigrate from one to another and trade with one another through something called the Common Economic Protocol.
Nanomachines and their ubiquity – In The Diamond Age, nanomachines, microscopic robots serving a variety of purposes, are everywhere. They’re as ubiquitous as data packets over cellphone networks. An employer might require that you inject some of their nanomachines into your blood.
Thetes – those who aren’t formally attached to a tribe that has its shit together, aka the poor. In the modern world, there are many sociological systems, and if you manage to fall out of them, you’re way out and considered an externality. Some of these people cannot read, yet are surrounded by matter compilers. The ultimate high tech, low life.

STUFF THAT FEELS INCONSISTENT FROM THE REAL WORLD:
– In the Metaverse, most people choose to look like themselves. Why? This is very inconsistent with almost anything else we’ve encountered. In Second Life, looking like yourself is considered an oddity.

– In real life, information is not valuable. People will not pay money for it. In Snow Crash, Hiro makes money selling gossip-ish information that he manages to capture. There does not seem to be much of a mechanism to verify the info he captures. Maybe he has like a 5-star eBay rating.

– In real life, the internet connection is very bad. I’ve collaborated with and bounced between San Francisco, Toronto and London in the past year and it’s shocking how bad something as old as videoconferencing is. (Ranting) and this is on wifi connections! In Snow Crash, Hiro, from his car on a highway, is jacked into the Metaverse and getting live audio and motion data from dozens of online users all over the world without issue. Get your shit together, real life.

– In real life, there is not one social media platform; there are several and they have sort-of-begrudging ways to import data from one to another, with minor growth-hackery gotchas to get you to prefer their network over someone else’s. This door-to-door salesperson nervousness is what you have to keep in the back of your mind when interacting with modern internet megacorps. The Metaverse is represented as the only place, and everyone puts their stuff in that space. We know that one location in the Metaverse is a for-profit company (The Black Sun) but we don’t know if the Metaverse is some megacorp that killed all competition or (sounding too good to be true) civilization managed to get its shit together and made an open standard 3D worlds internet implementation that wasn’t terrible.

– Phyles as the major socioeconomic entities make sense (being a world traveller myself who has to deal with odd border issues), but them being divided primarily along very old ethnic/cultural lines seems odd, like Stephenson is really into ethnicity-based racial traits in his future D&D RPG and he has these really racist character sheets he isn’t sharing with us. This fetishization of ethnicity also appears in the last third of Seveneves. To me this seems uncreatively anti-future.
On the other hand, I have trouble believing that anarcho-capitalist approaches such as phyles will succeed in a major way that has a visibly geographical effect. If you have property at an address, it seems that that address could be seized by an entrenched power. Why don’t larger phyles simply eat smaller phyles all the time?

– Where are the memes? In real life, internet memes are passed around at high speed as an in-crowd intentional incomprehensibility. They’re used as indicators of being part of a group, as a relaxing joke, and as a method of abuse. They take the form of text and images and more and their absurdity often correlates with the enthusiasm of their transmission. So where are the internet fads and memes? By comparison, Stephenson’s cultural worlds feel very static compared to ours.

– Where are the self-driving cars? In Snow Crash, there’s a paraplegic hacker who still has to drive his car semi-manually. It seems like we’re only 5 years away from them in real life. But, that may turn out to be a disaster.

– In real life, people get sim-sickness in VR easily. Maybe the characters who use VR in Snow Crash and Diamond Age are genetically bred so they don’t experience it? Maybe the headsets are way better? Where are the people who weren’t able to be useful workers because their bodies just didn’t work with VR? I’d say the thetes, but they seem to use VR for trash entertainment frequently.

– In Diamond Age, it’s a major multi-year cryptographic hunt to track down a certain character (Miranda, Nell’s surrogate mother). There are much better social engineering techniques to guess the performer behind a mic than brute-force breaking crypto, particularly if both parties are willing. While tech will always get better, I think the weakest link in security is going to be people for the next 100 years or so until culture catches up.

– Where is the AI in either Snow Crash or Diamond Age, the latter of which is set about a century in the future? When most people ask this question, they assume there will be god level AIs doing things like managing society, or human-level AI android/gynoids acting as receptionists. I’m not sure if either will happen, and I strongly dislike when people mandate what progress will look like with confidence. And yet, there is no representation of AI in the books. In Snow Crash, there’s one research assistant, but that’s presented as a pretty dumb oddity.

– In The Diamond Age, I’m surprised that nanomachine sickness in humans isn’t a perpetual problem. Particularly because it seems each phyle has its own defense grid and thus humans moving from phyle to phyle should be having serious immune system flare-ups each time.

– How is the Earth’s climate doing in Diamond Age? Is it really messed up? Was global warming less of a concern when the books were written, and thus Neal Stephenson didn’t feel this needed to be addressed? The only widespread inter-phyle agreement that we know about is the Common Economic Protocol, which seems to allow for low-effort economic relations between phyles and individuals. However, there don’t seem to be any government regulations on top of that. So I assume the environment is real bad.

– Is the only way to experience the Metaverse to put on a VR headset? Surely power-users (such as Hiro, but maybe not YT) would prefer even higher-throughput access, and would use some sort of overview interface when trying to get things done instead of fully immersing themselves in a 3D skeuomorphic representation. Why isn’t there something more like The Matrix’s ASCII-encoded overview of a 3D scene? In Diamond Age, I assume the visual component of racting happens via some light field method that beams images directly into your eyes, so if you need to get un-immersed in a hurry, you aren’t stuck hauling hardware off of your face. It seems like something a security-conscious hacker would want alternatives to.

– Horse rides are typically really bumpy and strenuous, right? So why are chevalines considered top-of-the-line in The Diamond Age? Oh wait – it’s probably because they have really great control-theoretic algorithms with a fast update loop to keep the second derivative of the rider’s motion smooth, right? That actually sounds amazing: Boston Dynamics’ Big Dog optimized to maintain smooth changes for its rider.

– Given how networked everything is, how is everyone not stalked/doxxed constantly? Is there actually some sort of mandated network security system, finally? Unlike the current crapness we have with our DDOSing Internet of Things crapware? How is this mandate being enforced, given that Stephenson’s world is, again, some sort of anarchic independent city-state system? Perhaps the phyle system leads to a Babelian explosion slash Optimal Fragmentation of network architectures, and that makes it harder? Seeing as people seem to be able to contact each other all over the world easily, this does not seem to be the case.

– It seems absurd that a government could accomplish a large software project such as Snow Crash without having at least one programmer who would be susceptible to its effects. Large unwieldy organizations being full of incompetent people rings true to me, but in my experience with them, there are at least a couple of people at the core who are highly competent. How did they avoid the issue from the Monty Python killer-joke sketch, where the people building the weapon are the first to succumb to it?

– Smart paper is a little inspirational, but still seems like an excessive skeuomorphism. Smart paper is most often seen in the hands of neo-Victorians, and perhaps its usage is over-described thanks to the nature of science fiction [ref I wanted to make but couldn’t: article about an overly-detailed booking of a modern flight in the style of science fiction].

– Where’s the sharing economy-style thing that takes advantage of the ubiquitous poor (especially the thetes)? Hiro’s dashboard pizza delivery interface is rideshare-like, but is there really no value to be found among the thetes? Is AI and matter compilation so good that they are not needed? Possibly.

– In The Diamond Age, the fact that one and only one person in the world (Hackworth) can possibly develop The Seed (as The Alchemist) is absurd. That’s not how Research & Development works at all.

STUFF THAT TURNED OUT TO BE TRUE IN REAL LIFE:
– Facial expression matters quite a bit (noted in the House of the Black Sun). I find most social VR experiences unsettling.

– Gargoyles are unsettling in real life, and represent a curiosity similar to an experimental fashion designer; interesting performance art, but not the path to mass adoption.

– Late-stage capitalism, particularly in Diamond Age. Instead of the Feed being a free, socialized service, it’s something that must be paid for. I know that dreaming about Basic Income is particularly in vogue in 2016, so I don’t want to get too optimistic about it, but I’m surprised there isn’t at least one phyle we hear of that runs some sort of digitally-enforced basic-income-y system.

– Despite a proliferation of tech, education in the real world is still hard and must be done carefully – hence the value of the Young Lady’s Illustrated Primer. Technology is not magically democratizing, and I’m sick of hearing people talk about it like it is.

– Immersive theatre is increasingly popular and relevant, and will only become more popular. Obviously, I’m biased, because I’m staking the next decade on predicting that Rakting will become a real thing.

– Customer service dominates in Western Culture for services. Hiro’s deathly fear of being late on delivery has the same level of stakes as dropping below a 4-star average if you’re a rideshare driver.

– Google Earth (directly inspired by Snow Crash)

– In Snow Crash, computer security, it seems, is universally not done well enough; the people in charge rarely care, and people who are actually good at it can get paid well. Security issues are oddly absent in Diamond Age, except for the notion that de-anonymizing ractors is “very hard”. However, as I said, I have trouble believing that, and the book doesn’t try very hard to convince me.

– Opt-in phyles with private memberships & rights & responsibilities make a lot of sense to me, given that I find myself engaging with these sorts of orgs more often as I “grow up”, and orgs that care less about whether I have a physical address, and whose services aren’t tied to a geographical location, are more likely to get my money, more consistently. Could GitHub or Netflix expand into a phyle? Unclear. Could a private gym, like Equinox, which despite its douchey vibe I considered joining because its location and service were so convenient? They probably could. Except their “initiation” fee is 1.5x their monthly fee, and I’m not sure if I’ll be in SF a month from now, so it’s just inconvenient for me. What if there were a sort of legal allowance for a not-for-profit sort of company that had a membership fee, but also a notion of rights and responsibilities for its citizens? Interesting. Could I join Google as a non-employee, just because I wanted to get their benefits and live on their property? Would that make me a Google citizen?

LOOSE ENDS AND QUESTIONS:
People often complain about the abrupt endings of Neal Stephenson’s books. This complaint is absurd; I find he always ends them just when they’re about to turn into a wrap-up of boring loose ends. That being said, here are some loose ends in terms of worldbuilding.

– The matter compilers apparently give stuff away for free (at least the thetes aren’t paying for them), but one of the reasons Mr. X sought The Seed & encouraged the Fists to destroy Feed lines is that he felt foreign powers were using their control of matter compilers as leverage over China. In which ways were they, overt or otherwise? Who’s paying? Is it only high-end matter compiler items that are being charged for?

– In Snow Crash, we learn that there is extra-terrestrial life infected with the metavirus. This may be the case in most of the universe. By the time of Diamond Age, does the world know more about this? Is it then known that the type of consciousness exhibited by Earth humans is fairly unique in the universe (a la Peter Watts’ Blindsight)? Shouldn’t the humans be shrinking back in existential horror at the alien cosmos and ideally be hiding underground?

– What’s human activity outside Earth during Diamond Age? Based on the chaotic situation on the ground, it sounds like LEO would be in a state of Kessler Syndrome. Are nanomachines being used to terraform things? Maybe everywhere outside Earth is out of scope by definition because a lightsecond delay would ruin any Metaverse or Racting interactions. Perhaps a round-trip delay of more than a few seconds is the new “they’re in the savage colonies”.

– I’d love to think more about how the post-scarcity depicted in Diamond Age compares to Banks’ The Culture. The Culture’s living environments are definitely less dirty with semiotic junk and literal nanomachines than those in the Diamond Age. Somehow it seems optimistic that the future will be much cleaner than the present. But maybe we’ll get to that cusp some day.

POST-US-ELECTION UPDATE:
Okay, the phyles and tribes thing totally makes sense. Centralized powers can’t be trusted. I really wanted a one-world government with free movement and cooperation across all cultures, but I don’t know if it will arise from our current governments coalescing. It may instead arise from some other system. For now, I’m vacillating about whether to stay in the US much longer – I was thinking of getting a generalized US visa and actually scheduled a talk with a lawyer about it later this week. It would be silly to cancel that meeting on a whim such as this, but let’s say that I’m looking forward to it with less enthusiasm than I was 24 hours ago.

On the other hand, in the future to come, which in some ways may be about being part of the right clubs, being in more clubs is certainly better (?) because you can at least take a stand in how those clubs are shaped. So, I don’t know.

As my partner in Rakting pointed out last night over a lot of whiskey, one of the larger flourishings of creativity, particularly in cabaret, happened during the politically turbulent inter-war period in the Weimar Republic. When things go poorly, people hunger for novel entertainment, so I suppose I should be thankful that I’m trying to make some.
As I walked around post-election San Francisco this morning, I found everyone I saw much friendlier and more prone to starting conversations with strangers. It’s a bit of shell-shock and reaching out for cognitive hugs, but I like this effect. I think the lesson is to take care of each other; otherwise, the people who feel they aren’t being taken care of will prop up someone who promises to take care of them. Thinking or rhetorizing about any group of people as if they’re an externality is bad.


Game Mechanic Compression Ratio

After attending a conference on roguelikes over the weekend, I was talking with my friend Randy Lubin about how players move from learning rules to playing a game. We discovered/invented this really cool concept:

Game Mechanics Compression Ratio: the ratio between the cognitive size of a game’s initial instructions, once understood, and the complexity they create during gameplay.

This is a computer-science-oriented analogy to data compression. If we think of a zip file as the “compressed” version of the game, then that’s the initial instructions. The full complexity of the game is the “uncompressed” version, what happens after you open the zip. The compression ratio is the size factor between the compressed and uncompressed versions. Compression ratios can vary a lot between different types of data.

When we were talking, I used the example of Quixote Games’ The Racket as a game that has a surprisingly large compression ratio. From a very basic set of instructions & items you start with, you end up with a game that is surprisingly complicated, especially in the social relationships it creates. And you can get going immediately.

Usually Go is used as an example of a surprisingly complex game, given its simple, “elegant” rules. However, Go has a multi-stage decompression:
0. The fundamental rules
1. The implications of those rules
2. Playing a game.
You can’t really get to playing Go until you understand the implications of the fundamental rules, for example, the notion of “eyes”. Understanding these implications properly can take a long time and a few mock playthroughs, which makes its compression ratio lower. Again, we’re considering “compression ratio” to be a cognitive concept, rather than the amount of information that the literal description of the Go rules would require.

Nethack and other roguelikes are examples of a bad compression ratio. There’s so much stuff you have to learn, and so many mechanics included in the base game, that while you don’t need to understand all of them to start playing, they’re still there, and you’ll have to come across them eventually to really master the game.

Posted in commentary, games

What I’ve been up to

It’s been a while! Lots of excitement is upcoming and I’m happy to finally share.

At the end of April 2016, after a year and a half, I left Occipital and returned to Toronto to focus on personal projects. I had contemplated this move for a while, and the timing just happened to work out. I had worked on some really amazing things at Occipital, including building out access to deeper features of the Structure SDK, improving its Unity integration, and laying the early parts of the upcoming Bridge Engine for Mixed Reality.

Transitioning from a life in academia to professional software development is a big change. Almost all the software I’d written by the time I started at Occipital in late 2014 was for only me, and for a demo that I’d be present at, in person, in case something went wrong. At Occipital, I had to professionalize massively: not just write code that exposed new features and interaction models, but write code that worked robustly across several devices while staying maintainable. I’m saying the above as both a warning for other academics who join industry, and as a call to adventure! More academics should spend time in industry! I have since looked back at code I wrote while still in academia and it’s often embarrassing; the worst parts are over-rushed and, paradoxically, also over-designed. I am now a much more well-rounded programmer.

As time went on, I found myself getting more and more involved in extracurricular projects. These took up more and more of my resources – temporal, mental, emotional, etc. – and I eventually realized that they all fit into a single theme. While I had previously seen myself as an interaction designer for novel interfaces, what I really, truly, deeply cared about was building novel participatory digital theatre-like experiences. With the magic of hindsight, one can see how the latter is a more fully actualized version of the former. At Occipital, I was working on a very powerful platform for other people to build content on. I realized it made more sense for me to be building the content myself, in ad-hoc collaborations suited to a given project, rather than as part of a persistent company. At least for the next little while.

So, I quit Occipital, exercised all my options, and then – since I was on a TN Visa and Toronto is much cheaper than SF – moved back to Toronto. I established a sole proprietorship to do contract work, and made sure the keeping-me-afloat contract work never exceeded 50% of my time.

PLAYLINES
I met Rob Morgan at GDC 2015, where he styled himself as a video game writer for Virtual Reality. We chatted back and forth over the following year. When he saw The Painting, the project Josh Marx and I made, he ecstatically recruited me to do something similar in the UK. And thus arose Coming Out, a locative audio narrative experience about the Future of Love, wonderfully digital and queer. This project was sponsored by the UK arts organization NESTA, and I got to fly to the UK twice over the summer: first to run a preview version at the Last Word festival, and then to do an in-situ installation at the site where the final project will run during FutureFest, September 17-18. Rob and I iterated extensively on the UX and the wording of the script; one intentionally unusual aspect of this project is that the user is not required to look at the screen at all – it’s meant to be freeing from our usual daily cellphone-focused experiences. An iBeacon cannot detect where someone is looking, so Rob has to write the words, and I have to write robust software, such that we can subtly guide people around as if they’re listening to someone monologuing to them over a phone. We have excellent sound editing and voice actors, so I got to listen to lots of swoony British accents.

Playlines

Post-FutureFest, we’re looking to develop this format further, and are doing so under the label Playlines, a portmanteau of play and leylines. There’s at least one possible locative project in the pipeline for the coming year.

RAKTOR
I met Jasper de Tarr through Kinetech nearly a year and a half ago, and we kept talking about doing theatre in virtual reality. What would it look like? Would everyone be in VR headsets (no – why would you need to be in the same place then)? We could have done anything at all and it would count as VR theatre, since no-one else was doing it. We ran several small performances featuring a single audience member in a VR headset, with everyone else around them. It worked surprisingly well. We then did a larger-scale performance for DorkBot at the Grey Area, and this coming Tuesday, our 50-minute show debuts at the San Francisco Fringe Festival. I don’t want to spoil too much, as I want you to see it. I will say that there are a couple of VR headsets, and some of the audience spends time in them. This is our next iteration in meaningful VR Theatre interactions, and I’d love for you to see it and tell me what you think. Say you’re going on the Facebook event here!

Raktor's Eccentricon

Right now, Raktor thinks of itself as an art collective, not a company. At every VR meetup we go to, we have to re-explain this to the deluge of VR startup folk – no, we don’t have VC funding; no, we don’t have a 10x plan; we are simply very competent and experienced participatory improv theatre artists who can also build and run a futuristic digital show that isn’t a half-assed “X for Y” like most of the other stuff you see. Raktor takes its name from a short passage in Neal Stephenson’s Diamond Age describing audience interaction with traditional theatre a century hence.

We have a Mixed Reality Live VR show in the prototype phase. This has taken a back seat while we’ve focused on polishing the Fringe show, but look for it again in the coming months. We’re currently hunting for a dedicated venue in the Bay Area where we can set up a Vive and a green screen, and host several audience members in person. Our intention for this show is to have half of its audience members in person, and the other half coming in over Twitch or another streaming service (highly suggestive hint – contact me if you have a streaming service that wants to work with us). We care a lot about meaningful interactions between the performers and the live and remote audience. There’s no point in live-streaming our show out to the internets unless we can figure out a way for those folks to meaningfully interact. At this stage, we’d rather have 10 meaningfully-interacting audience members than get really popular and have 10,000.

Most Mixed Reality VR videos use a colour camera with a known position relative to the VR rig and a green screen behind. While this works okay, it has two frustrating problems:
(1) props in VR that the performer might manipulate have to be explicitly declared as foreground, and the system chooses whether to render them behind or in front of the performer based on an arbitrary threshold, such as the position of the performer’s head (see the sketch after this list). This is broken for multiple performers, and has all sorts of other awkward cases if you’re doing any sort of performative prop manipulation.
(2) the system does not know the pose of the performer, merely which pixels the performer occupies.
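
To make problem (1) concrete, here’s a hypothetical Unity C# sketch of that kind of threshold logic; the layer names, the single prop, and the head-depth test are illustrative assumptions rather than the internals of any particular compositor.

```csharp
using UnityEngine;

// Hypothetical sketch of the "arbitrary threshold" foreground decision in a typical
// green-screen mixed reality compositor: a prop is drawn in the foreground layer only
// if it is closer to the camera than the performer's head; otherwise it goes behind
// the camera footage.
public class ForegroundThresholdSketch : MonoBehaviour
{
    public Transform mixedRealityCamera;   // tracked pose of the physical camera
    public Transform performerHead;        // usually the HMD position
    public Renderer prop;                  // a virtual prop the performer might pick up

    void LateUpdate()
    {
        float headDepth = Vector3.Distance(mixedRealityCamera.position, performerHead.position);
        float propDepth = Vector3.Distance(mixedRealityCamera.position, prop.transform.position);

        // "MRForeground" / "MRBackground" are assumed layers rendered by separate cameras
        // that composite in front of / behind the live video feed.
        prop.gameObject.layer = propDepth < headDepth
            ? LayerMask.NameToLayer("MRForeground")
            : LayerMask.NameToLayer("MRBackground");
    }
}
```

A single depth plane like this is exactly what falls apart with two performers or a hand-held prop.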

The solution I prototyped this summer is to use a Kinect 2 to both determine the performer foreground and track their skeleton. I’ve seen other people use this technique as an alternative to a green screen, but I’m pretty sure I’m the first to use it to replace the live person with a rigged virtual avatar. Check it out here, where I have a magic hat I can put on to change me from a performer into a virtual character:
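
For the technically curious, the core of the skeleton-tracking half looks roughly like the sketch below, assuming the Kinect for Windows SDK v2 Unity plugin (the Windows.Kinect namespace); the field names and the single-joint avatar mapping are simplified placeholders, not my actual show code.

```csharp
using UnityEngine;
using Windows.Kinect;  // Kinect for Windows SDK v2 Unity plugin (assumed available)

// Rough sketch: read Kinect 2 body frames and drive a rigged avatar's joints from the
// tracked skeleton, so the performer can be swapped for a virtual character instead of
// being chroma-keyed in as video.
public class KinectAvatarSketch : MonoBehaviour
{
    public Transform avatarHead;           // one bone of the rigged avatar, for brevity
    private KinectSensor sensor;
    private BodyFrameReader reader;
    private Body[] bodies;

    void Start()
    {
        sensor = KinectSensor.GetDefault();
        if (sensor == null) return;
        reader = sensor.BodyFrameSource.OpenReader();
        bodies = new Body[sensor.BodyFrameSource.BodyCount];
        if (!sensor.IsOpen) sensor.Open();
    }

    void Update()
    {
        if (reader == null) return;
        using (var frame = reader.AcquireLatestFrame())
        {
            if (frame == null) return;
            frame.GetAndRefreshBodyData(bodies);
        }

        foreach (var body in bodies)
        {
            if (body == null || !body.IsTracked) continue;
            // Map the Kinect head joint into Unity space; a real setup also needs a
            // calibration transform between the Kinect and the Vive tracking space.
            var joint = body.Joints[JointType.Head];
            avatarHead.localPosition = new Vector3(joint.Position.X, joint.Position.Y, joint.Position.Z);
            break; // drive the avatar from the first tracked body only
        }
    }

    void OnDestroy()
    {
        if (reader != null) reader.Dispose();
        if (sensor != null && sensor.IsOpen) sensor.Close();
    }
}
```

The foreground mask, meanwhile, can come from the sensor’s body-index data rather than chroma keying, which is what makes it possible to swap the whole performer out for the avatar.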

Current weak points of this approach:
(1) The Kinect 2’s colour resolution is too low at the scale you’d want for a multi-person performance.
(2) A single Kinect captures only one side of the performer and they will occlude performers behind them.
I’d love an approach that solves these problems, and if I can’t find one (please contact me) I’ll put one together myself if I have to, dammit.

The above Kinect 2/Vive Mixed Reality setup was put together in Toronto’s fantastic art incubator/performance space Electric Perfume. They had all the hardware, and also pure white walls. As I have a background in theatre and installation art, I like to think of them as a white-box theatre space.

For Raktor, one of the other show-running tools we wanted was a live set designer that someone could use from off-stage. I pictured this running on a tablet, giving an interactive top-down view of the small stage area. At around the same time, Henry Faber and TIFF approached me to do an installation for TIFF’s entirely sold-out POP series. TIFF is taking a pretty cool approach here. As someone who’s effectively been embedded in VR development for 2 years now, it’s good to be reminded that almost everyone else knows almost nothing about VR. In the usual installation art approach, you can visit the install space at any time over multiple weeks; it’s typically unattended and broken. Instead, TIFF ran 3 time-ticketed intensive weekends showing off new VR platforms, with a staff attendant at each station trained in running it. This ensured that each station kept working, and also that people who had never worn a headset before and didn’t know what was going on could be hand-held through the experience.
For TIFF POP 3, I adapted the live set designer idea into Inverse Dollhouse, a 2-player asymmetric VR experience where one person wears a headset and is passive, while the other views them top-down on a tablet and is active. The person wearing the headset is the “doll” in their dollhouse, and the top-down person plays with them, arranging furniture around them. I retroactively decided that this was inspired by that other piece of Canadian content, The Friendly Giant.
Raktor also uses the technology from Inverse Dollhouse in our upcoming Fringe Show.

IMPROV REMIX
I got to present the paper that came from my PhD thesis at DIS 2016 in Brisbane, Australia. This 10-page paper is a short summary of the whole work, but as a recovering academic (I’ll write more about that at some point), I must urge you to read my 180-page thesis if you really want to have a hope of understanding what the hell I was going on about.

The technical director for Improv Remix’s live performance, Montgomery Martin, is in the midst of a really cool thesis himself where, to put words in his mouth, he’s looking at software production practices and how they empower yet constrain modern theatre. He also works on Isadora, which is sweet. And he’s competent and nerdy, too.

ANIMAL ODYSSEY
I haven’t talked much about this publicly, but for nearly three years now I’ve been working on a board game with former housemates Cian Cruise and Jan Streekstra. I can’t quite remember how it started, but now we describe it as a cooperative multiplayer animal adventure game, inspired by the movies of our youth – Milo and Otis, Homeward Bound, The Land Before Time and All Dogs Go To Heaven. One of our clever innovations deals with the problem cooperative board games have where, if everyone knows everything (as in Pandemic), one person tends to take over for the group and exclude everyone else. The game is hijinks- and surprise-heavy, like a rogue-lite we hope any group plays a couple of times in a session. We playtested the latest, printed version at ProtoTO and even got to demo it to some people from Hasbro. Conceptually, the game is already hella tight, and we’re polishing the heck out of it at a steady pace.

Animal Odyssey

DEVELOPMENT
At Occipital, my software development was primarily in C/C++/Objective-C, with occasional work in Unity C#. When I switched to personal prototyping projects over the summer, most of them were in Unity. Unity is great for throwing things together quickly that will Just Work (TM), and its multi-platform support manages to bluntly function surprisingly well, but it definitely feels like I’m in training-wheels mode. I can feel my deep programming skills atrophying slightly, to the benefit of getting content out more quickly, yet with less flexibility. I would love to hear other programmers’ experiences switching from C/C++ to Unity C# as their primary language. When you’re doing low-level rendering work in Unity, as I was doing at Occipital and for other contract work this summer, its closed-source nature and big architectural changes between versions can be crazy frustrating. I’ve half-seriously considered doing most of my development work in a native plugin, while treating Unity as a high-level rendering and logging wrapper.
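
To make that native-plugin idea concrete, here’s a rough sketch of what the split might look like; NativeCore and nc_process_frame are hypothetical names, and the native side would just be plain extern "C" exports compiled into the plugin.

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

// Sketch of the split: keep the heavy lifting in a native C/C++ plugin and let Unity
// act as a thin rendering/logging wrapper. Names here are placeholders.
public class NativeCoreWrapper : MonoBehaviour
{
#if UNITY_IPHONE
    const string PluginName = "__Internal";   // iOS links plugins statically
#else
    const string PluginName = "NativeCore";   // NativeCore.dll / libNativeCore.dylib
#endif

    // Corresponds to a hypothetical native export:
    //   extern "C" int nc_process_frame(float deltaTime);
    [DllImport(PluginName)]
    private static extern int nc_process_frame(float deltaTime);

    void Update()
    {
        // Unity stays a thin shell: pass data in, log what comes back.
        int status = nc_process_frame(Time.deltaTime);
        if (status != 0)
            Debug.LogWarning("NativeCore returned status " + status);
    }
}
```

The appeal of that split is that the C/C++ core stays portable and debuggable with real native tooling, while Unity’s version churn only touches the thin wrapper.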

I moved back to SF this time around to do the Raktor Fringe show, and I was very fortunate to find a contract day job for at least the next 3 months that is flexible enough to let me run errands during the day for side projects. I’ll say more on it later, but it’s managing to meld my love of recombinant content, artificial life, and experiences playing Dwarf Fortress…

Posted in creations, me-news