GPT-2 and Culture Ship Names

Brief Personal Update: I accepted a new job and moved to Manhattan – more on that later. Critically, I signed a lease on an apartment after 3 years of cybernomadism. I intend to live here for a while, but I want it to feel like a spaceship on the move. Thus, it needs a proper spaceship name patterned after the names in The Culture Series, a post-scarcity intragalactic space opera where each ship has its own advanced AI personality.

GPT-2 is the newest hot shit in text generation, courtesy of OpenAI. I seeded it with the following ingredients (a rough sketch of the fine-tuning is below):
* the entire list of Culture Ship names appearing in literature
* relevant poetry from a witchy collaborator
* my worldbuilding docs
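For the curious, here's roughly what that seeding looked like – a minimal sketch assuming the gpt-2-simple package and a hypothetical culture_ship_corpus.txt holding the ingredients above, one entry per line:

# Minimal sketch: fine-tune the small GPT-2 model on a hypothetical corpus file
# (culture_ship_corpus.txt = ship names + poetry + worldbuilding docs, one entry per line).
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")  # fetch the small pretrained model

sess = gpt2.start_tf_sess()
gpt2.finetune(sess, dataset="culture_ship_corpus.txt", model_name="124M", steps=500)

# Sample a batch of candidate ship names to curate by hand.
gpt2.generate(sess, length=30, temperature=0.9, nsamples=20)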

I'll hold back the final name I chose for the ship, for now. Here are the other names GPT-2 and I generated that I didn't end up using. Please enjoy.

Fittingly, This One Is Still Working
Outlandish Situation (But Not Too Outlandish)
Another Nonsense Factory Item
Painted By A Genius
To Be Continued
This One Looks A Little More Like An Artifact
From The Other Side Of The Universe
About A Very Mysterious Entity
A New Lover From Another Side
Of the Universe This One Will Help Me
The Wizard’s Folly (A Very Serious Thing)
This One Looks Really Fancy
Maybe It Was Something From Another World
In The Woods Where I’m From I Found An Empty Coffin
(Note To Self: Don’t Tell The Kids I Did This.)
End Is Nigh
The Lying Kind Of Love
For You I’d Not Want To Let Go
For In An Attempt So Grand That It Exceeds Comprehension
Perfect Match for My Mood
Determination Of The Dormant Soul
(When You Are Not Looking That Is.)
Aspects About It I Found Out Later
Of Course I Also Liked To Eat
One of the Most Fun Kinds of Travel
You Could Say I Enjoy My Own Company
Outlandishness Aside, What Could I Not Like?
I Love It, It Just Does What It Says
Painted By Its Colors and Form
A Very Serious Case Of “Positivism”
Folly is Fortunate And Irreversible
Aspects Of The Folly Problem Are: The Idea That There Exists an Ultimate Truth
Uncontrollable, but Very Focused On The Present Moment

Some of my favourite space denizens:
(When You Are Not Looking That Is.) sounds like a spy ship,
Determination Of The Dormant Soul is a storage ship in deep sleep,
Painted By Its Colors and Form is an art party ship,
Aspects About It I Found Out Later is an investigator.

Posted in creations, me-news, technical, wordplay

Upcoming Topics in Computing (that I’m interested in)

I have spent a lot of time in academia, and the last four years at commercial startups, including running a few myself. This gives me a special vantage point on upcoming computing areas – I want to jump over what’s interesting now to what will be interesting in 5 years. My current job hunt has been a really great excuse to do an industry overview and talk to everyone; I honestly wish I could do a two-month sabbatical each year and go spend a day talking to every single interesting company or research organization.

In the next decade, the most interesting computing trends to me are:
– Synchronicity and Liveness
– Cyborgized Cognition
– Telepresence Embodiment

I want to work on these problems, whether in research or commercially. If you work in these areas and want to just chat, or hire me, send me an email or a tweet.

I’m going to talk about the next decade shortly, but first here are a few trends I participated in that changed significantly over the last decade:
Tabletop Computing used to be a big deal, with the Microsoft Surface, DiamondTouch and interactive wall displays. My Master's degree was all about tabletop computing. However, we don't really have anything exciting happening in this area now – nobody does day-to-day computing on or around horizontal surfaces. The only interesting game in town is maybe TiltFive.
Spatial Computing as a term has managed to get out of tech and academic circles and into general discussion. This is great – it’s a useful summary of all the interesting work happening in Computer Vision, AR, VR and Volumetric Capture. Previous attempts to summarize the field fell into useless 360 vs Stereo or AR vs VR vs XR vs “Cinematic Reality” hair-splitting.
Gestural Interaction was how I thought (and many people still think) we'd interact with computers in the future. The idea of implicit input, or of people learning American Sign Language to talk to computers, is so enticing that it's drawn too many people in. But gestural interaction has failed to get real traction. There's just too much noise in real-life gestural environments, most gestures actively used in hand-tracking UIs are indexical instead of symbolic, and keyboards and voice interaction remain the most important inputs.
Deep Learning unfortunately means too many things right now. Anything that involves some statistical analysis is now called Deep Learning. Marketers that used to use SPSS to analyze Likert scores instead use differently-branded software packages to do deep learning, with an unclear change in actionability. Doing an analysis of an image based on edge detection could be construed as using a small CNN, so it is now called Deep Learning, too. I’m not trying to gatekeep, but I do wish that in general people were more specific, and didn’t use an obfuscated deep learning technique when a much clearer tool like linear regression works totally fine and is easy to understand.

Let me tell you what I’m excited about for the next decade of computing

Synchronicity and Liveness
We are so used to the convenience of asynchronous communication that it's now common to text or schedule in advance of calling, and calling someone without warning is treated either as rude (requiring an apology) or as a potential emergency. In timezone-distributed tech circles, almost every meeting is a Google Calendar invite. Messages are sent slightly before or after the scheduled time, either apologizing for being slightly late or seeking confirmation that the meeting is still on. There's just too much overhead.

To have quality synchronous time, people still fly in to meetings. This has been a bit bizarre during my current job hunt as companies have spent probably mid-five-figures now flying and hotelling me various places, only to have many of my “onsite” meetings happen in a single-person conference room with a remote employee over Zoom.

Slack usage is pervasive at tech companies of all sizes, since instant messaging is better than emails for quick, time sensitive communication. However, if an employee gets overwhelmed with notifications, they tend to turn them off. Incentives for communicating honestly about your availability are misaligned, and in-person cues about availability aren’t properly translated.

In real-person meetings at someone’s desk, you can get a sense of how busy they are, or if they’re talking to someone else that you can interrupt if you need to. Currently our interaction states are binary: I’m in a call, or I’m not. I’m “online” in Slack, or I’m “offline”, and I may have set the “offline” status artificially because I need to concentrate on something and can’t be bugged at the moment.

I hope new research or businesses explore making synchronicity at a distance more friendly. Calendly has been incredibly useful for me to schedule time to speak with recruiters, as it reduces all the back-and-forth scheduling iterations to a single click. I wish that when I call someone and they're busy, we could set up a system to automatically initiate the call again when their current call is done, or when we're both free again. Dialup – sort of a synchronous, audio-based social network – is doing very interesting things with calls that aren't initiated by either party. There's interesting research or a product to be done in communicating and requesting availability – this is handled well in person with body language, open/closed doors, and quick, ephemeral interruptions that don't disrupt ongoing meetings. Computing doesn't handle this well at all. I almost want an always-on video conference setup that people can request to join, and then a queue if the join isn't immediate.
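As a toy illustration of that "call me back when we're both free" wish – not any existing product's API; presence_of() and start_call() are hypothetical stand-ins – the core of it is just watching two availability states:

# Hypothetical sketch: automatically retry a call once both parties are free.
# presence_of() and start_call() stand in for whatever presence/calling APIs exist.
import time

def presence_of(person: str) -> str:
    """Return 'free', 'busy', or 'offline' for a person (stub)."""
    ...

def start_call(a: str, b: str) -> None:
    """Initiate a call between two people (stub)."""
    ...

def call_when_both_free(a: str, b: str, poll_seconds: int = 30) -> None:
    while True:
        if presence_of(a) == "free" and presence_of(b) == "free":
            start_call(a, b)
            return
        time.sleep(poll_seconds)  # a real system would subscribe to presence events instead of polling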

When I was running ticketing for The Aluminum Cat, one of the surprises was that there was no online ticketing service that handled time zones well. Turns out, most tickets purchased are for in-person events, so this use case isn't handled well at all. I'd love a company to build an equivalent of Doodle + eBay + Calendly for bidding on group activities, where a booking transaction occurs automatically once quorum (financial, headcount) is met – the rule itself is tiny, as the sketch below shows.
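Here's a hypothetical sketch of that trigger; the Pledge shape and charge() are invented for illustration:

# Hypothetical sketch: book a group activity automatically once both the headcount
# and the financial quorum are met. Pledge and charge() are invented for illustration.
from dataclasses import dataclass

@dataclass
class Pledge:
    person: str
    amount: float  # what this person has committed to pay

def charge(pledge: Pledge) -> None:
    """Actually collect the money (stub)."""
    ...

def quorum_met(pledges: list, min_people: int, min_total: float) -> bool:
    return len(pledges) >= min_people and sum(p.amount for p in pledges) >= min_total

def maybe_book(pledges: list, min_people: int, min_total: float) -> bool:
    if not quorum_met(pledges, min_people, min_total):
        return False
    for p in pledges:
        charge(p)  # nobody is charged until the booking is guaranteed
    return True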

Cyborgized Cognition
The average person still refers to artificial intelligences, algorithms and machine learning as "the other". These often feel a lot like "the other" because when we interact with them, they're run by large, impersonal companies whose interests aren't the same as ours. An AI may be branded as a decision engine, but if that branding comes from a company that also serves ads, it doesn't inspire much consumer trust that my behaviour isn't being influenced in ways outside my best interests.

Once we fix this trust problem (which might require fixing capitalism, tbh), we can start forming better, more useful, actionable relationships with AI – a cyborgization.

The AIs I actively engage with most on a regular basis are my Gmail spam/important filters, and my Facebook and Twitter feeds. Training my Gmail spam/important filters feels like gardening, in that I know it takes extra work to mark messages as important or not instead of deleting or archiving them, but I think it will make my inbox more manageable in the future. When I tell Facebook, Twitter or Instagram that an ad or a post isn't relevant for me, I don't have the same relationship, because I know these social networks prioritize engagement (ad views) over what I may want. "We'll show you less of these in the future" is the best possible phrase to describe the change, but it still feels passive-aggressive, since we know the feed algorithms aren't ultimately trying to serve you.

I have enough presence of mind to spend time gardening algorithms, but when I talk to other, less tech-savvy people, they don't understand (or believe) that algorithms can change. There is definitely interesting ethnographic research to be done on how the average user can be educated about interacting with algorithms – currently they're treated like omniscient oracles, or malicious agents able to read your mind. I'd also love to have a browser plugin that lets me know when a news feed is changing on me, sort of like sousveillance for A/B testing.

For cyborgized cognition, I think eventually our day-to-day lives will more resemble a continually negotiated tradeoff between tasks your AI handles and tasks you handle. Cyborg chess is a good model for this. There are startups that use the term "personal assistant" or "concierge", which I feel a bit grossed out by, since I grew up middle-class and those are things that only useless rich people use. I think there will eventually be lots of interesting research and design work to be done in the area of users and an AI negotiating implicit permission and agency, but none of that can effectively be used in the wild until the trust relationship is better handled.

Maybe Apple will figure it out first. Historically, engaging with an algorithm you don't own, backed by large compute resources, has meant giving up a significant chunk of your privacy; however, a recent Apple Machine Learning Journal publication demonstrates remote model training that's privacy-conscious.

Telepresence Embodiment
Between spatial computing and remote presence – via drones such as Double, wall displays such as Tonari, or live volumetric capture such as Mimesys (acquired by Magic Leap) – people are going to spend more time interacting with other people and things at a distance, with varying degrees of embodiment.

There are interesting problems to be solved around scheduling, which I already covered in the section on Synchronicity, but embodiment and expression are interesting areas as well.

Embodiment could be for the purposes of self-expression, as has been common in online games for a long time. Wearing a different avatar is like simply wearing a mask. To appear older or of a different gender in voice chat, people have often used audio filters. In social spatial computing environments with live motion capture, such as VRChat or Rec Room, it's foreseeable that users may want motion filters – to appear more masculine, or less clumsy, or mobile at all when they may be quadriplegic.

For human-negotiated attention in remote presence, eye contact is important. Pluto is currently exploring using depth sensor data to correct remote eye contact during video calls so that face-to-face eye contact is preserved.

Update: Attention Correction is included in the upcoming iOS 13.

This works straightforwardly in a one-on-one video conference setup, if you want to preserve literal eye contact. But what if you're in a multi-way video chat, like you've just formed Voltron:

From Perry Bible Fellowship

In this case, it becomes clear that just faithfully reproducing eye contact is overly literal. If I’m person A, I may be able to see when person B is looking at me, but when person B looks at person C, I may not be able to tell. It may be better to adjust person B’s video, as it appears to person A, so that person B’s eyes are oriented towards person C.
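To make that concrete, here's a rough sketch of per-viewer gaze retargeting; the layouts and the final rendering step are invented for illustration:

# Hypothetical sketch: per-viewer gaze retargeting in a multi-way call.
# Every viewer lays out participant tiles differently, so the angle at which we
# redraw person B's eyes depends on whose screen we're rendering for.
import math

# Where each participant's tile sits on *this viewer's* screen (normalized 0..1 coordinates).
layout_for_A = {"B": (0.25, 0.5), "C": (0.75, 0.5)}

def gaze_angle_on_screen(layout, looker, target):
    """Angle (radians) that the looker's eyes should point in this layout so they
    appear to look at the target's tile."""
    lx, ly = layout[looker]
    tx, ty = layout[target]
    return math.atan2(ty - ly, tx - lx)

# B is actually looking at C, so on A's screen we redraw B's eyes pointing at C's tile.
angle = gaze_angle_on_screen(layout_for_A, looker="B", target="C")
print(f"render B's gaze at {math.degrees(angle):.0f} degrees in A's view")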

The interesting conclusion here is that with remote presence, there are many reasons to transmit higher-fidelity expressions of embodiment, but not to reproduce them literally at the remote end.

Posted in research

The Aluminum Cat Documentary Released Now!

When you make an interactive show, it’s hard for the audience to tell just *how* interactive it is. This is part of the rare magic of any participatory theatre; even Keith Johnstone said don’t bother trying to convince your audience that a show is improvised, because they’ll never believe you. So we at Escape Character decided to show you.

For The Aluminum Cat's run, we had 35 shows, with a total of 131 audience members. We trained up 3 actors to run the script (original cast member Stephanie Malek, who took the first pass on all the characters, then later me and Ted Charette). Our script (by Natalie Zina Walschots) had a few possible endings, but these are more like directions in which you can leave a forest, not set paths like choosing between a few roads. We watched all the shows, made *spreadsheets* of player choices, and assembled it all into this 23-minute documentary. Enjoy!

If you played the show and were surprised by what other people did, let us know! Subscribe to Escape Character's mailing list.

Posted in art, creations, games, improv, theatre

Taekwondo does not support Double-Jumping

I started Taekwondo in November 2018. I’ve wanted to try a martial art for years, but it’s been one of those hobbies in the backlog that required a chance encounter to nucleate.

I used to do more cardio- and cognition-intensive physical activities: gymnastics, figure skating and parkour (amateurishly), but the last few years it's only been biking, weightlifting and rock climbing.

Being a programmer, my body goes through long daily phases of neglect where it’s kept catatonic while my mind is in Infinite Fun Space. This takes a toll over time, and weightlifting has been great to work my body through a range of motion. Whenever I let my weightlifting regime slack, I can tell that it has been keeping my posture in check.

Unfortunately, I get bored easily with exercise. With both rock climbing and weightlifting I…somehow drift away since my mind feels unengaged, and I stop pushing myself as hard as I could. I listen to podcasts, but they put me in a contemplative state, not an engaged state.

I love biking at high speeds through urban environments. If I could somehow program while also doing this, I think this would be peak activation of all my pleasure centres at once (Sidenote: I should prototype this, and maybe I can make a more visceral version of Mecha Trigger).
Sadly, biking is not really possible through the winter, so I’ve been antsy for proper exercise for months.

But back to our main topic, which is Taekwondo. My first major observation is that punching and kicking consistently is surprisingly difficult. It reminds me of the time I was learning archery. It was only two or three months into Taekwondo that I got to properly spar with someone, where we're both actively trying to kick and punch each other, "fire at will" style.

To my surprise, jumping is way less effective than video games and dreams have taught me to expect my entire life.

You see, I’m energetic, and flighty, so jumping excitedly out of the way when I’m in danger is a natural response. A lifetime of playing videogames where this is rewarded and encouraged has not helped, but rather has reinforced this instinct. In most of my dreams, I fly. My dream flight takes two different forms: a) the muddy, drifting hovering a metre above the ground b) soaring on air currents high in the sky. Over my life, I’ve become highly familiar with how to fly in these two configurations. I’ve even had Inception-style recursive dreams, where I’m flying, then wake up and discover I can still fly, and celebrate that all this practice has paid off and I can fly in real life. And then I wake up one more time, back in the real world, and find myself staring at the ceiling, asking why I would do this to myself.

To my surprise, when you jump out of the way of a punch or kick in real life, you, midair, are bound by Newton's first law, and have negligible control over your momentum. There's no double-jumping, mid-air steering, or glide mechanics at all.

I've discovered this because every time I jump in the air, I get kicked or punched in the fucking ribs, and land, on my side, on the ground.

I have had heated conversations with my instructor where he’s informed me that there is no way to “train enough to charge up my chi so I can fly Dragonball Z style”. Harrumph.

So my current dodge strategy has had to change to sliding abruptly across the ground. This is way less cool, but has meant I don’t get knocked out of the air as much, which is nice. When you’re in the middle of a floaty jump, you’ve effectively removed yourself from combat, so now that I’m spending more time in combat, I can opportunistically make crazier moves. My current favourite is disruptive axe kicks that force my opponent to take a step back and blink a couple times.

My feet have had trouble dealing with all the sliding, and so I’ve had to get special foot moisturizer for them. My strength and reflexes are apparently fine, so currently the three major things holding me back in Taekwondo are:
– flexibility
– feet aren’t moist enough
– my instinct to point my fingers dramatically

Posted in commentary, me-news

Player Character Bios in Participatory Media

Originally published in Escape Character’s Newsletter.

Question: What’s the best way to hack someone who’s never LARP’d[1] before to get into character?

Our goal here is to have the player buy into the stakes of the show before they cross the threshold [2] into the space of the show. In Escape Character’s projects where the players talk directly to the actor, our initial moments have been charming, but the player engagement with their characters hasn’t persisted.

Stuff we’ve tried:
– Give them a profession: “You’re an inspector, your job is to investigate a dead body”
– Give them a unique characteristic: “You’re a nautical prodigy”

A new approach we’re trying is “You’re going to be a spy, and need to come up with a cover story”. The player does this in collaboration with the performer, and the performer pulls an appropriate prop out of the invisible Prop Box. (Suggested by performer Anders Yates)

Answer: Using collaborative “Cover Stories” seems to be a little better, but we’re still iterating.

The video above is from Sparasso [3], our in-development telepresence immersive theatre toolset for XR environments.

[1] LARP = Live Action Roleplaying
[2] From the Hero’s Journey
[3] Dionysus’ rebirth via disembodiment

Subscribe to our newsletter for more Escape Character updates.

Posted in commentary, creations, games

Novel FTL Flavour Profiles

Ways faster-than-light travel could be fun, while also trying to solve the Fermi Paradox.

1. It’s actually hard to go slow.

I’m going to call this a Tachyon Drive approach. Once you spin up a tachyon drive, you actually go infinitely fast, and it takes precision and energy to slow the heck down. Most of the first ships are lost because they just go beyond the bounds of the visible universe. Even when you eventually want to make a jaunt over to Alpha Centauri, a tachyon drive jump lasting a few seconds, followed by a year-long slow boat inter-system, is considered normal and expected.

2. Better Space

In the hyperdrive model of FTL travel, the drive system temporarily puts the ship elsewhere – into a type of space different from our universe. So, we put all this effort into making this drive system, and the first time someone spins it up, it turns out that hyperspace is just…better in like every way. Most hyperspace in fiction is unlivable, harsh, or full of evil beings. But in this version, it's just better: stars and planetary systems are way less far apart, space has atmosphere so you can breathe in it (it's just kind of chilly), and there are frequent entropy reduction events. Every non-luddite of every civilization emigrates from our universe to that universe as quickly as possible.

3. Wormhole Dispersion

An Einstein-Rosen Bridge, as a singularity, allows space to be irrelevant, so by passing through it, you can exit through any other Einstein-Rosen Bridge in the universe. Well, turns out that passing through all the other ones is mandatory, so when you enter, you are split into infinitesimal pieces, and exit as radiation from the event horizon of every other black hole in the universe. Normally you'd think this form of travel would be useless, and you're right, but if you quantumly-entangle the entire ship to…itself…somehow…before entering, your vessel, and your body, can still technically be attached to their other parts. So, it's more of an ascendance to an ethereal plane than travel, to be honest. If you don't do the quantum entangling step before entering the black hole, you die, though.

Posted in wordplay

Telepresence Immersive Theatre with Mice instead of Voice

This past year at Escape Character has been quiet, but very busy. Me + several collaborators have been iterating on telepresence interactive theatre. We have written and debuted three scenarios and are in the middle of writing our fourth. We’ve put on in-person shows in San Francisco, Toronto and London. We just started remote invite-only shows in December 2018, and will be publicly releasing our first show in February 2019 (announcement to come)!

What we’re making is pretty new, but you can think of it as:
* A premium role-playing video game, where you play as a party and every NPC is played by a real actor.
* A Narrative Escape Room
* A Choose Your Own Adventure, with a live actor and extremely open-ended choices
* Immersive Theatre you can access from anywhere.
* Dungeons and Dragons lite, with less prep required for audience members
* Training Wheels for LARP

The biggest leap this past year is moving audience remote interaction from voice to mouse. I'll explain why, but first watch this excerpt from a recent playtest. In this video, I play two different NPCs. Every player can see each other's mouse position, and hover over conversation options, or click to move the group around on the world map.

From the very beginning of Escape Character, the goal has always been to use streaming and other telepresence technology to enable performers to put on interactive narrative shows for intimate-size audiences. Immersive Theatre is a great medium which will define much of the next stage of entertainment, but currently it is difficult to access: it requires expensive custom physical venues, and the audience for it tends to exist only in big entertainment cities (e.g. London, LA, NY, SF, Toronto).

For most of 2018, our setup was to have one performer play all the characters in a scenario, while 3-6 audience members were in the digital space as players. The players communicated with the performer by voice. Check the following video for excerpts from our scenario The Sea Shanty, by Tom McGee. The performer used VR equipment to play all the non-player characters, and all players used video game controllers. We did this in closed rooms with only the players, and at events where 40+ audience members were watching the players.

Why does voice not work?

  1. The Pressure of Acting. Many regular people are uncomfortable having to “act”. If you have a background in improv, or playing Dungeons & Dragons, it’s easy to forget how common this is. These people are still quite eager to participate, but often terrified of the (perceived) pressure of performing.
  2. Internet Lag. Think of any video call you've done. If you increase the number of people in the call to 4-8, then even with a relatively low ping like 50 ms, the overhead of people negotiating who speaks next without interrupting each other becomes painful.
  3. Moderation. You’ll always have people who are trolls, hecklers, or simply ignorantly impolite who don’t know how to share the space with others. Audio as a medium is single-channel; you can’t really have more than one person talking at once. We could build a muting system, but it’s way easier for the moment to avoid audio altogether.
  4. Environment. If the show requires you to speak, you can’t participate somewhere where it isn’t appropriate to, like an airport lounge.
  5. Anonymity. Part of the joy of engaging in immersive entertainment is the option to present as someone else. Theatre has for a long time known of the transformative power of mask, and having to use your real voice omits that option.

How audiences use mice to communicate

Systems using live actors should take advantage of live actors' ability to respond improvisationally to novel audience behaviour. If an audience communication system is just a poll, a yes/no, or a multiple choice, that's an impoverished simplification of live expression. One of the curious things about the depiction of ractors in Neal Stephenson's Diamond Age is that actors were mostly used as mere voice actors, and it was AI systems that actually wrote and managed the interactive narratives in the Young Lady's Illustrated Primer. This doesn't take advantage of the skills that improvisors have a-plenty! There's a massively underutilized skill set of performers able to manage live storytelling, and Escape Character exists to give these people a performance platform, and to give audiences remote access to immersive theatre.

Our current conversational UI design is just a static image, almost like a Ouija board. The actor responds to where and how the audience positions their mice, as a whole but also as individuals. If you've been a live performer, you know this is like reading the room – something you say may elicit a whole-audience guffaw, or a chuckle from just one person, or make the front row gasp. This subtle input is currently missing from remote audience engagement systems. We've seen really clever behaviours audiences figure out on their own, like gesturing between two different options to indicate they want to combine them. A rough sketch of the kind of aggregation the actor does in their head follows below.
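This is a toy illustration only – the option rectangles and player names are made up – but the "reading the room" part really is this simple: count where the cursors are hovering, and keep the individual positions too.

# Hypothetical sketch: aggregate audience cursor positions over conversation options.
from collections import Counter

# Each option is a rectangle on the shared board: (x0, y0, x1, y1) in screen coordinates.
options = {
    "flee":      (  0,   0, 200, 100),
    "negotiate": (220,   0, 420, 100),
}

def option_under(cursor):
    x, y = cursor
    for name, (x0, y0, x1, y1) in options.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None  # hovering between options is itself a signal to the actor

def read_the_room(cursors):
    """cursors maps player name -> (x, y); returns hover counts per option."""
    return Counter(option_under(pos) for pos in cursors.values())

print(read_the_room({"amy": (50, 40), "ben": (300, 60), "cal": (210, 50)}))
# Counter({'flee': 1, 'negotiate': 1, None: 1})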

From one of our audience members:
I always knew where my colleagues were positioned in the decision space, and I could easily express my own positioning by moving my cursor or placing it in a default position (e.g., over on the right). The movement between options and movement on the map was parsable to me as a kind of continuous decision making, and the fluidity really underpinned the aesthetics of the team experience for me.

Get notified about when Escape Character opens up public tickets! Email us at contact@escape-character.com.

If you want to read more about our prototyping process, check out the article we wrote after a grant to work with UK-based artists GibsonMartelli: RealityRemix – Prototyping VR Larping

Posted in commentary, creations, games, improv, me-news, theatre

Books Read 2018

The year’s best books: Live from New York and Hugh Cook’s Wizard War

________

Gardens of the Moon – Steven Erikson – Jan 1, 2018

Starfish – Peter Watts – Jan 3, 2018

The Glass Castle – Jeanette Walls – Jan 24, 2018

Surface Detail – Iain M. Banks – Feb 17, 2018

Steal the Stars – Mac Rogers – Feb 28, 2018

A Wizard of Earthsea – Ursula K. Le Guin – May 7, 2018 (reread)

Shadow Ops: Control Point – Myke Cole – May 12, 2018

The Practice Effect – David Brin – May 21, 2018

The Collected Stories of Vernor Vinge – June 6, 2018

Live From New York – James Andrew Miller And Tom Shales – June 9, 2018

The Freeze-Frame Revolution – Peter Watts – June 13, 2018

Alien War Games – Martyn Godfrey – June 23, 2018

The Dispossessed – Ursula K. Le Guin – July 4, 2018 (reread)

Against a Dark Background – Iain M. Banks – July 24, 2018

The Startup Playbook – Rajat Bhargava And Will Herman – July 29, 2018

We Have Always Died In The Castle – Elizabeth Bear – July 30, 2018

Gnomon – Nick Harkaway – Aug 28, 2018

Crisis in Zefra – Karl Schroeder – Aug 30, 2018

3 Essays on Virtual Reality: Overlords, Civilization, and Escape – Elliot Edge – Sept 9, 2018

28 Seconds – Michael Bryant – Sept 15, 2018

Revelation Space – Alastair Reynolds – Oct 1, 2018

The Forgotten Forest of Oz – Eric Shanower – Oct 13, 2018

Saturn’s Children – Charles Stross – Oct 29, 2018

Debt: The First 5,000 Years – David Graeber – Nov 7, 2018

Indian Horse – Richard Wagamese – Nov 9, 2018

The Jewels of Aptor – Samuel R. Delany – Nov 18, 2018

All You Need Is Kill – Hiroshi Sakurazaka – Nov 20, 2018

The Ballad of Beta-2 – Samuel R. Delany – Nov 25, 2018

Development and Deployment of Multiplayer Online Games: Volume I: GDD, Authoritative Servers, Communications – Sergey Ignatchenko – Dec 3, 2018

Chess with a Dragon – David Gerrold – Dec 9, 2018

Toronto 2033 – Spacing – Dec 25, 2018

Wizard War – Hugh Cook – Dec 27, 2018

Posted in Uncategorized

Breaking Up With Git LFS

After using Git LFS on and off for over two years, even going through one major version change, I've decided Git LFS is not for me at this stage. Here's how I took an existing repo using Git LFS, and removed it to return to a vanilla Git repo.

Briefly, Git LFS is a way for Git to manage large binary files such as large textures/audio/video, typically found in content development pipelines. Git itself isn’t really meant to manage large binary assets, and most git server implementations (like GitHub) reject files above a size limit. I’ve blogged about Git LFS before, almost a year ago, when I determined it was the least-bad of my options.

Why I decided I don't need Git LFS
– Onboarding engineers is harder
– Sometimes it breaks in surprising, unrecoverable ways that require re-downloading the repo. Usually something goes wrong with the smudge filter.
– Installation and managing versions is unreliable (see previous blog post)
– Accidentally running a normal git operation on a git-lfs repo without git-lfs installed can break the repo unrecoverably (locally). This can happen if you accidentally use the wrong shell. I have done this several times.
– I’m only using it for a handful of almost-never changing files: big textures for prototyping assets, audio and video. I can instead gitignore these and sync via Dropbox.
– I’m paying extra for it (one Github “data pack” @ $5/month)

How I broke up with Git LFS:

1) I copied my git-lfs repo folder locally, and then pushed it to a new repo on Github while experimenting.

2) I compressed files that could be compressed further.
Turns out I had a great deal of *.tga files in my Unity repo. Both TGA and PNG are lossless formats; back in the day, game engines preferred TGA (something about alpha depth), but to Unity and other modern engines they're interchangeable, and PNG is compressed.

I wanted to bulk convert all the .tga files to .png, while keeping Unity's .meta file references intact. Turns out this is pretty easy, thanks to this Stack Overflow answer to my question; the gist is sketched below.
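Roughly, the trick is to convert each image losslessly and then rename the old .meta file so Unity keeps the same GUID. A rough sketch assuming Pillow (back up first, and handle *.TGA capitalization if your filesystem is case-sensitive):

# Rough sketch: bulk-convert *.tga to *.png while keeping Unity's asset GUIDs,
# by renaming foo.tga.meta -> foo.png.meta. Assumes Pillow; back up the repo first.
from pathlib import Path
from PIL import Image

for tga in Path("Assets").rglob("*.tga"):
    png = tga.with_suffix(".png")
    Image.open(tga).save(png)                 # TGA and PNG are both lossless
    meta = tga.with_name(tga.name + ".meta")  # Unity stores the asset GUID here
    if meta.exists():
        meta.rename(png.with_name(png.name + ".meta"))
    tga.unlink()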

3) I excised lfs files from my git history.
To remove LFS from a repo, I couldn't just do it in the present day; I had to be a revisionist historian: these files never existed ;) Unfortunately, this means that if I rewind my repo to before the lfs removal commits (which I tagged), some references will be broken. But, at least the history will be there for diagnosis.

git-filter-branch is the traditional way to remove unwanted files from your git history. However, it can be slow, and bfg-repo-cleaner is a shockingly feature-packed alternative.

I used this command:
bfg --delete-files '*.{tga,TGA,tif}' --protect-blobs-from master

4) I uninstalled git-lfs from the repo, I think.

I couldn’t find any documentation that gave me a clear answer on a clean uninstall of git-lfs, so here’s some things I did, some of which may be unnecessary.

I ran this command in the repo:
git lfs uninstall

I deleted some lines with lfs in them from .git/config

I deleted the .gitattributes file, which contained the listing of all the files I used with lfs.

However, once I did all this, the .git/lfs folder still existed, with 2.5 GB in it. I know that git does lazy, occasional garbage collection, so it's possible that this hadn't been triggered yet. I just removed the folder.

5) I gradually pushed my new repo in parts.

After bfg, nearly all my git repo history had diverged from the remote on Github. I had to force push the new repo (this is a dark-aligned force power, btw). This didn't work initially, failing with the error "The remote end hung up unexpectedly".

Some answers suggested I increase my buffer size, with:
git config http.postBuffer 524288000

This did not resolve the problem at first.
This post suggested I push the repo history in parts.

It’s unfortunate this process is so manual. My repo had 477 commits, from looking up my rev list with:
git rev-list --all --count

However, HEAD~477 isn’t an accessible commit, probably due to merged branches in my history.

Apparently this is my first commit accessible this way:
git push -u origin HEAD~305:refs/heads/master --force
I finished the push with
git push -u origin HEAD~105:master
git push origin master

6) I tested that git-lfs was gone by cloning without git-lfs installed.

The .git/lfs folder was present, but empty, when I cloned again. Suspicious, but the LFS data seemed to be gone.

7) I resolved billing issues with GitHub support.

After deleting the original git-lfs repo, it took a couple hours for my LFS data usage to disappear. Even then the usage didn't go to zero – turns out there was another repo I'd forgotten about that was still using LFS, and I had to contact GitHub support to find out which one. Unfortunately, the usage bar doesn't tell you which repos are using the quota.

Posted in technical

POV Edit: Star Wars' Obi-Wan Kenobi

I’ve taken Star Wars I-VI and cut out every scene that Obi-Wan Kenobi didn’t directly witness. I wasn’t trying to make a movie, and the end result has some bumpy transitions, but in the spirit of this upcoming standalone film, I wanted to get to know Obi-Wan Kenobi’s life better.

I'm calling this a "POV Edit" because I think you could do this for any character, in any movie. The rules are simply to cut out any scene the character doesn't witness directly. If the character gets knocked out, the edit just jumps ahead to when they wake up again. I also cut out scenes that Obi-Wan only heard about later, but didn't witness directly.

This edit is the answer to the question "What does Obi-Wan experience and know about the world?" Parts of the final edit are just as disorienting to us as they would be to Obi-Wan, with major events happening off-screen and out of his control. Because I think a lot about non-linear narrative in theatre and games, this helped me think about how to use lack of information, or the possibility of misinterpretation, when designing this type of media.

But let’s talk about the edit. The final cut is 3:35.
48 minutes from The Phantom Menace
63 minutes from Attack of the Clones
64 minutes from Revenge of the Sith
33 minutes from A New Hope
6 minutes from The Empire Strikes Back
3 minutes from Return of the Jedi

Major stuff that Obi-Wan doesn’t see:
– Tatooine Qui-Gon/Anakin seduction
– Almost all awkward Jar Jar
– Padme/Anakin seduction scenes. It's actually spookier because it seems like Anakin has magic-seduced her. All Obi-Wan has seen is Anakin confess he's distracted by her.
– Anakin/Palpatine
– Lots of Yoda/Windu political context scenes
– C3PO/R2D2 misadventures

A recurring theme throughout Obi-Wan's story is powerlessness despite best intentions. Anakin gets dropped in his lap, and seems to be able to do really creepy stuff when not in Obi-Wan's presence. I use the term "seduction" above for both Qui-Gon and Padme, because Obi-Wan hears second-hand about these people he knows well developing an infatuation with Anakin. There's one funny sequence on Tatooine where Obi-Wan is just hanging out on the Naboo royal ship, and gets a series of increasingly enthusiastic phone calls from Qui-Gon about this kid he's hanging out with.

Observations:
– Especially in the prequel movies, lots of climactic sequences involve cutting between simultaneous action scenes. For example in The Phantom Menace, it's between the Duel of the Fates, Anakin's dogfight next to the Trade Federation station, and the Gungan-Trade Federation battle. In these, we're just spending time wherever Obi-Wan is, which often seems more boring, pacing-wise. I think this is more a general film note than a POV note.
– Sometimes characters show up at "just the right time", which, without the knowledge from scenes Obi-Wan didn't witness, feels like a deus ex machina. e.g. Yoda has a feeling Dooku is escaping in Attack of the Clones, and then confronts him seemingly just in time. This sort of reminds me of some of the complaints about Neal Stephenson's writing, where major events happen outside the main POVs. I think this is great – someone isn't in pause mode if they aren't in your life.
– Did Obi-Wan see Count Dooku fight Yoda? It seems like he passed out some time after Yoda entered the room, and woke up sprightily just as it was over. I left the fight out of the final edit, but it's ambiguous.
– It's ambiguous whether General Grievous died in the intro of Revenge of the Sith. During his escape, all the escape pods are launched, which to Anakin and Obi-Wan seems like an unlucky malfunction at the time, until Obi-Wan deduces that it's Grievous. What a guy! The scariness of this POV is way cooler than in the original film, which shows both sides of this action sequence.
– Padme starts to go mad as she's dying, while Anakin is also becoming Vader. Is our audience smart enough to be able to tell that Anakin is becoming Vader off-screen? I believe they can be.
– Obi-Wan’s first line ever is “I have a bad feeling about this”
– Obi-Wan is the first Jedi we see the Order 66 order given to.
– At the end of Episode 3, Yoda says to Obi-Wan, "I will teach you to commune with the spirit of Qui-Gon", and we cut to like two decades later in A New Hope as Obi-Wan creepily Jedi screams to spook some minor Jawas. So things have not been going hot for Obi-Wan.
– Obi-Wan never gets to talk to Leia as an adult.

Notes on the POV Edit process:
There are some cuts I had to make which felt arbitrary. As Obi-Wan is a Jedi, there are some scenes he doesn't directly witness but does sense. For example, in A New Hope, I include the shot of Alderaan blowing up because Obi-Wan directly reacts to it in the next scene.
Character vs Action scenes seem to have different rules
Establishing shots are okay and good. Establishing shots that feature other notable characters without Obi-Wan (e.g. Mace Windu) are not okay as they sometimes contain plot information Obi-Wan doesn’t have.
I haven’t bothered to make any smooth transitions at all. If I tried to make a couple elegant ones, then I’d have to do it everywhere. Lots of scene transitions have weird cuts with background audio and music because they’re taken out of the original movie. This type of edit is primarily informational.

If someone created metadata for movies on a per-scene basis for which characters were included, we could generate these edits automatically, which would be sweeeeet.
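To make that concrete, here's a hypothetical sketch of the generator – the Scene shape and the timings are invented; the real work is producing the metadata:

# Hypothetical sketch: given per-scene metadata (start/end times plus who is present),
# a POV edit is just a filter over the scene list.
from dataclasses import dataclass

@dataclass
class Scene:
    start: float     # seconds into the source film
    end: float
    characters: set  # characters who directly witness the scene

def pov_edit(scenes, character):
    """Keep only the scenes the character is present for, in order."""
    return [s for s in scenes if character in s.characters]

scenes = [
    Scene(0, 120, {"Obi-Wan Kenobi", "Qui-Gon Jinn"}),
    Scene(120, 260, {"Anakin Skywalker", "Shmi Skywalker"}),
]
for s in pov_edit(scenes, "Obi-Wan Kenobi"):
    print(f"keep {s.start:.0f}s to {s.end:.0f}s")  # feed these ranges to your video editor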

Other potentially interesting POV Edits:
Hans Gruber in Die Hard
Godzilla in any Godzilla movie
Aragorn in The Lord of the Rings
The Terminator in The Terminator
The Shark in Jaws
Prince Edward in Braveheart

Posted in art, commentary, creations