After attending a conference on roguelikes over the weekend, I was talking with my friend Randy Lubin about how players move from learning a game's rules to actually playing it. We discovered/invented this really cool concept:
Game Mechanics Compression Ratio: the ratio between the cognitive size of a game's initial instructions, once understood, and the complexity those instructions create during gameplay.
This is a computer-science-flavoured analogy to data compression. If we think of a zip file as the “compressed” version of the game, then the initial instructions are that zip file. The full complexity of the game is the “uncompressed” version: what happens after you open the zip. The compression ratio is the size factor between the compressed and uncompressed versions. Compression ratios can vary a lot between different types of data.
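To ground the analogy: literal compression ratios really do swing wildly with the type of data. A quick sketch using Python's standard-library zlib (the sample texts here are just placeholders):

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Uncompressed size over compressed size; higher means more compressible."""
    return len(data) / len(zlib.compress(data))

# Highly repetitive data compresses extremely well...
repetitive = b"place a stone, capture a group " * 500
# ...while random bytes barely compress at all.
noise = os.urandom(len(repetitive))

print(compression_ratio(repetitive))  # large ratio
print(compression_ratio(noise))       # roughly 1.0
```

The cognitive version of the ratio we're describing isn't literally measurable like this, of course; the point is only that "same-sized description, wildly different unpacked size" is a familiar property of compression.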
When we were talking, I used Quixote Games’ The Racket as an example of a game with a surprisingly large compression ratio. From a very basic set of instructions and starting items, you end up with a game that is surprisingly complicated, especially in the social relationships it creates. And you can get going immediately.
Go is usually the example given of a surprisingly complex game, given its simple, “elegant” rules. However, Go has a multi-stage decompression:
0. The fundamental rules
1. The implications of those rules
2. Playing a game.
You can’t really get to playing Go until you understand the implications of the fundamental rules, for example, the notion of “eyes”. Understanding these implications properly can take a long time and a few mock playthroughs, which makes Go's compression ratio lower. Again, we’re considering “compression ratio” to be a cognitive concept, rather than the amount of information that a literal description of the Go rules would require.
Nethack and other roguelikes are examples of a bad compression ratio. There’s so much stuff you have to learn, and so many mechanics in the base game, that while you don’t need to understand them all to start playing, they’re still there, and you’ll have to come across them eventually to really master the game.
It’s been a while! Lots of excitement is upcoming and I’m happy to finally share.
At the end of April 2016, after a year and a half, I left Occipital and returned to Toronto to focus on personal projects. I had contemplated this move for a while, and the timing just happened to work out. I had worked on some really amazing things at Occipital, including building out access to deeper features of the Structure SDK, increasing Unity integration, and the early parts of the upcoming Bridge Engine for Mixed Reality.
Transitioning from a life in academia to professional software development is a big change. Almost all the software I’d written by the time I started at Occipital in late 2014 was for me alone, or for a demo that I’d be present at, in person, in case something went wrong. At Occipital, I had to massively professionalize: not just write code that exposed new features and interaction models, but code that worked robustly across several devices while remaining maintainable. I’m saying this as a warning for other academics who join industry, but also as a call to adventure! More academics should spend time in industry! I have since looked back at code I wrote while still in academia and it’s often embarrassing; the worst parts are over-rushed and, paradoxically, also over-designed. I am now a much more well-rounded programmer.
The more time went on, the more I found myself getting involved in extracurricular projects. These took up more and more of my resources (temporal, mental, emotional), and I eventually realized that they all fit into a single theme. While I had previously seen myself as an interaction designer for novel interfaces, what I really, truly, deeply cared about was building novel participatory digital theatre-like experiences. With the magic of hindsight, one can see how the latter is a more fully actualized version of the former. At Occipital, I was working on a very powerful platform for other people to build content on. I realized it made more sense for me to be building the content myself, in ad-hoc collaborations suited to a given project, rather than as part of a persistent company. At least for the next little while.
So, I quit Occipital, exercised all my options, and then, since I was on a TN Visa and Toronto is much cheaper than SF, I moved back to Toronto. I established a sole proprietorship to do contract work, and made sure the keeping-me-afloat contract work never exceeded 50% of my time.
I met Rob Morgan at GDC 2015, where he styled himself as a video game writer for Virtual Reality. We chatted back and forth over the following year. When he saw The Painting, the project Josh Marx and I made, he ecstatically recruited me to do something similar in the UK. And thus arose Coming Out, a locative audio narrative experience about the Future of Love, wonderfully digital and queer. This project was sponsored by the UK arts organization NESTA, and I got to fly to the UK twice over the summer: first to run a preview version at the Last Word festival, then to do an in-situ installation at the site where the final project will take place during FutureFest, September 17-18. Rob and I iterated extensively on the UX and the wording of the script; one intentionally unusual aspect of this project is that the user is never required to look at the screen. It’s meant to be freeing from our usual cellphone-focused daily experiences. An iBeacon cannot detect where someone is looking, so Rob has to write words, and I have to write software, robust enough that we can subtly guide people around as if they’re listening to someone monologue to them over a phone. We have excellent sound editing and voice actors, so I got to listen to lots of swoony British accents.
Post-FutureFest, we’re looking to develop this format further, and are doing so under the label Playlines, a portmanteau of play and leylines. There’s at least one possible locative project in the pipe coming in the next year.
I met Jasper de Tarr through Kinetech nearly a year and a half ago, and we kept talking about doing theatre in virtual reality. What would it look like? Would everyone be in VR headsets (no, why would you need to be in the same place then)? We could have done anything at all, and it would count as VR theatre since no-one else was doing it. We ran several small performances featuring a single audience member in a VR headset, and everyone else around them. It worked surprisingly well. We then did a larger-scale performance for DorkBot at the Grey Area, and this coming Tuesday, our 50 minute show is debuting with the San Francisco Fringe Festival. I don’t want to spoil too much, as I want you to see it. I will say that there are a couple VR headsets, and some of the audience spends time in them. This is our next iteration in meaningful VR Theatre interactions, and I’d love for you to see it and hear what you think about it. Say you’re going on the Facebook event here!
Right now, Raktor thinks of itself as an art collective, not a company. Every VR meetup we go to, we have to re-explain this to the deluge of VR startup folk – no, we don’t have VC funding, no we don’t have a 10x plan, we are simply very competent and experienced participatory improv theatre artists who also can build and run a futuristic digital show that isn’t a half-assed “X for Y” like most of the other stuff you see. Raktor takes its name from a short passage in Neal Stephenson’s Diamond Age, describing audience interaction with traditional theatre a century hence.
We have a Mixed Reality Live VR show in the prototype phase. This has taken a sideline as we’ve focused on polishing the Fringe show, but look for it again in the coming months. We’re currently hunting for a dedicated venue in the Bay Area where we can set up a Vive, a green screen, and host several audience members in person. Our intention for this show is to have half of its audience members in person, the other half coming in over Twitch or another streaming service (highly suggestive hint: contact me if you have a streaming service that wants to work with us). We care a lot about meaningful interactions between the performers and the live and remote audience. There’s no point in live-streaming our show out to the internets unless we can figure out a way for those folks to meaningfully interact. At this stage, we’d rather get 10 meaningfully-interacting audience members than get really popular and get 10,000.
Most Mixed Reality VR videos use a colour camera with a known position relative to the VR rig and a green screen behind. While this works okay, it has two frustrating problems:
(1) props in VR that the performer might manipulate have to be explicitly declared as foreground, and the system chooses whether to render them behind or in front of the performer based on an arbitrary threshold, like the position of the performer’s head. This is broken for multiple performers, and has all sorts of other awkward cases if you’re doing any sort of performative prop manipulation.
(2) the system does not know the pose of the performer, merely which pixels the performer occupies.
The solution I prototyped this summer is to use a Kinect 2 to both determine the performer foreground and track their skeleton. I’ve seen other people use this technique as an alternative to a green screen, but I’m pretty sure I’m the first to use it to replace the live person with a rigged virtual avatar. Check it out here, where I have a magic hat I can put on to change myself from a performer into a virtual character:
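The compositing idea behind this can be sketched as a per-pixel depth test, rather than a single global foreground threshold. A minimal mock-up with NumPy; the array names are illustrative, and it assumes the Kinect depth map has already been reprojected into the virtual camera so the two depth buffers line up:

```python
import numpy as np

def composite(performer_rgb, performer_depth, scene_rgb, scene_depth):
    """For each pixel, show whichever of the performer or the virtual scene
    is closer to the camera. This resolves props per pixel, instead of
    declaring whole objects foreground or background based on an arbitrary
    threshold like the performer's head position."""
    performer_closer = performer_depth < scene_depth
    # Broadcast the (H, W) mask over the RGB channels.
    return np.where(performer_closer[..., None], performer_rgb, scene_rgb)

# Tiny 1x2 example: performer in front on the left pixel, behind on the right.
performer_rgb   = np.array([[[255, 0, 0], [255, 0, 0]]])   # red
scene_rgb       = np.array([[[0, 0, 255], [0, 0, 255]]])   # blue
performer_depth = np.array([[1.0, 3.0]])  # metres from camera
scene_depth     = np.array([[2.0, 2.0]])

out = composite(performer_rgb, performer_depth, scene_rgb, scene_depth)
# left pixel shows the performer, right pixel shows the virtual prop
```

A real pipeline has plenty more to it (calibration, hole-filling in the Kinect depth, temporal smoothing), but the per-pixel test is the part that fixes problem (1) above.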
Current weak points of this approach:
(1) The Kinect 2’s colour resolution is too low for the scale at which you’d want to stage a multi-person performance.
(2) A single Kinect captures only one side of the performer and they will occlude performers behind them.
I’d love an approach that solves these problems, and if I can’t find one (please contact me) I’ll put one together myself if I have to, dammit.
The above Kinect 2/Vive Mixed Reality setup was put together in Toronto’s fantastic Art Incubator/Performance space Electric Perfume. They had all the hardware, and also purely white walls. As I have a background in theatre and installation art, I like to think of them as a white-box theatre space.
For Raktor, one of the other show-running tools we wanted was a live set designer someone could use from off-stage. I pictured this running on a tablet, giving an interactive top-down view of the small stage area. At around the same time, Henry Faber and TIFF approached me to do an installation for TIFF’s entirely sold-out POP series. TIFF is taking a pretty cool approach here. As someone who’s effectively been embedded in VR development for 2 years now, it’s good to be reminded that almost everyone else knows almost nothing about VR. In the normal installation art approach, you can visit the install space at any time over multiple weeks; it’s usually unattended and broken. Instead, TIFF ran 3 time-ticketed intensive weekends showing off new VR platforms, with a staff attendant at each station trained in running it. This ensured that each station stayed working, and that people who had never worn a headset before, and didn’t know what was going on, could be hand-held through the experience.
For TIFF POP 3, I adapted the live set designer idea into Inverse Dollhouse, a 2-player asymmetric VR experience where one person is wearing a headset and passive, and the other is on a tablet viewing them from top-down and active. The person wearing the headset is the “doll” in their dollhouse, and the top-down person is playing with them in their dollhouse, arranging furniture around for them. I retroactively decided that this was inspired by that other piece of Canadian content, The Friendly Giant.
Raktor also uses the technology from Inverse Dollhouse in our upcoming Fringe Show.
I got to present the paper that came from my PhD thesis in Brisbane, Australia for DIS 2016. This 10-page paper is a short summary of the whole work, but as a recovering academic (will write more about that at some point), I must urge that you read my 180 page thesis if you really want to have a hope of understanding what the hell I was going on about.
The technical director for Improv Remix’s live performance, Montgomery Martin, is in the midst of a really cool thesis himself where, to put words in his mouth, he’s looking at software production practices and how they empower yet constrain modern theatre. He also works on Isadora, which is sweet. And he’s competent and nerdy, too.
I haven’t talked much about this publicly, but for the past nearly three years, I’ve been working on a board game with former housemates Cian Cruise and Jan Streekstra. I can’t quite remember how it started, but now we describe it as a cooperative multiplayer animal adventure game, inspired by the movies of our youth: Milo and Otis, Homeward Bound, The Land Before Time and All Dogs Go To Heaven. One of our clever innovations addresses the problem cooperative board games have where, if everyone knows everything (as in Pandemic), one person tends to take over for the group, excluding everyone else. The game is hijinx- and surprise-heavy, like a rogue-lite we hope any group plays a couple times in a session. We playtested the latest, printed version at ProtoTO and even got to demo it to some people from Hasbro. Conceptually, the game is already hella tight, and we’re polishing the heck out of it at a steady pace.
At Occipital, my software development was primarily in C/C++/Objective-C, with occasional work in Unity C#. When I switched to personal prototyping projects over the summer, most of them were in Unity. Unity is great for throwing things together quickly that will Just Work (TM), and its multi-platform support manages to bluntly function surprisingly well, but it definitely feels like I’m in training-wheels mode. I can feel my deep programming skills atrophying slightly, to the benefit of getting content out more quickly, yet with less flexibility. I would love to hear what other programmers’ experience has been switching from C/C++ to Unity C# as their primary language. When you’re doing low-level rendering work in Unity, as I was at Occipital and for other contract work this summer, its closed-source nature and big architectural changes between versions can be crazy frustrating. I have seriously considered doing most of my development work in a native plugin, treating Unity as a high-level rendering and logging wrapper.
I moved back to SF this time around to do the Raktor Fringe show, and I was very fortunate to find a contract day job for at least the next 3 months that is flexible enough to let me run errands during the day for side projects. I’ll say more on it later, but it’s managing to meld my love of recombinant content, artificial life, and experiences playing Dwarf Fortress…
After a year's hiatus, I played through much of Destiny recently. The production design is high-quality good. The writing is maybe good. The presentation is terrible. I’m a fan of the subtle out-of-order storytelling in the Dark Souls series. It seems Destiny tried to do the same, except it ignored the subtlety, so every piece of information you’ve been given is made explicit. Being interpretive human beings, when given a piece of information directly, we presume we’re supposed to do something with it. In the Souls/Bloodborne series, you absorb story; in Destiny, it’s forced into you. In another universe, there’s a great mod of Destiny that, like the Blade Runner director’s cut, removes much of the voiceover and makes the story something that teasingly reveals itself.
It’s clear that much of Bungie’s work takes inspiration from Iain M. Banks. Not just in components (orbitals, varieties of weird alien races, minds, drones, ancient alien stuff, etc.) but in the whole shtick.
As a human being who has filed tax returns in four countries (some sort of measure of world-experience), I’ve seen a tiny part of the world, but, having circled it a few times, I am aware of the sense that it is bounded: it has an end. Both Banks’ and Destiny’s universes are very open-ended. The feeling of that open-endedness is enhanced by how the characters and powers-that-be in them are, by comparison to ourselves, much larger and more far-reaching, yet are themselves presented as small and powerless in their universe.
Destiny was disturbingly bad to play at first. Like, painful. This was a year ago, when I had not yet read any Culture series books (I know, crazy). Related? Unclear.
Destiny’s inventory acquisition milieu triggers by association the same painfulness as free-to-play nickel-and-diming games. There’s so much business to take care of. I pick up a random “coded engram” item, which could be one of a few items, then run an errand to take it to someone whose profession is “Cryptarch”, who then Decodes it into an actual item, with various stats that may make it ambiguously better or worse than the items I currently own. Upgrading the items I own depends on a variety of currencies, some of which I get through drops, some of which I can trade (?) for, some of which require real money, at an ambiguously-varying exchange rate. Oh sure, there’s a quoted price, but maybe the in-brick-and-mortar-store price is different. Between various merchants and mission givers, I sprint, but the distance to each merchant is 1.75x the length a sprint lasts, so I’m always, always, optimizing, always, the shopping experience of the future.
The Culture Series features a post-human(oid) set of species that were raised on separate planets, but over millennia have effectively merged, genetically and culturally. Planets are an inefficient use of living space, so The Culture’s trillions of citizens live on various spatial super-structures, most famously orbitals, ring-shaped structures inspired by Larry Niven’s Ringworld whose surface area, at a minimum, runs to dozens of Earths. The Culture is run by minds or Minds (the capitalization matters somewhat), AIs with faster-than-light processors, made long ago by the biological parts of The Culture and, through a slow version of the singularity, continually self-improved. They use only 9% or so of their capacity in the day-to-day business of running things, and spend the rest of their time in Infinite Fun Space, a sort of consensual hallucination of post-Euclidean video games not really comprehensible to mere mortals. Except no one is really mortal any more, as an embarrassing death may be fixed by a backup, and the especially embarrassing ones feature in cocktail party stories, the frustration being that you’ll never know what that version of you was thinking, going for a reckless adventure 100 days away in spacetime from your last backup. Gender may be switched by thinking about it, though it still takes a few months for all the parts to swap around, and the most common practice for the (unusual) monogamous heterosexual couples is to impregnate each other, then put the fetuses on growth pause as necessary so they may be birthed at the same time.
The Culture is a utopian, socialist, anarchist society. Banks proposed in an ancillary essay that the anarchist qualifier is a necessity, due to a siege being an impossibility in 3D space, though I feel that is an unneeded justification. The most important premise of The Culture is that it is post-scarcity. Anyone can have anything they want. This is such a monumentally successful achievement that Banks deemed it impossible to write interesting literature about citizens fully self-involved in The Culture, so his fiction only documents the sections of it that interact with the great unwashed masses outside. For what is more horrifying for a perfect society than interacting with one that is not? The biggest problem for a sentient being in The Culture is feeling unimportant or useless. The small portion of The Culture’s staff devoted to outside contact is fittingly named Contact, and the quasi-military-fucking-around-with-shit-sketchy-like section is Special Circumstances. But what is everyone else doing?
I think they’re playing Destiny. In Destiny, you’re the protagonist, you’re told you’re special, you’re the first to every interaction with every new enemy! You repaired the machine god that hangs over our city! You discovered the secret of what’s going on on Luna! It’s all been you! And yet, at the same time, the game displays several other…apparently equal?…also-protagonists…people?…of various ranks running alongside you, and even though we’re all killing the god ruining Mars, we’re still stuck in the minutiae of choosing what gun to buy off the guy with the try-hard hood who should be thanking you as a saviour. Are we supposed to ignore these irreconcilable absurdities? If I start monologuing in one of the social areas that All. Is. Not. What. It. Seems. And. I. Think. The. Machines. Are. Trying. To. Make. Us. Feel. Important, will I get dragged off? I don’t mean vaudeville-walking-cane violently; I mean through a sudden you-may-be-a-winner reward, if only you complete this hot new quest that everyone is talking about.
I think a Mind, a machine god we’d make one day, would feel the only ethical solution to making an inevitably inept, past-bound human feel useful is to construct a simulation that makes them feel so. A depressed flock is, frankly, embarrassing. The other Minds would make fun of you on the FTL message boards. In Destiny, you aren’t just anyone, you’re a Guardian, vaguely someone in charge of rescuing/returning humanity to its former glory after some sort of previous Collapse (capitalized). So what is everyone else doing, exactly? Pretty much everyone you run into is a Guardian, so what’s the rest of the 23-chromosome crowd up to? I presume they’re, at a satellite’s-eye view, having a wicked hedonistic time inside the structures you run across, experimenting with various shaders and face paint and teleporting each other’s genitalia inside each other. Casual short-range teleportation appears in both Destiny and The Culture Series, though in the latter it is called displacement. And yet there’s still a viable gun market, with currencies, and you can only carry 9 guns, and it’s kind of a chore dealing with that limit. I suppose The Collapse is a good excuse: There Was Once Good Stuff But Now It Is Broken, and thus we can revert to a feudalistic economy because it’s all crashed to shit, and apparently it’s impossible to run any sort of game with role-playing elements with any sort of economy resembling anything past the 1300s. Because getting good at any economy past then requires a degree and Excel and python scripts, and I’ve already done enough Eve Online in my life.
I think I’m only able to tolerate Destiny now because its implied world and production design are really top-notch great, and its richness of iceberg possibility resonates with The Culture’s whole deal. Also, decades of level and gun-feel design mean that Bungie can make something really fun and slick. And tossing down a grenade while double-jumping and ricocheting off a well-placed nodule and charging my fusion rifle to release it into a headshot in just the chink in a boss’s armour does feel rather splendid.
Though I remain vigilant, squinty and sly-eyed, about the parts of the game that try to convince me to keep playing, because I Am Important To It And It Wants Me To Feel Good. Because, Destiny, if I’m not important, please. Please tell me, I want to help. I know I’m small in this whole thing, but please don’t lie to me. I can change, I can do better, I really can. Let me know what I can do. We’re all in this adventure together. Everyone to the future.
On December 21st, I was in the back of a car that was in an accident on the way to work. It was the first time I’d been in a vehicle that had its airbags deploy. I felt a sense of gratification and immortality at having survived something that totaled 3 cars. All I had was a mild headache. I walked to my office and was fine for a few hours, ish, then had the sudden urge to take a heavy nap. I went to the doctor; it turns out I’d had a mild concussion and moderate whiplash.
I, naturally, took the rest of the day off. I alerted friends on social media. Some responded with minor concussion stories of their own; some responded with horror stories: “…lingering effects lasting a year later”. I didn’t even remember hitting my head, though, thinking back on it now, I wouldn’t remember passing out, because who does? I was in the back of the car that was hit, and had to wake up the driver and front-seat passenger to get them out. So I was better off than them, so I must have been fine, right? The accident looked like this:
t1 = pre-accident
t2 = post-accident
I spent the next 7 days in bed, in a dark room, with no stimulation. The whole point of being sick is supposed to be watching TV and playing video games. I joked that I wished I had gone into a coma instead, so I could just skip to the end, like a real-life version of cryosleep. Even if I had a spare thought, I had to turn the lights on to write it down. I was later gifted a colouring book, which I appreciated. I listened to a ton of long-form audiobooks, detailed in my last post.
Late December/early January is a big time of year for me, a little more so than for most people, because it’s a confluence of holiday parties where you see some people once a year, one of my oldest friends’ birthdays, my birthday, and CES, for which my work builds a hot new tech demo that I’m quite involved with. All of which I had to participate in with less enthusiasm, or not at all, this year.
I’m coming out of symptoms now, though staring too long at a screen or any amount of caffeine or alcohol re-triggers them. I’ve discovered f.lux‘s Dark Room mode for OS X, which shows the screen in monochrome inverted red, which is amazingly easier on my recovering brain.
So I’m calling it a do-over. This last month didn’t count; announcing TIMESKIPDUSTINMAS
Jan 24 – DustinMas Eve
Jan 25 – DustinMas
Jan 31 – New DustinYear’s Eve
Feb 1 – New DustinYear’s Day
Feb 11 – Dustin turns 31
This leap month should solve everything and I look forward to pretending this never happened. I feel this should be a more socially acceptable thing to do for other similar circumstances.
PS. It’s cool that airbags smell like burning fire. That really woke me up quick. I found out later that they are literally fireworks.
PPS. A few of my friends suggested I sue the accident-causer for all they're worth. I haven’t had anything that my medical coverage doesn’t deal with, with minimal fees, and I feel that people suing other people isn’t good in the grand karma scheme.
PPPS. I was in a Lyft at the time of the accident. Lyft and the driver have been absolutely awesome during all of this. My driver was stoic and kept it together, even though he had to lean on me for a bit. Lyft gave me a bunch of free ride credits after they heard about the accident, and I made sure to tell them that it was absolutely not my driver’s fault.
Books are listed with date of completion. This was a year with a massive change in reading style. I only read 10 books in 2014, not including texts skimmed for the sake of picking up something for my PhD. All my spare time was saddled with PhD obligations, and what spare time remained, I really did not want to spend sitting and reading. Reading suddenly picked up again around May 2015, when I was finally done with any major PhD work. I read 34 books this year, tending towards super-long sci-fi books at the end; a genre I feel I’d missed out on for a couple years.
My favourite book this year was absolutely Peter Watts’ Blindsight and absolutely anyone with an interest in neuroscience or civilization or lived experience or space should read it.
I am currently halfway through Kim Stanley Robinson’s Aurora, so it doesn’t make this list.
The Comic Book Guide To The Mission – Lauren Davis – Jan 17, 2015
Martian Chronicles – Ray Bradbury – Feb 11, 2015
Saga Volume 4 – Brian K. Vaughan & Fiona Staples – Feb 20, 2015
Trickster: Native American Tales: A Graphic Collection – Ed: Matt Dembicki – Mar 8, 2015
Micromegas – Voltaire – April 6, 2015
Un Lun Dun – China Mieville – April 7, 2015
Incandescence – Greg Egan – April 26, 2015
Second Quest – David Hellman & Tevis Thompson – April 29, 2015
Little Brother – Cory Doctorow – May 24, 2015
Foundation – Isaac Asimov – May 29, 2015
Excession – Iain M. Banks – June 10, 2015
Homeland – Cory Doctorow – June 25, 2015
The Martian – Andy Weir – June 26, 2015
Seveneves – Neal Stephenson – June 29, 2015
The Astronauts Must Not Land – John Brunner – July 3, 2015
At least 1 Zilla. The Zilla gets tired quickly, so having a lineup of 3-4 Zillas works well.
This game works best in an atrium – it requires a raised platform 3+ meters above the play space for the pilots to stand on. The play space (for The Zilla) is 2 x 2 metres.
1 toy object representing a plane, per pilot. I wanted to find toy planes, but could only find soft plush reindeer. You want these to not be too hard or pokey, as they’ll be slamming into the Zilla repeatedly.
Enough string or rope to dangle the plane from the pilots down to the Zilla. These will get tangled often, so don’t use thread or fishing wire as those are hard to untangle.
How to Play:
Pilots stand on the platform above the Zilla, directing the motion of the plane by expertly swinging their strings around. The Zilla stands in the playspace below.
The goal of the pilots is to hit the Zilla anywhere except the hands or the front of the game; when this happens, the Zilla is DEAD. If there’s a lineup of Zilla, rotate to the next one.
The goal of the Zilla is to hit the planes as many times as possible with their hands before their inevitable death. Don’t grab onto the planes.
(Optional, for balance with more pilots: when the Zilla hits a plane, the pilot must pull the plane all the way up to them and touch it with their hand before returning it to play)
The restriction on the Zilla (to turn them into a Zilla instead of a human) is that they must keep their elbows touching the sides of their torso. This is pretty effective, and it's the only restriction necessary to totally change how your strategy works.
Your goal as a pilot is to infuriate and tease the Zilla so much that they get sloppy and make a mistake and let one of you hit them. Big, figure 8 loops and sudden drops are best for disorienting the Zilla. Expect roaring.
All my income is in the States, and all my student debt is in Canada, so I’ll regularly be transferring pretty hefty sums of money to Canada for probably the next decade.
The support for this is pretty terrible. While money transfer within countries seems very easy, as soon as you cross a border, it becomes un-futuristically awful.
My Canadian bank (CIBC) initially suggested I mail a cheque from my American bank to someone in Canada I trusted, who would then deposit it in my bank account. They’d have to do this once/month. Nope.
The current way I’ve been doing it is via a PayPal account. I transfer money into it from the US, then transfer money out of it to Canada. The turnaround time on each of these transactions is 3-5 business days, so I need to plan balances in both accounts two weeks in advance at any given time. Terrible.
For PayPal, each of the two steps to complete the transfer is through a web interface; it takes about 2 minutes, and I can set a calendar reminder for it. There’s no explicit fee, but I’m aware there can be fees built into the exchange rate. Based on a single data point, it looks like I lose 2% off the market rate reported by http://usd.fx-exchange.com on each transaction.
I wanted to compare with a Wire Transfer. This entailed waiting in my US bank for 45 minutes as two attendants worked on copying a form I’d filled out digitally into another form, and then charged the wrong account. The fee was $60. It took two days to complete (and frankly, given the confusion around what US vs Canadian banks expected, I expected a call saying they’d lost it). And I lost 7% off the market rate.
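Putting my two data points side by side as back-of-envelope arithmetic (the market rate below is an assumed placeholder; the 2% and 7% spreads and the $60 fee are the figures above):

```python
def cad_received(amount_usd, market_rate, spread, flat_fee_usd=0.0):
    """CAD received after a flat fee and a percentage spread off the market rate."""
    return (amount_usd - flat_fee_usd) * market_rate * (1 - spread)

MARKET_RATE = 1.30  # assumed USD->CAD market rate, for illustration only

# Sending $2000 USD:
paypal = cad_received(2000, MARKET_RATE, spread=0.02)                    # 2548.00 CAD
wire   = cad_received(2000, MARKET_RATE, spread=0.07, flat_fee_usd=60)   # 2345.46 CAD
```

On a $2000 transfer, the wire comes out roughly $200 CAD behind, before even counting the 45 minutes at the branch.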
So, Wire Transfers appear to be terrible, and PayPal is the way to go for now. I just wish it was a one-stage, not a two-stage transaction. Am I alone in this? Surely I’m not alone experiencing how stupid this is.
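The gap is stark when you put the two options' costs side by side. A rough comparison using the figures above (2% spread for PayPal; $60 flat fee plus a 7% spread for the wire), with a hypothetical transfer amount:

```python
# Rough cost comparison of the two transfer options described above.
# The 2% / 7% spreads and $60 fee come from my experience; the $2,000
# transfer amount is hypothetical.

def total_cost_usd(amount, spread_pct, flat_fee=0.0):
    """Dollars lost to fees and exchange-rate spread on one transfer."""
    return amount * spread_pct / 100 + flat_fee

amount = 2000  # hypothetical monthly transfer
paypal = total_cost_usd(amount, 2)    # $40
wire = total_cost_usd(amount, 7, 60)  # $200
print(f"PayPal: ${paypal:.0f}, Wire: ${wire:.0f}")
```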
Also, Paypal defaults to showing my balance in British pounds, because I started using it while I worked in the UK. So it's a big currency mess.
I had dental surgery midday Wednesday. It’s totally minor; I now have a small plaster cast in my mouth and it was recommended I don’t talk until Friday to speed up healing.
So I didn't talk all Wednesday night (when I went to a birthday party for a bit) or all day today at work.
I used the following tools to communicate: Virtual Voice, a text-to-speech app on my Android phone apparently designed for deaf people, and Slack, which we normally use at Occipital anyway. When someone would turn to me and speak, I'd listen, then turn and send them an answer on Slack; they'd either look over my shoulder or already be on their way back to their computer if it was the start of a longer conversation that required us both to be sitting in front of our work machines.
Here are the raw observations:
– When you start using hand gestures or a text-to-speech app to communicate, everyone accidentally assumes you’re deaf (even when they know you aren’t) and will start playing charades themselves.
– It’s a bad habit that I tend to interrupt and talk over people impulsively. This (obviously) made me listen to people more. I’d turn away if I was switching from active listening to passive listening mode. Passive listening meaning “I’m looking up or typing the answer to what you’re saying as you’re still speaking”.
– First contact with new people who aren't aware of my condition is unsettling for both parties. I make eye contact and then immediately open my phone and begin typing away. This obviously comes off as massively rude, and I felt shitty every time. I'm supposed to avoid operating heavy machinery while on the post-op painkillers, so I've been cabbing (via Lyft) back and forth to work. I'd have the message "sorry I can't talk, I had dental surgery" queued up in my text-to-speech program so I could avoid the awkward moment, especially when you're first getting in the cab and you're supposed to have the initial transaction of "is this the right car?"
– I tend to speak lots and fast. About 30% of what comes out of my mouth is goofy witticism fluff. This impulse is hard to satisfy when you have a 15+ second lag on communication while hanging out in a group. Though sometimes it comes off as Fridge Logic, which can be funnier. Sometimes.
– As people I communicated with adjusted to my predicament, they got better at constructing their communications as multiple choice questions, e.g. "Give me one finger if you want to play Heart of the Swarm or two fingers for Legacy of the Void".
– When I did the survey of Android Text-To-Speech tools, I was surprised to note all of them had a single text buffer. I found myself wanting 3-5, or a queue of “most recently used” messages or some sort of autocomplete of previous messages. I could manage all of these mentally, and it was frustrating to feel limited by the app.
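That "most recently used" queue I wanted could be sketched like this (a hypothetical design, not any real app's API; the TTS call itself is stubbed out):

```python
# Sketch of a "most recently used" message list for a text-to-speech app:
# re-sending a message bumps it to the front, so frequent phrases stay
# one tap away. (Hypothetical design, not any real app's API.)
from collections import deque

class MRUMessages:
    def __init__(self, capacity=5):
        self.capacity = capacity
        self.messages = deque()

    def send(self, text):
        # A real app would speak `text` via the TTS engine here.
        # Then bump it to the front of the recents list.
        if text in self.messages:
            self.messages.remove(text)
        self.messages.appendleft(text)
        while len(self.messages) > self.capacity:
            self.messages.pop()  # drop the least recently used

    def recent(self):
        return list(self.messages)

m = MRUMessages()
m.send("sorry I can't talk, I had dental surgery")
m.send("yes")
m.send("sorry I can't talk, I had dental surgery")
print(m.recent())  # surgery message first, then "yes"
```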
– As a show of solidarity/mockery, several coworkers got out speech synthesizers and we shot the shit at the end of the day. I appreciated this.
I look forward to speaking again tomorrow. Though I’d be tempted to take a vow of silence again.
Tejo is a sport played in Colombia. It's like curling, except you throw the weights about 60 feet instead of sliding them on the ground. The target is like a sandbox, except it's filled with clay and tilted on its side. Inside it are explosives: little pink triangular paper envelopes. These make very loud booming noises when you hit them, and the whole Tejo hall celebrates. The weights are so heavy and the clay so viscous that you need to pry/dig the weights out.
The admission fee to a Tejo hall is that there is no admission fee. You need to buy a certain number of beers. For the one we went to, in San Gil, Santander, you had to buy 5. The elegance of this is that you aren't explicitly buying a slot of time, because by the time you've finished 5 beers, you've lost any interest in the activity you were doing when you started them.
During the show, the single audience member uses an iPhone in their pocket, with earphones in their ears. Audio cues and events in the show are triggered based on proximity to iBeacons, embedded both in the environment and inside some movable objects.
The iBeacons, donated by Webble, look like this:
We could even take them out of their plastic casing and embed them in real objects. We laid them out in the environment like this:
During development, the script evolved from a finite state machine model, as in this Google Doc:
to a listener model, written in JSON:
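The shift from a finite state machine to listeners could be sketched roughly like this (a hypothetical reconstruction in Python; the beacon names and cue filenames are invented, and the show's actual JSON surely differed): instead of stepping through named states, each listener binds a beacon and a proximity threshold to a cue and fires at most once.

```python
# Sketch of a listener-style script: each entry binds a beacon (and a
# proximity threshold) to an audio cue, and fires at most once.
# Beacon IDs and cue filenames are invented for illustration.

listeners = [
    {"beacon": "painting", "within_m": 1.0,
     "cue": "narrator_intro.mp3", "fired": False},
    {"beacon": "desk_drawer", "within_m": 0.5,
     "cue": "secret_letter.mp3", "fired": False},
]

def on_proximity(beacon_id, distance_m):
    """Called whenever the phone ranges a beacon; returns cues to play."""
    cues = []
    for l in listeners:
        if (not l["fired"] and l["beacon"] == beacon_id
                and distance_m <= l["within_m"]):
            l["fired"] = True  # each listener triggers only once
            cues.append(l["cue"])
    return cues

print(on_proximity("painting", 0.8))  # ['narrator_intro.mp3']
print(on_proximity("painting", 0.8))  # [] -- already fired
```

The appeal of the listener model over an FSM is that the audience member can encounter objects in any order, rather than the script forcing a fixed sequence of states.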
The Painting — An Immersive iBeacon Theatre Experience Powered by Webble SmartSpot