Thursday, 17 December 2015

Conscripts on Doctor Who

As mentioned before, my interests in conlanging and Doctor Who don't overlap as much as I'd like, because the Tardis telepathically translates everything. This apparently goes for writing too - in The Impossible Planet, the Doctor realises that they're in a particularly dangerous situation when they encounter a script that the Tardis can't translate.

However, something odd has been going on this season. Amidst the rumours of the Hybrid, the theme of Truth or Consequences, the story of Ashildr, and the build-up to the death of Clara, there's been another, more subtle theme in the background. In Under the Lake / Before the Flood, the Fisher King scratches this on the wall of his hearse.

The Fisher King's Script

The Doctor can't read it, and has to get Cass to lip-read what the ghosts are saying before he can work out what it means. The reason that the Tardis can't translate it is that the writing is intended to plant a message in the mind of the reader. Also, in that story we have the use of British Sign Language, which the Tardis can't translate because The Doctor's forgotten it.

In The Zygon Invasion / The Zygon Inversion we see this where the Zygon rebels have been active.

Zygon Script

Neither The Doctor, Clara nor the Tardis is present in these scenes, and whatever the poster says doesn't come into the story.

In Sleep No More, we see this Indo-Japanese script (apparently a hybrid of Kanji and Devanagari) on Le Verrier Space Station.

Indo-Japanese script

In this case it's not translated because what we're seeing has been hacked from the visual cortices of those who experienced the events, most of whom could read the script to start with. As in Under the Lake / Before the Flood, Rassmussen's broadcast is meant to be a vector for mental malware.

Finally, in Face the Raven, we get this.

Aurebesh script

This is a bit of an oddity. It's the Aurebesh script from Star Wars, which is simply a cipher for the Roman alphabet (Star Wars never having cared about plausible languages). It says Delorean, which is presumably a Back to the Future reference. It looks like this is just an in-joke.

So, is this leading up to something? Might Doctor Who be about to start using conlangs? And if so, please can I make one?

Thursday, 22 October 2015

Integrating Java with Python the Easy Way

I have an idea for something I want to build, which will involve a speech recognition component, written in Java, and a Hidden Markov Model, written in Python. So that means I have to integrate components written in two different languages. What's the best way of doing it?

One way would be to run Python on the JVM. There is a Python implementation for the JVM, Jython, but from what I've heard it's painfully slow. Since I'm aiming for something as close to real time as possible, it's unlikely to meet my needs. It did occur to me that there could be a faster way to run Python on the JVM. Pypy is a self-hosting, JIT-compiled implementation of Python, which is much faster than the reference implementation. If its code generation phase were modified to emit Java bytecode, then Pypy could run on the JVM. This approach, which I call Jypy, would be a worthwhile project for somebody who knows Java bytecode. Unfortunately, I'm not that person.

However, I then thought about the architecture of my project. I'd already realised that it would have to be organised as a number of concurrent processes, communicating via pipes. I then realised that meant I didn't need to run Python on the JVM at all. The Java and Python components could each run in their own processes, and didn't need to share any resources. The only integration I needed was pipes.

You know the sense of delight when you realise that something complicated is actually simple? That's how I felt when I worked that out.
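To make that concrete, here's a minimal sketch of what pipe-based integration might look like from the Python side. The `subprocess` module launches the other component as a child process and streams text over its stdin/stdout; `recogniser.jar` is a hypothetical name standing in for the Java speech-recognition component, and for demonstration a small Python one-liner plays the part of the child.

```python
import subprocess
import sys

def pipe_to(command, lines):
    """Send lines to a child process's stdin and collect its stdout.

    `command` might be e.g. ["java", "-jar", "recogniser.jar"] for a
    Java component (a hypothetical name, just for illustration).
    """
    proc = subprocess.Popen(
        command,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    out, _ = proc.communicate("\n".join(lines) + "\n")
    return out.splitlines()

# Stand-in child process: uppercases each input line, the way the Java
# side might emit one recognition result per input utterance.
child = [sys.executable, "-c",
         "import sys\nfor line in sys.stdin: print(line.strip().upper())"]

results = pipe_to(child, ["hello", "world"])
```

The nice thing about this design is that the child process could be written in any language at all - the pipe is the whole contract between the two sides.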

Tuesday, 21 July 2015

Conlanging and Progressive Rock

Earlier this year, I helped to organise the Sixth Language Creation Conference, which I did so that I could finally meet some of the friends I've made online over the past few years. Among these were John Quijada (who later wrote some very flattering things about me in the Language Creation Tribune) and David Peterson, of whom some of you may have heard.

Conlanging is not the only thing we have in common. We're all progressive rock fans, too, but while I have never managed to get a band together, John has composed an album's worth of material, and recorded it with David singing. Here's the first track.

The impressive thing here is that David is singing in Ithkuil. Ithkuil is John's conlang, and it's very complex. It has about twice as many sounds as English, and allows more complex combinations. Due to the great precision and concision of Ithkuil, the slightest mispronunciation would change the meaning. It must have taken David ages to learn to sing it.

Wednesday, 17 June 2015

The Bootstrap Problem

A post on Data Community DC discusses Why You Should Not Build a Recommendation Engine. The main point is that recommendation engines need a lot of data to work properly, and you're unlikely to have that when you start out.

I know the feeling. In a previous job I created a recommendation engine for a business communication system. It used tags on the content and user behaviour to infer the topics that the user was most likely to be interested in, and recommend content accordingly. Unfortunately, my testbed was my employer's own instance of the product, and the company was a start-up that was too small to need its own product. I never really got a handle on how well it worked.

This brings me to Emily. Emily isn't a product. It's a personal portfolio project. I had an idea for a recommendation system that would infer users' interests from content they posted in blogs, and recommend similar content. The problem is, the content it recommends comes from the other users, so at its current early stage of operation, it doesn't have much to recommend. The more people use it, the better it will become, but what's the incentive to be an early adopter?

What I seem to have at the moment is a recommendation engine that needs somebody to recommend it.

Tuesday, 9 June 2015

Emily Has Moved

As those of you who've tried out my semantic recommendation system, Emily, will have noticed, it didn't work. The reason was, I'd used the wrong cloud platform. Google App Engine isn't meant for anything that needs as much computation as Emily does, so I've ported Emily to OpenShift. This has the advantage that it gives me much more control of how I write the code, and I can use things like MongoDB and multiprocessing. Let's try this again!

Thursday, 4 June 2015

Developing Emily - Revision 24: Porting to OpenShift. AppEngine wasn't suitable for the computationally intense parts of Emily.

Changed Paths:
    Modify    /trunk/Emily.py
    Modify    /trunk/EmilyBlogModel.py
    Modify    /trunk/EmilyTreeNode.py
    Modify    /trunk/emily.js

Porting to OpenShift. AppEngine wasn't suitable for the computationally intense parts of Emily.

from Subversion commits to project emily-found-a-thing on Google Code http://ift.tt/1G9GWoV
via IFTTT

Tuesday, 26 May 2015

Introducing Emily - my latest Fantastical Device

Emily is a semantic recommendation system for blogs that I've been working on. If you give it an Atom or RSS feed from a blog, it will create a feed of items from other blogs that hopefully match your interests.

It does this by using significant associations between words to infer your interests. Suppose a randomly-chosen sentence from your blog has a probability P(A) of containing word A, and a probability P(B) of containing word B. If there were no relationship between the words, we would expect the probability of a sentence containing both words to be P(AB) = P(A)P(B). If there is significant information contained in the relationship between the words, they will co-occur more frequently than this, and we can quantify this with an entropy, H = log₂ P(AB) - log₂ P(A) - log₂ P(B).
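The formula above can be sketched in a few lines of Python. This is just an illustration of the measure itself, not Emily's actual code; the probabilities are made-up example values.

```python
import math

def pmi(p_a, p_b, p_ab):
    """Association strength in bits: log2 P(AB) - log2 P(A) - log2 P(B).

    Zero when the words are independent (P(AB) = P(A)P(B)); positive
    when they co-occur more often than chance would predict.
    """
    return math.log2(p_ab) - math.log2(p_a) - math.log2(p_b)

# Independent words: P(AB) = 0.1 * 0.2, so H is zero.
independent = pmi(0.1, 0.2, 0.02)
# Words co-occurring four times more often than chance: H is 2 bits.
associated = pmi(0.1, 0.2, 0.08)
```

Summing this quantity over word pairs gives a picture of which associations carry real information about a blog's subject matter.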

Emily uses the strengths of these associations to calculate the similarity between two blogs. Then, if you post an article that makes your blog more similar to somebody else's blog than it was before, that article is recommended to them.

This has been an interesting project for me. I've learned about Google App Engine, pubsubhubbub and Atom. What I need now is for people to try it out. I'm looking forward to when Emily starts finding things for me.

Thursday, 21 May 2015

Developing Emily - Revision 23: Ready to launch

Changed Paths:
    Modify    /trunk/Emily.py
    Modify    /trunk/EmilyBlogModel.py
    Modify    /trunk/EmilyTreeNode.py
    Add    /trunk/emily.js

Ready to launch

from Subversion commits to project emily-found-a-thing on Google Code http://ift.tt/1IN7SNv
via IFTTT

Thursday, 15 January 2015

Alan Fridge

"From now on all rumours must be attributed to Alan Fridge!! BBC mole, Cardiff insider—Alan Fridge!!!"
—Steven Moffat (personal friend of Alan Fridge), Outpost Gallifrey Forums, 6 August 2007

Last year, a tabloid newspaper published a rumour that Jenna Coleman (who plays Clara) was leaving Doctor Who. It was, of course, complete rubbish. Jenna was quick to make it clear that she wasn't going to answer the question either way, since it was a goldmine of free publicity - something that the rest of the cast, crew and publicity department got on board with. Just before Christmas, when the fact that Jenna was staying couldn't be kept secret any longer, the rumourmonger tried to save face by claiming that she'd had a last-minute change of heart, and that the ending of _Last Christmas_ had been hastily rewritten to accommodate this. However, the ending certainly didn't look tacked-on.

So who is Alan Fridge? My theory is that he's a low-ranking member of the production team, a runner or somebody like that. He's around a bit during filming, and picks up things like the row between Clara and The Doctor in _Kill the Moon_, or the old Clara scene in _Last Christmas_, but he doesn't have the big picture. He leaks information to the tabloids to make himself feel important, and probably for a kickback.