I got a couple of new ideas these past few days. They each deserve a separate article, but since I’m dead-tired from working on TimeOP, I’ll just lay out some bullet points.
I’m very curious what you think:
- User Interface Usability Metrics. There are a lot of UI prototyping tools (the best I know of is FlairBuilder), but none of them gives you any metric of how usable the interface you come up with really is. I know for a fact (thanks to my Human-Computer Interaction course) that there are analytical models for assessing how good an interface is: computing the time required to perform simple tasks in the UI (“find project”, “add a contact”, “send message”), computing the length/duration of the critical path through certain tasks, or the visual complexity of the user’s viewpoint at any given moment. One way to do this is statistical: let users run like rats in a maze and time how long it takes them to get to the cheese. But while users are complex beings (well, some of them), they are a pretty scarce resource nowadays, one you cannot afford to waste. There is another way: modeling and simulating user behavior on a given interface. Simulating such a model could help developers spare users a lot of design horrors and avoid a lot of obvious mistakes. There are quite a few models of user behavior, but I don’t know of any reliable commercial implementation to date. Cristian, if interested, drop me a line.
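To give a flavor of what an analytical model looks like, here is a minimal sketch of the Keystroke-Level Model (one of the classic HCI models for predicting expert task time). The operator times are the standard published averages; the “add a contact” operator sequence is entirely hypothetical, made up for illustration.

```python
# Minimal Keystroke-Level Model (KLM) estimator.
# Operator times are the classic averages from the HCI literature.
KLM_TIMES = {
    "K": 0.28,  # press a key or button (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def estimate_task_time(operators):
    """Predicted expert completion time (seconds) for a KLM operator sequence."""
    return sum(KLM_TIMES[op] for op in operators)

# Hypothetical "add a contact" flow: think, point at the button, click,
# home to the keyboard, think, type a 10-character name, point at Save, click.
add_contact = ["M", "P", "K", "H", "M"] + ["K"] * 10 + ["H", "P", "K"]
print(f"Predicted time: {estimate_task_time(add_contact):.2f} s")  # ~9.06 s
```

A prototyping tool could attach such a prediction to every task flow automatically, long before a single real user is timed.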
- Classical (PERT) project planning is an NP-complete problem (long story short: it takes a really long time to solve). Once all the dependencies between tasks are defined and the resource constraints have been added, this basically becomes a version of the Multiprocessor Scheduling problem: you have a bunch of tasks, a bunch of workers (processors), and you need to find a schedule for the tasks (knowing that a worker shouldn’t work on two tasks at the same time). With a great number of workers and tasks, this problem takes a lot of time to solve (even on a computer). However, genetic algorithms are famous for giving near-optimal solutions to such problems in a lot less computing time. I can’t help wondering: isn’t there an implementation based on genetic algorithms that schedules tasks in a project?
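As a sketch of the idea, here is a tiny genetic algorithm for the simplest variant of the problem: assigning independent tasks to workers so as to minimize the makespan (the finish time of the busiest worker). The task durations are made up, and real project scheduling would also have to encode task dependencies in the fitness function; this just shows the evolve-select-crossover-mutate loop.

```python
import random

random.seed(42)

# Hypothetical task durations (hours) and a fixed pool of 3 workers.
DURATIONS = [5, 3, 8, 2, 7, 4, 6, 1, 9, 3]
N_WORKERS = 3

def makespan(assignment):
    """Finish time of the busiest worker; lower is better."""
    loads = [0] * N_WORKERS
    for task, worker in enumerate(assignment):
        loads[worker] += DURATIONS[task]
    return max(loads)

def evolve(pop_size=50, generations=200, mutation_rate=0.1):
    """Evolve task-to-worker assignments toward a low makespan."""
    pop = [[random.randrange(N_WORKERS) for _ in DURATIONS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(DURATIONS))
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < mutation_rate:   # reassign one random task
                child[random.randrange(len(child))] = random.randrange(N_WORKERS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
# Total work is 48 hours over 3 workers, so 16 hours is a lower bound.
print("best makespan:", makespan(best))
```

With dependencies added, the chromosome would become a task ordering plus an assignment, but the overall loop stays the same.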
- You’ve probably heard all the buzz about the Xbox Kinect (aka Project Natal). It’s basically a controller-free gaming peripheral: it has a depth camera and a regular RGB camera, and this way it can accurately measure where your arms and feet are. How cool is that? Not so cool, because after spending $399.99, you’ll only get to play a couple dozen games. What if one could use just software to get half the quality of controller-free gaming for free? Imagine this: four colored paper bands – two on the hands, two on the feet. A software component would just recognize the color keys and place the arms/legs in 2D space accordingly. Some depth information could be gained from analyzing how big the color spots are, but that wouldn’t be too accurate. In any case, with this kind of software component (all the hardware you need is colored paper), any game dev could bring a level of controller-free control with just a webcam (which a lot of people already have). Detecting spots of color and color-keying isn’t that hard, especially using libraries like OpenCV (which is also used for face detection and more complicated stuff). I just wonder: what other kinds of game controllers could one imagine using just a regular laptop webcam?
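The core of the color-keying idea fits in a few lines. A real implementation would run OpenCV’s color thresholding on live webcam frames, but here is a pure-Python toy version on a tiny synthetic image: find the centroid of pixels close to a marker color, and use the spot’s pixel count as the crude depth cue mentioned above. The frame and colors are made up for illustration.

```python
def track_marker(frame, target, tolerance=40):
    """Return the (row, col) centroid of pixels within `tolerance` per channel
    of `target`, plus the spot size in pixels (a rough, inverse depth cue)."""
    rows, cols, n = 0, 0, 0
    for r, row in enumerate(frame):
        for c, (red, green, blue) in enumerate(row):
            if (abs(red - target[0]) <= tolerance
                    and abs(green - target[1]) <= tolerance
                    and abs(blue - target[2]) <= tolerance):
                rows += r
                cols += c
                n += 1
    if n == 0:
        return None, 0          # marker not visible in this frame
    return (rows / n, cols / n), n

# 5x5 synthetic frame: a 2x2 red "paper band" on a black background.
frame = [[(0, 0, 0)] * 5 for _ in range(5)]
for r in range(1, 3):
    for c in range(2, 4):
        frame[r][c] = (250, 10, 10)

centroid, size = track_marker(frame, target=(255, 0, 0))
print(centroid, size)  # (1.5, 2.5) 4
```

On real webcam frames you would threshold in HSV rather than RGB (lighting changes mostly affect brightness, not hue), which is exactly what OpenCV’s color-space conversion plus range thresholding is for.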
Oh well, that was my rant for the evening, hope you enjoyed it.
Don’t forget to leave your comments.
And don’t take it too seriously.
Whatever I say will pale in comparison to Confession: The Roman Catholic App