Omnipresent computers – Part 2

Once more into the world of the future and once more into the world of terrifying but strangely desirable paths.

Imagine this: you’ve got this great idea for a drawing. You sit at your desk, call up some pictures to remind yourself of the exact details of the hyper-modern weapon you are equipping your space fish with (*cough* Avengers *cough*). Then you draw, and draw, and decide that the terrifying laser-equipped space bass is really rather good and the Internet will love it. So you post.

Now, at the moment, I have a desk with screens, and I could draw, if I had any talent at it. Once I had finished, I’d turn around, walk over to the scanner, hit the button, select where I wanted to save/send/print it (NB: a future OC on that whole fuss), then email it to myself (different computer and all) and then finally share it. And I’m okay with that, because I hardly ever share things, and I can’t draw… I digress.

Cameras are getting pretty decent these days: tiny image sensors capable of taking sort-of-decent images at resolutions which allow pretty good crops to be taken, and these sensors will only get better. At some point the processing might actually catch up, but that is a little further off. Imagine if, while you were drawing, your computer videoed all of your pen strokes and snapshotted each time you moved your hand away, so that when you decided to post, you didn’t even need to scan: it knew what “the drawing” was and went ahead and posted it on your angsty drawings blog. Really, really high resolution, beautifully lit, and with a complete “revision history” of most of the drawing’s progress. You notice a problem with some of the shading? You could edit an area without actually having to rub anything out… weird stuff.

I love this idea: your work area has cameras trained on it, from enough different angles to capture 3D hand movement and digitize objects and documents, without any fuss or special hardware. I want to turn a rough dimension into an actual measurement? I hold my hands apart to the size, say “measure”, and it’s done. I want a Skype call with decent-quality video? It can do that too!

I can think of hundreds of uses for this always-on image processing, as we are already seeing with motion-controlled televisions or puppet shows. But there are some major problems. Mainly, the processing power needed to take a 14-megapixel sensor at 50 fps, make a depth map (or even a full model), extract where the person is, recognize the gesture and react in real time (while still running that favourite program) is something that I doubt any single computer could pull off. Using back-of-envelope calculations, that is 6.3 gigabytes a second of data pouring in (3 cameras, 24-bit colour, 14 MPx, 50 fps). That’s raw data, let alone the hundreds of filters one has to run on it. Not a chance.
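If you want to check my arithmetic, here it is as a few lines of Python, using exactly the numbers above and nothing clever:

```python
# Raw data rate for the setup above: 3 cameras, 14 MPx each,
# 24-bit colour (3 bytes per pixel), 50 frames per second.
cameras = 3
pixels_per_frame = 14_000_000
bytes_per_pixel = 3
frames_per_second = 50

bytes_per_second = cameras * pixels_per_frame * bytes_per_pixel * frames_per_second
print(f"{bytes_per_second / 1e9:.1f} GB/s")  # 6.3 GB/s, before any processing at all
```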

Always-on high-definition cameras also bring big personal privacy problems, especially given the inevitable level of networking in any system this ran on. There are a few ideas I came up with to help quell those feelings of insecurity. The first: two modes, in hardware on the cameras themselves: a high-resolution, slow-frame-rate mode (which also addresses the bandwidth problem) and a low-resolution, fast-frame-rate mode, with the current mode shown by a light or some infallible indicator on the actual camera.
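As a very rough sketch of what I mean (the numbers and names here are entirely made up; only the two-mode idea is the point):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CameraMode:
    purpose: str
    width: int          # pixels
    height: int         # pixels
    fps: int
    indicator: str      # what the hardware light on the camera itself shows

# Hypothetical figures: a detailed-but-slow mode for documents and drawings,
# and a fast-but-fuzzy mode for gesture tracking, each with its own tell-tale light.
HIGH_RES_SLOW = CameraMode("capture documents/drawings", 4600, 3400, 2, "solid red")
LOW_RES_FAST = CameraMode("track hands and gestures", 640, 480, 50, "blinking green")
```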

Another feature would have to be firewalling, even inside individual applications: the camera and any network-connected components would have to be separated by a strong data wall, limiting the bandwidth between the two so that no video stream could be passed. This wouldn’t stop everything, but it would help stop live surveillance. Video-chat services or online games could be given express user permission to break through, though ideally this would be clearly indicated somewhere whenever it was in use. It sounds messy to implement, but in a model of crazily powerful computers such a problem would largely dissolve.
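Roughly, I imagine a hard cap on how much data the camera side can ever hand to anything network-facing. A toy sketch of that data wall (the class, the rate and the names are invented for illustration, not any real API):

```python
import time

class CameraDataWall:
    """Caps the bytes per second that camera-side code may pass to anything
    network-facing, so a live video stream simply cannot fit through."""

    def __init__(self, max_bytes_per_second=50_000, user_permission=False):
        self.max_rate = max_bytes_per_second    # far below any usable video bitrate
        self.user_permission = user_permission  # e.g. an expressly approved video-chat app
        self._window_start = time.monotonic()
        self._sent = 0

    def allow(self, payload: bytes) -> bool:
        if self.user_permission:
            return True  # the user has expressly opened the wall
        now = time.monotonic()
        if now - self._window_start >= 1.0:  # start a fresh one-second window
            self._window_start, self._sent = now, 0
        if self._sent + len(payload) > self.max_rate:
            return False  # over the cap for this second: block it
        self._sent += len(payload)
        return True
```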

Getting computers to see usefully is something I’d certainly like to see, but we have a long way to go (mainly in processing) before such a useful bit of software/hardware comes forward.

See you in the future.


Omnipresent computers – Part 1

The future of technology is something that occupies me, partly because it’s what will eventually be my everyday pursuit and partly because current computers are just unsatisfactory. We should be living in the future, but we seem to be ever stuck in the slowly advancing present (a statement which makes no sense, I’ll grant). Anyway, here is the first installment in my n-part series.

Google Glasses

The Google Glass prototype/first model

Google recently announced its “Glass” project: a device and system which provides augmented reality using a pair of fairly futuristic-looking glasses. Aside from the usual arguments around letting a computer giant (or in fact any company) have access to a high-resolution camera pointing at everything you see, and at your eye, while displaying content to you almost continually (an issue I’ll be sure to write about in a future OC), I can see several problems:

Handy solve-all computer systems have been around for ages now, from the Google Mail auto calendar integration to the ever-popular Siri. They are great, and probably really useful (I’ve never really used them in a day-to-day way, I’ll admit), but they have a few big problems of their own. The main one, and the hardest to solve, is limited response: as the late Dr Lanning in the film I, Robot says, “My responses are limited, you must ask the right questions”. All of these systems suffer the same problem: they use templates to know how to go about something. Each use case is programmed in explicitly, so even if a little automatic re-phrasing of the question or request is applied, if you asked it to send an email of all the texts you had sent [insert person] to [another person], it would be stumped, unless the makers of the system had had some pretty far-out and comprehensive ideas.

And funnily enough this type of system makes the whole scenario so much simpler: if one has a set of templates, keywords and re-phrasing techniques which all link explicitly to a method of communication or research, one can just think up a load of use cases and implement them, without needing much processing power (after all, these things are meant to be used, so there is no point making them take months to get directions to your local corner shop).

A template might go like this:

“[Optional: please] [Send a message, Text] to [Contact name or number] [saying, asking, etc.] [A string]”

or

“[How would I get, directions] to [the nearest] [Place, category of place, address]”
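To make the point concrete, here is roughly what that kind of matching boils down to, as a toy sketch in Python (the patterns and handler names are mine, not how Siri or any real assistant is actually implemented):

```python
import re

# Each template pairs a pattern with an explicit handler: nothing outside
# this list of patterns can ever be understood.
TEMPLATES = [
    (re.compile(r"(?:please )?(?:send a message|text) (?P<contact>\w+) (?:saying|asking) (?P<body>.+)", re.I),
     "send_text"),
    (re.compile(r"(?:how would i get|directions) to (?:the nearest )?(?P<place>.+)", re.I),
     "get_directions"),
]

def interpret(utterance: str):
    for pattern, handler in TEMPLATES:
        match = pattern.search(utterance)
        if match:
            return handler, match.groupdict()
    return None  # stumped: the request fits no template

print(interpret("Please text Alice saying I'm running late"))
# ('send_text', {'contact': 'Alice', 'body': "I'm running late"})
print(interpret("Email Bob every text I've sent Alice"))
# None - exactly the "limited responses" problem described above
```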

And of course such systems would know how to deal with those sorts of requests, and this is all fine for most of the things we do, until we actually want something done. You shouldn’t have to know the limitations of a system while using it, and this has been the bane of such systems forever (along with some other issues involving voice-to-text capability and “always on technology” – a topic I’ll go into in another OC).

Seemingly the only way of sorting this out is to have an intelligent computer system (AI-driven) which can wheedle its way into file systems, networks, programs and APIs to give it pretty much unlimited control over whatever you might want to access. Hardly a comforting proposition, but one which would give the best end results.

So maybe that’s the way it will go: computers actually doing a good job in terms of everything-control, rather than being able to tell you where to bury bodies but not where the nearest Michelin-starred restaurants are.

Casual Computing

How would you like to be using your computer? Anything you want, have a think (or even comment; it might be interesting if you did that before reading the rest of the post…).

I’m sure I’m not the only person seeing the slow and long-awaited rise of a concept I would call casual computing: the move away from the omnipresent mouse, track pad and keyboard which have been our main interface with computers for decades. Touch screens coming to more and more devices are not the main change I’m talking about: the whole attitude of computing is changing. Browsing has long been a bit of a “time wasting” activity but is more and more becoming a leisure activity, commonly done while not-watching-but-kind-of-watching a film or some such, to the point where new devices have come out pretty much just to fulfil this one function.

Tablets, in my view, are changing computing more than the last 10 years of innovation have. No longer are you worried about using a menu to get to a program to search the web; you pick up a slate of electronics while sitting on the sofa and just start browsing, “facebooking” or chatting. They have started to blur the boundaries between digital information and everyday offline life in a really large way, even more than the invention of the real smart phone (seeded by Apple, I will concede) did.

They make interfaces easy; they use real-life sensor data, cameras and blistering performance to deliver a much more intuitive and less intrusive way of experiencing technology. Intuitive interfaces have long been the aim of all interface designers, and this has come on in leaps and bounds (although it still has a long way to go) with the introduction of such technologies.

Voice control has traditionally failed; perhaps Siri will help change this, but it needs to be on tap all the time, and the same goes for movement control. What’s the point in having voice control when you have to touch the thing to actually get to that mode? That means the whole system has to work wherever you are, not just right next to your device.

So what of the future? Perhaps the trend from phones to tablets will continue, leading to the long-sought-after desk computer, rather than desktop: an interconnected area where everything is accessible from your coffee table, whether playing huge games with your friends or writing a research paper. The future is impossible to predict accurately, but what I am sure of is that computing is on a consumer-friendly course; whether through voice control, movement control or just the plain touchscreen, something is going to change pretty dramatically in human-computer interfaces, and it has already started.

As a side note, this blog has recently got its first thousand views! So thank you and keep reading 🙂
