Omnipresent Computers – Part 3

I’m a memory-stick person. When I have it with me, it lets me carry my files and some programs (Inkscape, PChat, OpenOffice, Notepad++, the list goes on and on) wherever and whenever I like. But being a small piece of plastic, metal and glass, it’s rather prone to being lost or broken, necessitating frequent and thorough backups. Despite the flaws in the backup software (Syncback SE) to do with “version control”, it really does work: I’ve never lost more than about a day’s work, or lost the ability to work for more than a day.

People say that the cloud is the future: the ability to access, modify and globally update any file one could possibly want, anywhere, any time, over the internet, and this is surely something that “omnipresent computers” (the series) should be all over. Connectivity, ease of use and reliability are surely all improved? Well, yes and no: outsourcing the storage of such sensitive data can be dangerous, especially as one has no control over deletion, duplication or fine-grained access control. One loses the option to run version control alongside plain drives, or to triplicate certain files. I may be a bit of a hypochondriac when it comes to losing data, but this sort of flexibility should be available, especially if the data is your livelihood or personally important. Who says online services duplicate your data at all?

Back to the point: Save As. Why do I have to navigate a file system to save anything? I’ve long wanted a system where one attaches files to particular tags or projects, easily searchable and not shackled to a rigid directory hierarchy. I could then tie a version-control system to all files under a tag or project, call them up, do cross-linked searches and so on.

This system would still need the human-computer interface to become more casual; it’s a particularly AI- and voice-control-friendly way of sorting files, but I can see it being a major change in the stagnating area of consumer file management. Combine that with a service through which one could quickly and easily access one’s files from a home server or a separate server by the same method, including, say, granting guest access to particular tags or files.
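As a sketch of what a tag-based “store” might look like (a hypothetical toy, not any real product’s API; all names are mine):

```python
# Toy tag-based file store: files attach to tags, not folders.
from collections import defaultdict

class TagStore:
    def __init__(self):
        self._tags = defaultdict(set)   # tag -> set of file paths

    def store(self, path, *tags):
        """'Save as' replaced by 'store': attach a file to some tags."""
        for tag in tags:
            self._tags[tag].add(path)

    def search(self, *tags):
        """Cross-linked search: files carrying every given tag."""
        sets = [self._tags[t] for t in tags]
        return set.intersection(*sets) if sets else set()

store = TagStore()
store.store("fish.svg", "drawings", "avengers")
store.store("notes.txt", "avengers")
print(store.search("avengers", "drawings"))  # -> {'fish.svg'}
```

Guest access would then be a permission attached to a tag rather than to a folder, and version control could hook the `store` call itself.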

I’ve written previously about CelTex, and the lovely interface afforded by a single stored project file; add to that the power of visually linked branches, and one can see a future where everyone’s files are neatly organised without anyone having to remember to put things in the right place. Maybe soon we’ll see “Save As” replaced with “store”, and hand the book-keeping over to the computer.


Omnipresent computers – Part 2

Once more into the world of the future and once more into the world of terrifying but strangely desirable paths.

Imagine this: you’ve got a great idea for a drawing. You sit at your desk and call up some pictures to remind yourself of the exact details of the hyper-modern weapon you are equipping your space fish with (*cough* Avengers *cough*). Then you draw, and draw, and then decide that the laser-equipped, terrifying space bass is really rather good and the Internet will love it. So you post.

Now, at the moment, I have a desk with screens, and I could draw, if I had any talent for it. Once I had finished, I’d turn around, walk over to the scanner, hit the button, select where I wanted to save/send/print it (NB: a future OC on that whole fuss), then email it to myself (different computer and all) and then finally share it. And I’m okay with that, because I hardly ever do share things, and I can’t draw… I digress.

Cameras are getting pretty decent these days: tiny image sensors capable of taking reasonably good images at resolutions that allow pretty good crops, and these sensors will only get better. At some point the processing might actually catch up, but that is a little way off. Imagine if, while you drew, your computer videoed all of your pen strokes and snapshotted each time you moved your hand away, so that when you decided to post, you didn’t even need to scan: it knew it was “the drawing” or “it”, and it went ahead and posted it on your angsty drawings blog. Really, really high resolution, beautifully lit, and with a complete “revision history” of most of the drawing’s progress. You notice a problem with some of the shading? You could edit an area without actually having to rub out… weird stuff.

I love this idea: your work area has cameras trained on it from enough different angles to capture 3D hand movement and digitize objects and documents, without any trouble or special hardware. I want to turn a rough dimension into an actual measurement? I hold my hands apart to the size, say “measure”, and it’s done. I want a Skype call with decent-quality video? It can do that too!

I can think of hundreds of uses for this always-on image processing, as we are already seeing with motion-controlled televisions or puppet shows. But there are some major problems. Mainly, the processing power needed to take a 14-megapixel sensor at 50 fps, build a depth map (or even a full model), work out where the person is, recognise the gesture and react in real time (while still running your favourite program) is something I doubt any single computer could pull off. A back-of-envelope calculation gives 6.3 gigabytes a second of data pouring in (3 cameras, 24-bit colour, 14 MPx, 50 fps). That’s raw data, before the hundreds of filters one has to run on it. Not a chance.
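The back-of-envelope figure checks out; redoing the arithmetic (in decimal gigabytes):

```python
# Raw data rate for the hypothetical drawing-desk camera rig:
# three 14 MPx sensors, 24-bit colour, 50 frames per second.
cameras = 3
pixels = 14_000_000
bytes_per_pixel = 3          # 24-bit colour = 3 bytes per pixel
fps = 50

bytes_per_second = cameras * pixels * bytes_per_pixel * fps
print(bytes_per_second / 1e9)  # -> 6.3 (gigabytes per second)
```

And that is before any depth mapping or gesture recognition touches a single frame.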

Always-on high-definition cameras also bring big personal-privacy problems, especially given the inevitable level of networking of any system this ran on. I came up with a few ideas to help quell those feelings of insecurity: two modes, implemented in hardware on the cameras themselves: a high-resolution, slow-frame-rate mode (which also addresses the bandwidth problem) and a low-resolution, fast-frame-rate mode, each indicated by a light or some other infallible indicator on the actual camera.

Another necessary feature would be firewalling, even inside individual applications: the camera and any network-connected components would have to be separated by a strong data wall, limiting the bandwidth between the two so that no video stream could be passed. This wouldn’t stop everything, but it would help prevent live surveillance. Video-chat services or online games could break through with express user permission, though ideally this would be clearly indicated somewhere while in use. It sounds messy to implement, but in a world of crazily powerful computers such a problem would largely dissolve.
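In software, a data wall of this sort could be as simple as a token bucket on the camera-to-network path: cap the bytes per second low enough that metadata gets through but raw frames never could. A minimal sketch (all names and the 10 kB/s figure are hypothetical):

```python
# Hypothetical "data wall": a token bucket limiting how many bytes per
# second may cross from the camera side to anything network-connected.
import time

class DataWall:
    def __init__(self, bytes_per_second):
        self.rate = bytes_per_second
        self.tokens = bytes_per_second   # start with one second's allowance
        self.last = time.monotonic()

    def allow(self, nbytes):
        """Refill the bucket by elapsed time, then permit the transfer
        only if it fits within the remaining allowance."""
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

wall = DataWall(bytes_per_second=10_000)   # far below any video stream
print(wall.allow(2_000))    # small metadata packet: allowed
print(wall.allow(500_000))  # a single raw video frame: refused
```

A trusted video-chat application granted express permission would simply get a wall with a much higher rate, with the indicator light tied to that grant.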

Getting computers to see usefully is something I’d surely like to see, but we have a long way to go (mainly in processing) before such a useful bit of software/hardware comes forward.

See you in the future.

The order of magnitude future

In 1965, Gordon Moore formulated what became known as Moore’s Law: the simple observation that the number of transistors on a chip (and with it, roughly, processing power per unit price) doubles every 18 months to two years. A phenomenal prediction at the time, but one that has held amazingly well so far. We see a new range of amazing processors come out every year, but recently the push has been for parallelism rather than sheer speed of silicon: for density even at the cost of yield and integrity. Markers such as transistors per chip keep going up, but the actual user experience does not necessarily follow (after all, operating systems and programs cannot be custom-fitted to one particular parallel architecture while staying general enough for consumers).

The problem is that we have hit a wall: silicon just isn’t fast enough. It pretty much tops out at around 8.3 GHz (or so the over-clockers seem to have proved). While that might be pretty fast, it means that to get more power, one has to use more die area. On top of this, we are hitting the limits of just how small a single transistor can be made: about 5 nm. So what the world really needs is a completely new technology base: something orders of magnitude faster, smaller and lower-power… And such technologies are starting to emerge in labs around the world; some are even beginning to make real progress, so much so that we may see them in a decade or so…
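To put numbers on “held amazingly well”: compound doubling under a two-year period (the usual modern reading of the law) is exactly the kind of order-of-magnitude growth this post is about.

```python
# Transistor-count multiplier after n years of Moore's-law doubling,
# assuming one doubling every two years.
def moore_multiplier(years, doubling_period=2.0):
    return 2 ** (years / doubling_period)

print(moore_multiplier(10))  # -> 32.0   (one decade: a 32x jump)
print(moore_multiplier(20))  # -> 1024.0 (two decades: three orders of magnitude)
```

Nothing else in engineering compounds like that, which is why the end of the silicon curve is such a big deal.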

A similar problem is that of space travel: it’s been pretty routine for a few decades now, but it is in no way cheap or easy; it takes thousands of tons of fuel to propel a few tons of satellite into orbit, at a cost of millions. To, say, take people off Mars, or achieve other such feats, we don’t need a little more energy and some clever design; we need loads more energy and loads less weight, and systems which run off almost no power. We need an order of magnitude.

Labs at the moment, as in semiconductor design, are also discovering new materials (weirdly, based on carbon again) whose strength, weight and other properties are far beyond what aluminium and carbon fibre currently deliver. But this brings with it a whole new set of accessibility problems: who could drill something as strong as diamond with a hand drill? Who could grow crystals from an insanely complicated solution in their garage? What start-up could afford very-high-cost manufacturing just to keep up with the big companies? At the moment the technology industries are split, but in hardware, enthusiasts can mostly match big companies in terms of materials: maybe not in reproducibility, but certainly in ease of prototyping.

So what will the future of engineering hold? Probably a few very able companies with patented super-materials, licensed to medium-sized manufacturers… It looks pretty bleak, especially considering how many massive developments have come from little companies.

Energy use, too, needs to be massively reduced if we are to continue living as we do. The current trend is to insulate a bit more, build some renewable power stations and charge more for fuel, and these approaches get us some way towards the targets set. But they don’t go anywhere near the real problem: the amount of energy and material the average person in a developed country uses is way, way more than is ever sustainable. In energy we need an order-of-magnitude leap in how little the user consumes: not reducing the impact of each unit, but reducing the units. Our way of living is totally unsustainable, and we need to reduce impacts by tens or even hundreds of times.

I leave you with this thought: “If humanity survives the next 300 years, we will survive the next 1500”

Casual Computing

How would you like to be using your computer? Anything you want; have a think (or even comment; it might be interesting if you did so before reading the post…).

I’m sure I’m not the only person seeing the slow and long-awaited rise of the concept I would call casual computing: the move away from the omnipresent mouse, trackpad and keyboard which have been our main interface with computers for decades. Touch screens coming to more and more devices is not the main change I’m talking about: the whole attitude of computing is changing. Browsing has long been a bit of a “time-wasting” activity but is more and more becoming a leisure activity, commonly done while not-watching-but-kind-of-watching a film or some such, to the point where new devices have come out pretty much just to fulfil this one function.

Tablets, in my view, are changing computing more than the last ten years of innovation have. No longer do you worry about using a menu to get to a program to search the web; you pick up a slate of electronics while sitting on the sofa and just start browsing, “facebooking” or chatting. They have started to blur the boundaries of digital information and everyday offline life in a really large way, even more than the invention of the real smartphone (seeded by Apple, I will concede) did.

They make interfaces easy; they use real-life sensor data, cameras and blistering performance to deliver a much more intuitive and less intrusive way of experiencing technology. Intuitive interfaces have long been the aim of all interface designers, and this has come on leaps and bounds (although it still has a long way to go) with the introduction of such technologies.

Voice control has traditionally failed; perhaps Siri will change this trend, but it needs to be on tap all the time, and the same goes for movement control. What’s the point in having voice control when you have to touch the thing to actually get into that mode? The whole system must work wherever you are, not just right next to your device.

So what of the future? Perhaps the trend from phones to tablets will continue, leading to the long sought-after desk computer, rather than the desktop: an interconnected area where everything is accessible from your coffee table, whether playing huge games with your friends or writing a research paper. The future is impossible to predict accurately, but what I am sure of is that computing is on a consumer-friendly course; whether through voice control, movement control or just the plain touchscreen, something is going to change pretty dramatically in human-computer interfaces, and it has already started.

As a side note, this blog has recently got its first thousand views! So thank you and keep reading 🙂
