A2 Technology Coursework – An Underwater Vehicle

An Underwater Vehicle_Final

This is my A2 (systems and control) technology coursework. It's been a while, so I figured I should publish it as a reference, not only for those doing the course but more generally as a moderate example of report writing and of a project. I hope you enjoy it.


Omnipresent Computers – Part 3

I’m a memory stick person. When I have it with me, it allows me to have my files and some programs (Inkscape, PChat, OpenOffice, Notepad++, the list goes on and on) wherever and whenever I like. But being a small piece of plastic, metal and glass, it’s rather prone to being lost and/or broken, necessitating frequent and thorough backups. Despite the flaws in the backup software (SyncBack SE) to do with “version control”, it really does work, and I’ve never lost more than about a day’s work, or lost the capability to work for more than a day.

People say that the cloud is the future: the ability to access, modify and globally update any file one could possibly want, anywhere, any time, over the internet. This is surely something that “omnipresent computers” (the series) should be all over. Connectivity, ease of use and reliability are surely all improved? Well, yes and no: outsourcing the storage of such sensitive data can be dangerous, especially as one has no control over deletion, duplication or proper access control. One lacks the option to put version control alongside plain drives, or to triplicate certain files. I may be a bit of a hypochondriac when it comes to losing data, but this sort of flexibility is something that should be available, especially if the data is your livelihood or personally important. Who says online services have any duplication of data?

Back to the point: Save As. Why do I have to navigate a file system to save anything? I’ve long wanted a system where one attaches files to particular tags or projects, easily searchable and not bound rigidly to a strict file hierarchy. I could then tie a version control system to all files under a given tag or project, call them up, do cross-linked searches and so on.
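As a rough sketch of how such a tag-based “store” might work (everything here is invented for illustration: the class name, the file names and the tags are all hypothetical):

```python
from collections import defaultdict

class TagStore:
    """Minimal sketch of a tag-based file store: files are stored
    against tags rather than saved into a directory tree."""

    def __init__(self):
        self._tags = defaultdict(set)   # tag -> set of file names
        self._files = defaultdict(set)  # file name -> set of tags

    def store(self, name, *tags):
        # "Store" instead of "Save As": no path, just labels.
        for tag in tags:
            self._tags[tag].add(name)
            self._files[name].add(tag)

    def search(self, *tags):
        # Cross-linked search: files carrying ALL of the given tags.
        sets = [self._tags[t] for t in tags]
        return set.intersection(*sets) if sets else set()

store = TagStore()
store.store("fish-laser.svg", "drawings", "avengers")
store.store("report.odt", "coursework", "drawings")
print(sorted(store.search("drawings")))                # both files
print(sorted(store.search("drawings", "coursework")))  # just the report
```

A real version would of course hang version control and guest access off the same tag index, but the core data structure is just this pair of cross-linked mappings.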

This system would need a more casual human-computer interface, and it’s a particularly AI- and voice-control-friendly way of sorting files, but I can see it being a major change in the stagnating area of consumer file management. Combine that with a service through which one could access one’s files from a home server or a separate server by a similar method, quickly and easily, including, say, adding guest access to particular tags or files.

I’ve written previously about CelTex and the lovely interface afforded by a stored project file; add to that the power of visual linked branches, and one can see a future where everyone’s files are neatly organised, without anyone having to remember to put things in the right place. Maybe soon we’ll see “Save As” replaced with “Store”, and hand the book-keeping over to the computer.

Omnipresent computers – Part 2

Once more into the world of the future and once more into the world of terrifying but strangely desirable paths.

Imagine this: you’ve got a great idea for a drawing. You sit at your desk and call up some pictures to remind yourself of the exact details of the hyper-modern weapon you are equipping your space fish with (*cough* Avengers *cough*). Then you draw, and draw, and then decide that the terrifying laser-equipped space bass is really rather good and the Internet will love it. So you post.

Now, at the moment, I have a desk with screens, and I could draw, if I had any talent at it. So, once I had finished, I’d turn around, walk over to the scanner, hit the button, select where I wanted to save/send/print it (NB: a future OC on that whole fuss), then I’d email it to myself (different computer and all), and then finally share it. And I’m okay with that, because I hardly ever share things, and I can’t draw… I digress.

Cameras are getting pretty decent these days: tiny image sensors capable of taking reasonably good images at resolutions which allow pretty good crops, and these sensors will only get better. At some point the processing might actually catch up, but that is a little way off. Imagine if, while you were drawing, your computer videoed all of your pen strokes, snapshotted each time you moved your hand away, and when you decided to post, you didn’t even need to scan: it knew it was “the drawing” or “it”, and it went ahead and posted it on your angsty drawings blog. Really, really high resolution, beautifully lit, and with a complete “revision history” of most of the drawing’s progress. You notice a problem with some of the shading? You could edit an area without actually having to rub anything out… weird stuff.

I love this idea: your work area has cameras trained on it, from sufficiently different angles to capture 3D hand movement and to digitise objects and documents, without any trouble or special hardware. I want to turn a rough dimension into an actual measurement? I hold my hands to the size and say “measure”, and it’s done. I want a Skype call with decent-quality video? It can do that too!

I can think of hundreds of uses for this always-on image processing, as we are already seeing with motion-controlled televisions or puppet shows. But there are some major problems. Mainly, the processing power needed to take a 14-megapixel sensor at 50 fps, make a depth map (or even a full model), extract where the person is, recognise the gesture and react in real time (while running that favourite program) is something that I doubt any single computer could pull off. Using back-of-envelope calculations, that is 6.3 gigabytes a second of data pouring in (3 cameras, 24-bit colour, 14 MPx, 50 fps). That’s raw data, let alone the hundreds of filters one has to run on it. Not a chance.
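For what it’s worth, the arithmetic checks out; a few lines of Python reproduce the figure:

```python
# Back-of-envelope data rate from the text:
# three cameras, 14 megapixels each, 24-bit (3-byte) colour, 50 fps.
cameras = 3
pixels = 14e6
bytes_per_pixel = 3   # 24-bit colour
fps = 50

raw_rate = cameras * pixels * bytes_per_pixel * fps  # bytes per second
print(raw_rate / 1e9)  # 6.3 GB/s of raw data, before any filtering
```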

Always-on high-definition cameras also bring big personal privacy problems, especially given the inevitable level of networking any system like this would have. There are a few ideas I came up with to help quell those feelings of insecurity: two modes, implemented in hardware on the cameras themselves, namely high resolution at a slow frame rate (which also addresses the bandwidth problem) and low resolution at a fast frame rate, each indicated by a light or some infallible indicator on the actual camera.

Another feature would have to be firewalling, even inside individual applications: the camera and any network-connected components would have to be separated by a strong data wall, limiting the bandwidth between the two so that no video stream could be passed. This wouldn’t stop everything, but it would help stop live surveillance. Any video chat services or online games could break this wall with express user permission, though ideally this would be clearly stated somewhere when in use. It sounds messy to implement, but in a world of crazily powerful computers, such a problem would largely dissolve.

Getting computers to see usefully is something I’d surely like to see, but we have a long way to go in processing (mainly) before such a useful bit of software/hardware comes forward.

See you in the future.

Omnipresent computers – Part 1

The future of technology is something that occupies me, partly because it’s what will eventually be my everyday pursuit, and partly because current computers are just unsatisfactory. We should be living in the future, but we seem to be forever stuck in the slowly advancing present (a statement which makes no sense, I’ll grant). Anyway, here is the first instalment in my n-part series.

Google Glasses

The Google Glass prototype/first model

Google recently announced its “Glass” project: a device and system which provide augmented reality through a pair of fairly futuristic-looking glasses. Aside from the usual arguments about letting a computer giant (or in fact any company) have access to a high-resolution camera pointing at everything you see, at your eye, and displaying content to you almost continually (an issue I’ll be sure to write about in a future OC), I can see several problems:

Handy solve-all computer systems have been around for ages now, from Google Mail’s automatic calendar integration to the ever-popular Siri. They are great, and probably really useful (I’ve never really used them day to day, I’ll admit), but they have a few big problems, the main and hardest to solve being their limited responses. As the late Dr Lanning in I, Robot (the film) says: “My responses are limited, you must ask the right questions.” All of these systems suffer the same problem: they use templates to decide how to go about something. Each use case is programmed in explicitly, so even if a little automatic re-phrasing of the question or request is done, if you asked it to send an email of all the texts you had sent [insert person] to [another person], it would be stumped, unless the makers of the system had had some pretty far-out and comprehensive ideas. And funnily enough, this type of system makes the whole scenario much simpler to build: if one has a set of templates, keywords and re-phrasing techniques which all link explicitly to a method of communication or research, one can just think up a load of use cases and implement them, without much performance needed (after all, these things are meant to be used, so there is no point making them take months to get directions to your local corner shop).

A template might go like this:

“[Possible please] [Send a message, Text] to [Contact name or number] [saying, asking etc.] [A string]”


“[How would I get, directions] to [the nearest] [Place, category of place, address]”

And of course such systems would know how to deal with those sorts of requests, and this is all fine for most of what we do, until we actually want something unusual done. You shouldn’t have to know the limitations of a system while using it, and this has been the bane of such systems forever (along with some other issues involving voice-to-text capability and “always-on technology”, a topic I’ll go into in another OC).
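A toy version of this template idea is easy to sketch with regular expressions standing in for the slots. To be clear, the patterns and action names below are invented for illustration, not how Siri or any real assistant actually works:

```python
import re

# Hypothetical slot-filling templates, roughly in the spirit of the
# examples above: each bracketed slot becomes a named capture group.
TEMPLATES = [
    (re.compile(r"(?:please )?(?:send a message|text) to (?P<contact>\w+)"
                r" (?:saying|asking) (?P<body>.+)", re.I),
     "message"),
    (re.compile(r"(?:how would i get|directions) to (?:the nearest )?"
                r"(?P<place>.+)", re.I),
     "directions"),
]

def interpret(request):
    """Try each template in turn; return (action, slots) on a match."""
    for pattern, action in TEMPLATES:
        match = pattern.fullmatch(request)
        if match:
            return action, match.groupdict()
    return None  # outside the templates, the system is stumped

print(interpret("text to Alice saying hello there"))
print(interpret("directions to the nearest corner shop"))
print(interpret("email Bob all the texts I sent Alice"))  # no template: None
```

The last request falls straight through, which is exactly the “limited responses” failure mode: nothing short of adding yet another template will save it.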

Seemingly the only way of sorting this out is to have an intelligent, AI-driven computer system which can wheedle its way into file systems, networks, programs and APIs, giving it pretty much unlimited control over whatever you might want to access: hardly a comforting proposition, but one which would give the best end results.

So maybe that’s the way it will go: computers actually doing a good job of everything-control, rather than being able to tell you where to bury bodies but not where the nearest Michelin-starred restaurants are.
