Leveling out

The terms "high-level" and "low-level" control refer to the abstraction and application-specific nature of the control. A good illustration: suppose you have a really obedient person whom you have to direct around a short maze. A low-level approach might be: put your right foot in front of your left (with instructions for keeping balanced), then the other, then turn… with all the details included. Now, that's pretty tricky from far off, especially if one has to shout it at people (slow). The next approach up would be: walk forward 10 steps, turn left… and so on. Higher still: walk to the next turning and turn left. Or the highest level of them all: navigate the maze.

Say now it was something that flew. The lowest level just wouldn't work; at the next, the units would have to be converted; the next might work but probably wouldn't be ideal; however, at the highest level, the thing might simply fly over the walls, yet it would still "navigate" the maze. Simple!

Of course, the high-level approach gives you much less fine control over speed or route or memory or anything else, and requires a lot of work on the part of the thing being addressed.

Software constructs like operating systems and drivers are designed to give the developer, and in the end the user, a higher-level approach. They make all file systems look the same to a program, whether external memory cards or the built-in hard disk, and they make a printer look like a printer, not a specific model. That is the genius of modern computing.
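As a tiny illustration of that sameness, the same few lines of C read a file whether the path points at a memory card, the built-in hard disk or a network share; the OS and its drivers hide the difference (a minimal sketch; the paths are invented examples):

```c
#include <stdio.h>

int main(void)
{
    /* Could equally be a path on the internal disk or a network mount;
       the program neither knows nor cares which driver is underneath. */
    FILE *f = fopen("/mnt/sdcard/log.txt", "r");
    char line[128];

    if (f != NULL) {
        if (fgets(line, sizeof line, f) != NULL)
            printf("%s", line);
        fclose(f);
    }
    return 0;
}
```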

Of course, the operating system approach is a software leveling up, and there are potentially other ways of achieving the same thing, such as segmented hardware.

By segmented hardware I mean having external hardware which takes the specifics of the interface or driving away from the main controller. Although this takes potentially even more control away from the user program, it also means that already stressed processors have fewer overheads. Here's an example:

Say one is building an autopilot (which flies a plane) and you want it to have quite sophisticated behaviour built in: able to do stunts, cool dynamic calculations or some such. But to control the physical hardware, one needs to send servo control signals, which are made up of one pulse of variable length repeated at about 200 Hz. Now say for every pulse one has to run an interrupt routine (a piece of code called periodically, on a pin change or on another system event), which would be called 8 (channels) × 200 times a second => 1600 Hz. Say you clock your main processor at 160 MHz, a pretty fast microprocessor: that is an interrupt every 100,000 instructions, and since each pulse must also be ended with microsecond accuracy, the interrupt load and timing jitter, especially when combined with operating system overheads, really cut up one's flow.
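For the curious, here is a minimal sketch of what such a routine might look like, with the eight channels pulsed back to back from one hardware timer (note it already needs two interrupts per pulse). The pin and timer calls (pin_high, pin_low, timer_set_compare_us) are hypothetical placeholders, not any particular vendor's API:

```c
#include <stdint.h>
#include <stdbool.h>

/* Provided by the (hypothetical) hardware layer. */
extern void pin_high(uint8_t ch);
extern void pin_low(uint8_t ch);
extern void timer_set_compare_us(uint16_t us);

#define CHANNELS 8
#define SLOT_US  2500   /* per-channel time slot; sets the frame rate */

/* Desired pulse widths in microseconds, written by the control loop. */
volatile uint16_t pulse_us[CHANNELS] =
    { 1500, 1500, 1500, 1500, 1500, 1500, 1500, 1500 };

static uint8_t ch = 0;           /* channel currently being pulsed */
static bool in_pulse = false;

/* Called on every timer compare match: twice per channel per frame. */
void timer_isr(void)
{
    if (!in_pulse) {
        pin_high(ch);                                  /* start the pulse       */
        timer_set_compare_us(pulse_us[ch]);            /* fire again at its end */
        in_pulse = true;
    } else {
        pin_low(ch);                                   /* end the pulse         */
        timer_set_compare_us(SLOT_US - pulse_us[ch]);  /* wait out the slot     */
        ch = (ch + 1) % CHANNELS;                      /* move to next servo    */
        in_pulse = false;
    }
}
```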

If one had a pretty dumb micro (or even a logic system/FPGA), one could offload all of that time-dependent stuff onto it, leaving cleaner code and easier writing to the hardware. Also, if one wanted to swap all of the servos for ones which take a different input, all it would require is a change of the auxiliary system, not the main source code, much like a change of drivers in a software model.
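In that arrangement, the autopilot's "driver" for the servos might shrink to something like this one-liner over a serial link (the wire format and uart_printf are invented for illustration):

```c
#include <stdint.h>

extern void uart_printf(const char *fmt, ...);  /* hypothetical serial layer */

/* Tell the auxiliary servo controller what to do; it worries about the
 * pulses. Swapping servo types means reprogramming the auxiliary micro,
 * not touching this code. */
void servo_set(uint8_t channel, uint16_t width_us)
{
    uart_printf("S%u=%u\n", channel, width_us);  /* e.g. "S3=1500" */
}
```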

Although on a higher-performance system the overheads and problems caused by drivers and interrupts start to diminish, on lower-performance systems it really does matter, especially as it messies the code; problems like this are amplified on systems with no operating system. Sometimes, despite the added complexity of an interface, the neatness and verifiable nature of separate blocks of hardware can pay off, especially as they will not fail if the main system locks up. Perhaps something worth considering…


Jernau

I’ve mentioned The Player of Games previously but this time I bring a slightly different post: a call for input.

A year or so ago (the last time I read the book, in fact) I started designing a new board game, inspired by the more complex board games described in the book. A sort of chess with luck, hidden pieces, guesswork… something I would like to play.

Jernau (named after the protagonist of the novel) is a game played on a 15- or 20-a-side grid (probably on a computer, due to the number of pieces needed) between two (possibly more) players. I am releasing my preliminary work on the rules and mode of play which, although nowhere near comprehensive, gives a general idea of the game.

I’ve made a Google Code project for it, at: http://code.google.com/p/jernau-game/

The rules will be transferred to the wiki on that page in time, along with the source code and a compiled version.

If you want to be able to edit it, just email "jernau" (dot) "game" (at) "gmail" (dot) "com" (that should throw the spam bots…) and I'll add your email to the list of admins.

What needs doing is:

  1. Coming up with the algorithms judging the conflicts
  2. Software implementation
  3. Finalising the semantics of movement

I have already started a kind of program in Processing in which one can test the concepts (more of a test bench at the moment), although it lacks a lot of framework, any networking, and a proper digital representation of the board.

The scheme of representation is a topic of some interest to me: how to store, send and even play the game. The hierarchy of the system is one that I feel requires some thought, especially with a view to keeping the software light on RAM. There are multiple approaches:

  1. Have a set of counters, each with its properties stored locally, plus a coordinate of where it is.
  2. Have a set of squares, with member counters.
  3. Have a set of squares with member groups, which have member counters and some other properties.
  4. Have a board with a member set of squares… à la 3.

I like the idea of 4, as it could then have tools for transferring state changes of all of the parts, as well as allowing multiple boards to be stored in one program simply; a minimal sketch of the nesting is below.
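To make option 4 concrete, here is a rough sketch of the nesting in C (the real test bench is in Processing; all field names here are invented for illustration):

```c
#include <stdint.h>

#define SIDE 20                  /* 15 or 20 squares a side */

typedef struct {
    uint8_t owner;               /* which player the counter belongs to */
    uint8_t kind;                /* piece type                          */
    uint8_t hidden;              /* non-zero while the piece is hidden  */
} Counter;

typedef struct {                 /* a group of counters, as in option 3 */
    Counter *counters;
    uint8_t  n;
} Group;

typedef struct {
    Group  *groups;              /* groups occupying this square */
    uint8_t n;
} Square;

typedef struct {
    Square  squares[SIDE][SIDE];
    uint8_t to_move;             /* whose turn it is */
} Board;

/* Because the board owns everything beneath it, a whole game state can
 * be copied, diffed or sent over the network as a single unit, and one
 * program can hold several boards at once. */
```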

Applying rules would require knowledge of the interactions between pieces, requiring another transfer medium and so on… the usual programming choices.

I haven’t much experience in this sort of organisation of infrastructure (the networking side also needs writing!) so any help would be great!

Micro-payments

"Traditional media" is (in brief) physical information. We pay for it not just because we are actually getting matter, but also for the information; however, freely available digital information has, apparently, started to reduce the amount of printed matter we actually consume.

I'm writing this on a computer, probably for a few people to read on a screen, maybe on an e-reader or, if desired, on paper; I have no idea, but I am not responsible for distributing any of it. In "traditional media" I would either be paid for this or have no chance of publishing it, as it would have to be printed; as it is, I do not have to pay anyone, nor do you, my reader, to consume my writings.

And I'm quite happy with this system, as I have no desire to force people to pay for anything, unlike some (recently The Times newspaper broke new ground by starting to charge for use of its website), and this has obviously put a lot of people off. We have all got used to paying nothing for information, and it seems an affront to actually have to pay for anything. Usually the information we glean from a site is so small that we wouldn't want to pay for it (who knows what kind of thing your search is actually linking to!), but sometimes it yields actual gems of insight.

Possibly what is needed for the future of good-quality media and information is a system of paying very small amounts of money (cents/pence) to the sites one reads, with minimal time cost to the user (possibly built into the browser), while not forcing any particular provider or amount on people.

Elektor (which I have previously criticised for pricing and other bits and pieces) has a pretty neat way of dealing with this: one can buy Elektor credits, at about 70p for 10, which buy you an article, as a lovely PDF, from a previous issue of the magazine. This gives one a cost-effective alternative to buying the magazine regularly, without undermining the company's business model.

Of course these are only usable on Elektor, which is a bit of a drawback if one wants to read a well-written document on politics or anything else, but I like the mechanics of it…

Open-source hardware, software and content all cost something to make, so why not reward the creators with a tiny bit of your hard-earned cash? It might encourage them, or fund some interesting new development…

(See that donate button on the right-hand margin? Hit it and donate, go on. And I promise I'll write another article for every donation I get, no matter how small the amount…)

Casual Computing

How would you like to be using your computer? Anything you want; have a think (or even comment; it might be interesting if you did so before reading the post…).

I'm sure I'm not the only person seeing the slow and long-awaited rise of the concept I would call casual computing: the move away from the omnipresent mouse, trackpad and keyboard which have been our main interface with computers for decades. Touch screens coming to more and more devices is not the main change I'm talking about: the whole attitude of computing is changing. Browsing has long been a bit of a "time-wasting" activity but is more and more becoming a leisure activity, commonly done while not-watching-but-kind-of a film or some such, to the point where new devices have come out pretty much just to fulfil this one function.

Tablets, in my view, are changing computing more than the last 10 years of innovation have. No longer are you worried about using a menu to get to a program to search the web; you pick up a slate of electronics while sitting on the sofa and just start browsing, "facebooking" or chatting. They have started to blur the boundaries of digital information and everyday offline life in a really large way, even more than the invention of the real smartphone (seeded by Apple, I will concede) did.

They make interfaces easy; they use real-life sensor data, cameras and blistering performance to deliver a much more intuitive and less intrusive way of experiencing technology. Intuitive interfaces have long been the aim of all interface designers, and this has come on in leaps and bounds (although it still has a long way to go) with the introduction of such technologies.

Voice control has traditionally failed; perhaps Siri will change this trend, but it needs to be on tap all the time, and the same goes for movement control. What's the point in having voice control when you have to touch the thing to actually get to that mode? This means the whole system must work wherever you are, not just next to your device.

So what of the future? Perhaps the trend of tablets growing out of phones will continue, leading to the long-sought-after desk computer, rather than desktop: an interconnected area where everything is accessible from your coffee table, from playing huge games with your friends to writing a research paper. The future is impossible to predict accurately, but what I am sure of is that computing is on a consumer-friendly course; whether through voice control, movement control or just the plain touchscreen, something is going to change pretty dramatically in human-computer interfaces, and it has already started.

As a side note, this blog has recently got its first thousand views! So thank you and keep reading 🙂

Something brewing

[Image] A render of the thing

Steady progress shall ensue… Guesses as to what it is? And what I'm using to design it?

Chess and The Player of Games

The Player of Games by Iain M. Banks, one of my favourite books, which I am currently re-reading, describes the "Culture", a civilisation so rich and automated that its citizens (and some of its intelligent machines) have very little they need to do. In such a culture, game playing has evolved into a serious pursuit, one of the few intellectual activities left to them. The games are mostly quite complex in their rules, usually based on boards (or e-boards) with other game elements thrown in (cards, discs, etc.). The protagonist is an epic game player, almost unmatched for his all-round, unbeatable intelligence at games. Later in the book he goes off to a newer, imperial state to play one of their extremely complex games, and it all makes a rather good read.

Anyway, this got me thinking: what kind of complex games do we have? Some would argue that bridge and chess are the most complex widespread games played. Chess is both completely predictable and almost impossibly complex, such that strategy is hard for humans and computers alike to plan except at a very high level. We seem to have no high-level games where luck actually plays a part in the game while skill remains key to success. The best games usually require a brilliant mixture of luck and skill: easy to play but impossible to master. A sort of chess, but with luck.

One problem with such games is that they are incredibly hard to balance: some piece becomes ridiculously good, or it all depends on one little trick which becomes widely known, or games last years or no time at all. One needs to tune the interactions, abilities and frequencies of occurrence to perfection, a task which surely doesn't often fit neatly into the 1/6 blocks dice offer, or the interactions are too complex for dice to be the solution. Of course, now that computers are so prevalent (phones, laptops, tablets or even a fancy watch), these kinds of calculations are trivial, even with the need for proper random numbers. And the piece-limit problems which often occur also go away with e-boards, which can render a near-infinite supply of pieces. So why not usher in a new age of games: played on computers, with fairly complex but public interaction dynamics, and given the same intellectual standing as chess? Perhaps soon we all might be players of games.
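As a tiny example of the point about dice, sampling an event with any probability at all is a one-liner on a computer (a minimal sketch; the 0.37 figure is just an example):

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Returns true with probability p (0.0 to 1.0). A 0.37 chance of an
 * attack succeeding is trivial here, and hopeless to express with the
 * 1/6 granularity of a die. */
bool event_occurs(double p)
{
    return rand() < p * ((double)RAND_MAX + 1.0);
}

int main(void)
{
    srand((unsigned)time(NULL));       /* seed once at startup */

    int wins = 0;
    for (int i = 0; i < 1000000; i++)
        wins += event_occurs(0.37);
    printf("%d wins out of 1000000\n", wins);  /* expect roughly 370000 */
    return 0;
}
```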

A life for a battery

Quite a lot of the tubes are buzzing with a hacker's observation that, given some specialist hardware and knowledge, hacking a wireless insulin pump would be pretty easy; in fact so easy that it could be done from a full half a mile away. On doing a little more reading, it quickly becomes apparent that that kind of meter has no encryption in its link. None.

Now, I use data radios quite a bit, and pretty much all of them worth speaking about have AES128, which means 128 bits of key, meaning there are 2^128 different possible keys, meaning if one tried a new key at 1 MHz (about as fast as would be possible, even with some really custom and flashy hardware), it would take 10 septillion (that number actually exists) years to test every possibility.
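A quick back-of-the-envelope check of that figure (a sketch; double-precision floating point is plenty accurate for this):

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double keys    = pow(2.0, 128.0);   /* ~3.4e38 possible keys     */
    double seconds = keys / 1e6;        /* trying one key per microsecond */
    double years   = seconds / (365.25 * 24.0 * 3600.0);

    printf("%.1e years\n", years);      /* ~1.1e25: about 10 septillion */
    return 0;
}
```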

Say one's data was being sent every minute, as a glucose meter's probably would be; one simply would not get enough data over an hour to crack AES128, let alone AES256. So why don't the manufacturers do it (other than that it's not required, and they aren't massively incentivised to)? Because more hardware = more power used. Now, I'm not sure about most people, but if I had to change my insulin pump's or monitor's batteries 1.5 times as often (it's only a tiny bit of extra silicon running), I probably wouldn't notice, whereas if someone hacked my pump, I think I would, too late.

Now, AES128 might even be a bit extreme, so let's review the points:

  • Car keys are hardly ever cracked; even if you have the key and the car and you hit the button thousands of times, they have a rolling code and so on, and are really, really hard to crack. Oh wait, they run on batteries… forever… (a toy sketch of the rolling-code idea follows this list).
  • One has to establish a private key at some point, so that would have to be done carefully (à la Bluetooth) or with a physical interface (a little cable or something) at setup. Setup of the connection can often be a weak point.
  • If the interfaces were two-way and required checking or acknowledgement, or even *any* scheme like that, the distance at which one could play that game would be extremely limited (10 m?).
  • What happened to charging?
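For the curious, the rolling-code idea in the first point boils down to something like this toy sketch: both ends share a secret and a counter, so each button press transmits a fresh code and a replayed code is useless. The mix function here is a stand-in, not a real cipher:

```c
#include <stdint.h>
#include <stdbool.h>

static const uint32_t secret = 0xC0FFEE42;  /* shared at pairing time */
static uint32_t tx_counter = 0;             /* the key fob's counter  */
static uint32_t rx_counter = 0;             /* the car's counter      */

/* Placeholder for a proper keyed cipher or MAC. */
static uint32_t mix(uint32_t c)
{
    c ^= secret;
    c *= 2654435761u;   /* a multiplicative-hash constant */
    c ^= c >> 16;
    return c;
}

/* Transmitter: each press sends a code that never repeats. */
uint32_t keyfob_press(void)
{
    return mix(tx_counter++);
}

/* Receiver: accept a small window of future counter values, in case
 * the fob was pressed out of range, then resynchronise. */
bool receiver_check(uint32_t code)
{
    for (uint32_t i = 0; i < 16; i++) {
        if (mix(rx_counter + i) == code) {
            rx_counter += i + 1;
            return true;
        }
    }
    return false;
}
```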

The real problem is this: sure, now we know that people have thought of hacking into these devices, but I'm sure the management who designed the block diagram of the whole system never went "wow, I bet someone will try and hack this", because people never think like that. Now, perhaps it will be put into new designs (a good sales "gimmick"), but that's where the major problem lies.

Products specified for medical purposes have to go through thousands of hours of testing, verification and error finding (as you would expect), which can cost huge quantities of money and, more relevantly, take years. On top of this, getting approval from the right medical bodies would also take a couple of years. This means that any new devices with encryption built in would take at least two years to surface, and all those already in verification would be released unprotected in the meantime.

The only thing that would actually push companies to change their designs would be to grant new approvals only to products which include adequate digital safety systems. Sure, this would be unpopular with the companies who make such things, but otherwise we'll face years of new pumps and new medical systems going out completely unprotected.

It seems ridiculous that this kind of thing isn't well covered somewhere in the medical specifications, but I guess that's the kind of un-joined-up thinking the world suffers from. Once again, the hackers have probably both saved and condemned lives and business.

The open workshop

I recently purchased the Open Bench Logic Sniffer for debugging various things (I kept finding that whenever I did something, I would think "if only I had more samples on this scope, even though the signals are digital") and I have to say, I am very impressed with the hardware, as well as the very comprehensive set of PC software available for the device. A minor fault, I have to say, is the lack of live viewing of signals (for testing things really quickly and for checking hookups), though I might well just write a tiny app for that very function. The Bus Pirate is a tool that is very well established now, hitting its 4th revision quite soon, and allows very quick, PC-based modification and testing of various things (a beautiful list is available here, under chip demonstrations). These tools are beautiful examples of open-source hardware, and provide handy bits of diagnostic equipment at low prices (<$50).
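On the "tiny app" front, the board speaks the SUMP logic-analyser protocol over a (virtual) serial port, so the skeleton of a live viewer can be very small. A minimal sketch, assuming a POSIX system and the SUMP opcodes I believe the device uses (0x00 reset, 0x01 arm, 0x02 query ID); the port name and error handling are simplified:

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <termios.h>

int main(void)
{
    int fd = open("/dev/ttyACM0", O_RDWR | O_NOCTTY);  /* port name will vary */
    if (fd < 0) { perror("open"); return 1; }

    struct termios t;
    tcgetattr(fd, &t);
    cfmakeraw(&t);                     /* raw bytes, no line discipline */
    cfsetspeed(&t, B115200);
    tcsetattr(fd, TCSANOW, &t);

    unsigned char reset = 0x00, query_id = 0x02, id[4];
    for (int i = 0; i < 5; i++)        /* repeated resets clear any half command */
        write(fd, &reset, 1);
    write(fd, &query_id, 1);
    if (read(fd, id, 4) == 4)          /* device answers with a 4-byte ID */
        printf("device ID: %.4s\n", id);

    /* A live viewer would now loop: send 0x01 to arm a capture, read the
       sample bytes back, redraw the traces, and repeat. */
    close(fd);
    return 0;
}
```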

The Open Bench Logic Sniffer is based on the Spartan-3E, a pretty low-cost FPGA made by Xilinx, with 250K gates. Making this kind of hardware out of FPGAs is very popular, even in a commercial context, due to their ability to handle signals in a massively parallel and direct manner, allowing very fast signal paths and therefore sample rates similar to those of commercial products using more sophisticated hardware. In fact PicoScope uses Xilinx FPGAs in quite a few of its products (at least according to Elektor's photos); the scopes are very high performance, but most of the features are implemented in PC software, meaning (I would expect) the scope would get laggy when one is running lots of bits and pieces. Even so, the scopes are very popular. Oscilloscopes are very hard to make, however, due to the high-performance analogue front end they require, so amateur scopes can hardly be expected to compete with the professional ones.

However, the idea of a set of open tools for one's workshop or workbench, made professionally but with the software kept competitive and modifiable for the user's good, appeals to me. There are a few tools which pretty much everyone uses, and some of them come in higher-priced form factors which really aren't far beyond the capabilities of open source: PC scopes, bench multimeters, logic analysers, "pin wigglers", variable loads, supplies, arbitrary waveform generators and simple sensor inputs (temperature, tachometer). One could also have remote sensors using ZigBee or Bluetooth.

Given that bigger projects don't (really) work in open source (as the hardware costs start to be pretty huge), the chances of filling a full 19-inch rack would be pretty low, and the practicality of doing so would be questionable due to the cost of the mounts. The Open Bench Logic Sniffer is about 60 by 100 mm (very roughly), meaning a big enclosure would just be wasteful, as well as costly. Most instruments now don't need very large boards if they have no user interface, and connectors are pretty small. Clearly what is really needed is a metric rack system, perhaps miniaturised for the new breed of test equipment. I would personally suggest a rack size of around 160 mm × 160 mm × 25 mm, although perhaps even smaller would still be practical. Power supplies or more complicated systems (computers) could take several units (as with 19-inch today). A universal power bus providing low currents for the digital systems (5, 3.3 and 12 V?) would simplify the individual designs, as would a universal interconnect system with USB 3.0 taking pride of place as the PC interface. If one wanted a stand-alone instrument, one could screw the power-supply and instrument modules together (or put an add-on at the back).

The software that could be created on a computer would allow interconnected equipment for testing and debugging on a new level: peripheral simulation triggered by other events, interconnected triggers, data loggers and configurable displays would completely transform the current level of sophistication. Bring on a massive wave of development…

An open letter to Elektor

Dear Editor/Writers,

When reading your May and June 2011 issues, I noticed two new products you have released recently, about which I have some queries. Firstly, however, I would like to introduce my point of view and what I see in Elektor magazine: I am a student, with a smallish budget but an interest in robotics and electronics. Like most people, I know £200 is a rather large investment. To me Elektor is a free-time read to occupy and educate me.

Your Proton Robot costs ~£1000 unassembled. I understand (roughly) how this design worked up such a price tag; the feature set is really rich and the robot is very well made. However, it has quite a few flaws. First, its price: I really can't see myself ever spending that much on a kit or an assembled robot unless it was featured to the point of flying out of the door on its own. That kind of money is a serious investment. Secondly, relevant features: sure, the robot is totally well equipped, with LED eyes, an ultrasound sensor on two servos, line sensors, wheels, a screen, an MP3 player and a gripper. What would I do with it? I have no idea; probably some human-following, line-crossing, fun thing. I'm not sure what else one could do with it, though I'm sure there are plenty of things to do with the features. But does every user need them all for development? Since the price is really quite high, why equip it with that many features? A modular, option-block-based system would surely work. Finally, modularity: the robot claims to be "modular", yet there is no nice way of attaching things to that beautiful Perspex body, and there is no real room on the outside for attaching fancy, application-specific sensors or actuators. I like the development system, but I really cannot see how I would use it, or why I might buy it.

My second point is about the new "OSPV" project: an "open-source", indoors-only Segway clone, based on the previous Elektor Wheelie (admittedly £300 more). As far as I can see, this kind of "open-source" hardware should, at least in spirit, resemble the Open Source Hardware definition (http://freedomdefined.org/OSHW); however, Elektor hasn't released *any* of the actual hardware plans, just the circuit boards. Furthermore, the article claims it is "exclusively available from Elektor", hardly in the spirit of open source! The ground clearance is 1 cm, hardly enough to get over a door lintel and seemingly useless outside, and it costs £1000, whereas one can buy a Segway for ~£800! It really isn't that modifiable, as it has no mounting places, and finding an application for it is going to be a challenge. I don't understand how the Elektor Wheelie wasn't open source (with all the source code and board designs downloadable) and yet the OSPV can claim to be. The hardware designs, at least, should be open.

I'm sorry this letter reads like a couple of rants, but these issues (out of the many articles I have read) really bug me. Developers like me really don't have much capital, and Elektor is usually a brilliant way of getting some interesting education in electronics, so thank you for many brilliant magazines.

From A Reader

Internet freedom

In the UK, there is currently (at the time of writing) a lot of controversy over the new, so-called "super-injunctions" (a court ruling which prevents the media from reporting a certain story, or even the existence of the court order), which people are saying are just ridiculous, letting people hide really quite "important" facts from public view. However, as usual, some tweeters have something to say about these injunctions and are happily tweeting away about them: who has "won" injunctions and all that kind of thing.

Now, to the people who have taken out these injunctions, such a mass disclosure of their carefully held secrets is sure to annoy, as it annoys the courts tasked with upholding them, and even the media organisations which would probably like to sell their services by being the ones to report, but, of course, can't.

So how does one shut down a few, effectively anonymous, people trying to break some legal gags? Well, there's talk of demanding the names of the tweeters/perpetrators, or shutting down their accounts, and all sorts of things, but what can one really do in a world that is effectively anonymous? During writing, an MP decided to use the protection parliament offers to name one of the people involved in the controversy. It is well recognised that one simply cannot prosecute all of the people involved, so there is just nothing to be done.

But the whole outrage this has caused towards the online world raises some interesting questions: how much freedom of speech should be allowed on the Internet? What should be done to people who breach its limits? And how does one find *who* is actually responsible?

Freedom of speech is really a mixed bag: in revolutions and repressed countries it seems to be the best tool around, allowing people to express themselves and report what is happening; however, as soon as it happens here, breaching even our mildest rules, there is suddenly talk of prosecution and restriction of this freedom.

What can one do to people for such small breaches? Jail? Community service? Seems ridiculous, and where would they serve it? Internet silencing? Not gonna happen… One really can't punish ghosts.

Which brings me to my final point: how does one even start to find people who have given fake names and use proxy servers in countries where free speech exists, whether due to the government's lack of modern digital policy or just its open-mindedness? IPs can be tracked, sure, but what stops a "perp" using a coffee shop or some such for its free WiFi? If people really want to hide on the Internet, I'm pretty sure they become pretty much impossible to trace.

So what we really have is laws being made which will probably affect very few people, but give the majority this huge feeling of oppressed, Orwellian fear that everything they do online is being watched and that they can't speak their minds, so that (in my opinion) the best enabler of free speech is being eroded, leaving us looking over our shoulders once again.
