The world’s worst operating system

Windows 8 has come in for a fair bit of criticism since it was launched, which is a shame since it’s actually really rather good, it’s just that Microsoft seem to have underestimated the number of people who might still want to use their keyboards. Of course, in the typical over-the-top, making-a-whole-mountain-range-out-of-a-molehill way that criticism is done via the Internet, you’d be forgiven for thinking that Windows 8 is the worst thing to have been produced by humankind since grenades wrapped in barbed wire, or Heinz Toast-Toppers.

This got me thinking about what would be the worst OS ever created, and that I should really write it just to show everyone just how bad something can be. It would be called Oh!-S (where the trademarked name would also include a little false chortle at the end), and these would be some of its key features:

  • Applications would be launched using a Go! Go! Go! button which, when clicked, would cause the volume to be set to maximum and a sound clip of someone shouting ‘Go! Go! Go!’ to be played;
  • All user-generated files are stored in the My (Your) Files folder. They’re all named in the format File_1.file, File_2.file etc. Any files whose sequential number is exactly divisible by seven are automatically deleted;
  • There is also a folder called Not My (Your) Files that you cannot look inside, but starts off at 7GB in size and grows exponentially according to the phases of the Moon;
  • When a problem occurs, no matter what has gone wrong, the following error message always appears: ‘Oh!-S Error’;
  • A search function is available, but regardless of the term entered you will only ever get back two results, comprising an MP3 of a song by a country and western group you’ve never heard of, and a video clip of the 1976 Austrian Winter Olympics;
  • To access the Internet you must sign up to AOL and must load their custom software and ‘connect’ even though you have an always-on broadband connection;
  • When you’re browsing the Internet, every 4-9 minutes a popup box will appear that asks if you’re okay. Clicking the ‘No’ button will alert the emergency services;
  • It is not possible to change the preset wallpaper, which in all installations is a Photoshopped picture of Robert Pattinson disguised as Khrushchev.

Oh!-S will come in four main distributions:

  1. Lite Edition – This has no functionality whatsoever and consists solely of a 3-minute looping video of someone else using your computer;
  2. Pro Edition – Contains all the features mentioned above, but does not work at weekends or bank holidays;
  3. Ultra Pro Edition – As above, but works all the time, except when you really want it to;
  4. Ultra Mega Hyper Pro Max Power Edition – As above, but in a blue box.

 

Too much data, not enough information

Every second of every day millions upon millions of bits of data are being created. Just been to the supermarket and used your loyalty card to purchase your horse meat burgers and soft-cheese snack packs? There’s some data right there. Searched for something on the Internet and clicked on a sponsored link? Data. Used your work pass to get through a car park barrier? More data. Pretty much everything we do nowadays that has some interaction with some system somewhere is storing some kind of data. There was a lot of usage of the vague ‘some’ in that last sentence, I realise, but this is just because the possibilities for what can be stored are so massive.

So, there’s an awful lot of data. A shedload, in fact, as long as you have a very big shed – according to IBM (who I’m assuming to be quite clever people and therefore knowledgeable about this kind of thing) we as a species produce some 2.5 quintillion bytes of data every day. That’s 2,500,000,000,000,000,000 bytes, as long as I’ve not made a typo and got the number of zeroes wrong. That seems quite a lot, and it’s quite worrying to imagine just how much of that comprises photos of cats.

Whilst there’s a massive amount of data being generated, however, I’m not convinced on how much information is being produced. The distinction is a subtle yet important one: data is the raw material, and information is the end result of some processing – be this automated or involving manual intervention – forming something that conveys meaning to somebody. In and of itself, data is pretty useless, yet I can’t help thinking that those of us who work in IT are often guilty of providing this rather than actual information to users, asking them to provide their own interpretations. It’s a bit like a glazier providing you with a bag of sand and an oxy-acetylene kit.
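To labour the point with a toy example (the loyalty-card rows here are entirely invented): the raw rows are data, while the totals are something much closer to information that a person could actually act upon.

```python
# Raw "data": individual loyalty-card purchase records, made up for
# illustration. On their own they tell nobody anything useful.
purchases = [
    ("2013-04-01", "burgers", 2.50),
    ("2013-04-01", "cheese snacks", 1.20),
    ("2013-04-08", "burgers", 2.50),
]

def spend_by_product(rows):
    """Aggregate raw purchase rows into total spend per product --
    a small act of turning data into information."""
    totals = {}
    for _date, product, price in rows:
        totals[product] = totals.get(product, 0.0) + price
    return totals

print(spend_by_product(purchases))
# e.g. {'burgers': 5.0, 'cheese snacks': 1.2}
```

The processing step is trivial here, but the principle scales: the value lies in the aggregation and interpretation, not in the rows themselves.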

The tricky thing, I guess, is that it can be very hard to understand what people actually want or need to know. I used to try and elicit system requirements from users (in part, at least) by trying to get from them an idea of what reports they would like to get out at the end. This seemed to make sense: surely people would have an inkling of the sort of information they need to see. What rather rapidly became apparent, somewhat to my surprise, was that quite often people didn’t know this; rather, they wanted us as a development team to guide them in deciding what they wanted. This is often remarkably difficult to do, and I think in part this is because as a software engineer your brain tends to work in terms of process and logic, and this isn’t what’s needed here.

The term ‘Business Intelligence’ is often misused, as well as being somewhat esoteric. In its true form it refers to the vast array of methodologies and processes involved in the alchemic act of transforming data into information. Many people wrongly assume that this basically boils down to the generation of a few reports and maybe the odd managerial dashboard (you know, the whizzy things with the graphs and those lovely 3D pie-charts that, okay, don’t really tell you an awful lot but, boy, do they ever look great!). Of course, a lot of it is that kind of thing, but if you think of BI as being solely that then you’re doing it a great disservice. BI runs the whole gamut of things from data warehousing through strategic analysis frameworks such as balanced scorecards all the way out the other side into the murky depths of trend analysis. Making BI work is, I think, one of the key challenges in enterprise IT today, incorporating not only the technical obstacles involved in ensuring all pertinent systems and data are integrated in ways that allow for dynamic cross-measurement, but also the difficulties that arise from trying to determine the ‘whats’, ‘whens’ and ‘hows’.

One of the big buzz-phrases at the moment is ‘big data’. I’m not going to delve into the details of this, because I don’t yet feel I have an adequate understanding of it and its impacts, but suffice to say, the possibilities that the increased concentration of research into it opens up for the world of BI are potentially enormous. Most organisations, though, I suspect are still struggling with whatever small or mid-sized data they already have, and just adding more into the virtual pot really isn’t going to help the matter.

Why I hate SEO

Okay, maybe ‘hate’ is a strong word, but Search Engine Optimisation (or SEO, presumably pronounced ‘Cee-O’ by some just to accentuate its general gittish-ness) is one of those things that makes my skin crawl, like millipedes or Piers Morgan. Back in those dark pre-Internet days, when men were men and women weren’t, the most your typical SME could do to optimise its chances for being discovered by people was to give more money to the Yellow Pages who would then give them more real estate in its gloriously monochrome tome. Or, you know, they could actually do a decent job and get some good word of mouth going.

Nowadays when everybody has a website or a Facebook page or, most probably, both, the SEO company has arisen from the primordial soup with its promises of more visitors, more business and more money. The concept is simple: in today’s information-saturated world, the average Joe will, when faced with a need to find a company that offers a product or service, head straight to Google and type in the name of that product or service. Given that almost every search term nowadays will result in around 500,000 results, Average Joe will then be confronted with a plethora of pages stretching far into the ether. They will panic and click on the first link they see, after the yellow sponsored ones that everyone ignores. Please note that I may have over-simplified my description here for the purposes of mockery.

All of this means that it’s very important that your website gets as close to the top of the results as possible, and this is where SEO comes in. If you so wish, you can bung an SEO chappy (or chap-ess, the industry has no barriers to sex) a chunk of cash and they will analyse your website and tell you what you can do to make it more appealing to those robotic spider things that index websites for search engines (and hopefully aren’t the same robotic spider things that were in the Matt LeBlanc/Gary Oldman film version of Lost In Space, otherwise we’re all in trouble). One of my problems with this is that, in my experience, your typical SEO analysis will probably tell you a mixture of the following things:

  • You need to use some meta tags in your web pages, even though they will admit that Google doesn’t really use them any more (though Yahoo might, and we all know how many people still use that);
  • You need to use <h1> and <h2> tags to put your headings in;
  • You need to have a single paragraph somewhere that Google can display as content text;
  • You need to link to other sites which are highly linked, like Wikipedia, even if they have no relevance whatsoever to your contents;
  • You need to provide a site map;
  • You need a robots.txt file. Nobody really knows what these do;
  • You need a lot of static content as search engines can’t as easily index dynamic stuff (depending on how it’s done);
  • You need URLs that mean something, so http://www.yourwebsite.com/this_page_shows_photos_of_cakes/ rather than http://www.yourwebsite.com/products.php?cat=3 .

See? I’ve just done an SEO analysis for you, and it’s cost absolutely nothing. No, no need to thank me, I do it all for the sake of humanity.
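Incidentally, for anybody who does want to know what a robots.txt file actually does: it’s just a plain-text file sitting at the root of your site, telling well-behaved crawlers which paths to leave alone and where your site map lives. A minimal sketch (the paths and URL are made up for the example):

```
User-agent: *
Disallow: /admin/
Sitemap: http://www.yourwebsite.com/sitemap.xml
```

Badly-behaved crawlers, of course, are entirely free to ignore it, which rather supports my point.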

Here’s me searching for SEO, which is almost nearly as meta as it gets.

Aside from the fact that 90% of what an SEO company will tell you will be the same no matter what your business, the other thing that annoys me is that their strategies either don’t work, or they do, and that’s even worse. As quite a regular user of the Internet, I get increasingly frustrated by all of these sites which have been designed and tweaked just to get Google’s attention. As a programmer, I often have to search for specific phrases to find out just why (for instance) GlassFish server isn’t working in the way I’d expect it to. What I’m usually confronted with is a hit that seems to provide exactly what I’m looking for, but in actuality is just a web page listing the search term that I’ve just given and no content whatsoever. Or I get Experts-Exchange.com; they always come up.

By promoting certain tactics to get people higher up search rankings, SEO companies are making these rankings less useful for everyone. Of course, Google often change their KFC-style secret formula to try and combat this, but for all their annoyance, the SEO people are clever and persistent. It’s ultimately self-defeating, though, because the more things that get to the top when they obviously shouldn’t be there – and people are generally quite good at spotting this – the less we’ll all trust the results and, therefore, the less that top position will be worth.

I can see why businesses feel the need to use SEO, particularly if all their competitors are, but really it’s a result of the fact that the Internet is still a very new thing and we don’t truly understand how it can be used or how it can make money for people. There’s still a big confusion between the Internet as a searchable repository of all human knowledge, and the Internet as a place where business can be conducted. Google and other search engines attempt to coagulate the two, and are being exploited by the latter at the expense of the former.

Thoughts on the Blackboard Teaching & Learning Conference 2013

Most of the week for me has been taken up by the Blackboard Teaching and Learning Conference. This was my first visit, and – a red wine and whisky-induced hangover on Wednesday morning excluded – was a thoroughly enjoyable experience. The conference was held at Aston University in Birmingham, a very nice venue, apart from the utterly confusing room-numbering system in the main building, which managed to baffle pretty much all of the conference delegates. Apologies to anybody who at any point was attempting to follow me in the misguided belief that I knew where I was going. In fairness, I never really have much of a clue where I’m going, but this is a personal flaw only exacerbated by a building where you can go in a lift that takes you to the fifth floor and, upon exiting, find yourself facing a staircase that leads upwards to the fourth floor. I have visions of there still being some delegates wandering around the corridors now, searching for an East Wing lecture theatre that is actually in the West Wing.

Anyway, aside from getting hopelessly lost on several occasions, I also presented a workshop with the help of two colleagues, Nicola Randles (she’s @tweet_nicola on Twitter, though she never tweets) and Rob Oakes (he isn’t on Twitter at all). The workshop focussed on planning the integration of your student records and other systems with Blackboard via the SIS integration tool. This is a relatively new means of interfacing Blackboard with other systems; it’s pretty good, being remarkably powerful whilst very simple to set up. We didn’t cover the tool itself in a great amount of detail, concentrating instead on the thought processes behind organising an integration in the first place. As such, we dealt a lot with stakeholder analysis and data mapping, but didn’t delve too much into the intricacies of how SIS works. I need to spend some more time investigating it, particularly the abstraction layer that seemingly lurks in the midst of it, as the thinking behind that is going to help inform some of my own design work over the next few months.

Our workshop was well-attended and we got some good feedback from it, which is always nice. Thanks from me to both Rob and Nicola for all the work they put in, and of course to everyone who turned up and participated. If you want to view the presentation or the resources that we used, you can visit the companion website we put together at http://www.staffs.ac.uk/bbtlc5aday/ .

Elsewhere there was a lot of concentration on increasing mobile usage, both in the sense of phones and also tablet-style devices. This is obviously pretty much par for the course across the whole of the IT world nowadays, but it was at least rather encouraging to see that Blackboard are trying to think about how best to ‘do mobile’ (if that even means anything) rather than just doing it. Speaking as someone coming from an IT perspective rather than a teaching and learning one, I can see the potential benefits that mobile can bring but trying to shoehorn it in for the sake of doing it is ultimately self-defeating.

One thing that I did find particularly interesting was a talk by Blackboard’s Emily Wilson (@emilyalexwilson) where she spoke about the importance of native apps on mobile devices over mobile-optimised websites. This is backed up by research I’ve seen over recent weeks indicating the amount of time users spend on their phones within native apps as opposed to within a browser. It rather flies in the face of how I expected mobile to go, though, as I’d anticipated that the same kind of trend that has led to Google Drive and Office 365 would lead to mobile web applications becoming the norm, particularly with the advent of HTML5. That’s me proven wrong, again.

Native apps can be a lot better in many ways than mobile web apps, of course, though I’d argue that for many things most users wouldn’t notice the difference (and the vast majority of people don’t care about the distinction between the types). I’d never try and argue that the user experience isn’t typically better in a native app, if only because it can use the interface conventions common to the device. The problem I can see is that, from the perspective of businesses and organisations trying to produce applications, it has great cost and time implications, since you really need to be producing at least two separate products, one for iOS and one for Android, in order to make sure you’re hitting the vast majority of the mobile marketplace. If you add BlackBerry and Windows Phone into the mix it’s even more of a problem, not to mention the fact that all the platforms have version and hardware fragmentation within them, especially Android.

To me it seems that a good way for smaller-scale development teams to cope with this is to concentrate on the back-end with a SOA approach, shifting the majority of the business logic to the server end so that the interfaces are just that: lightweight points of access to a write-once service layer. By following this method, dev teams can reduce the time spent producing and maintaining multi-platform mobile applications. If you’ve not got the in-house expertise or environment to produce an iOS app or similar, you could contract this out whilst still maintaining control over your business rules.
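As a very rough sketch of what I mean (all the names here are hypothetical, and a real version would sit behind HTTP rather than plain function calls): the business rules live once on the server, and each platform’s client is reduced to a thin presentation wrapper around the same response.

```python
import json

def enrolment_service(student_id, courses):
    """Server-side 'write-once' service layer: all validation and
    business logic lives here, shared by every client platform."""
    if not courses:
        raise ValueError("a student must enrol on at least one course")
    return {
        "student": student_id,
        "courses": sorted(courses),
        "count": len(courses),
    }

def render_for_client(payload):
    """A 'thin' client view: no business rules at all, just
    presentation of whatever the service returned."""
    return f"{payload['student']}: {payload['count']} course(s)"

if __name__ == "__main__":
    # Simulate the wire format an iOS or Android client would receive.
    response = json.dumps(enrolment_service("s123", ["maths", "art"]))
    print(render_for_client(json.loads(response)))
```

The point of the split is that adding a third or fourth platform only means writing another small rendering layer, not reimplementing the enrolment rules.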

Away from mobile, another common theme that went through a number of the workshops and presentations I attended was data integration and presentation. Everything’s becoming more interconnected nowadays, and the era of enterprises having multiple small or large systems with semantically common yet asynchronous data-sets is slowly being consigned to the past. Now we’re in a world of increasingly homogenised information, but what I think we’re still struggling to get to grips with is how to present this in ways that are comprehensive without being overwhelming, and concise without being over-simplifications. I’m going to be thinking quite a lot about this over the next few months and will try and document my thoughts here as much as possible.

All in all, then, Blackboard TLC 2013 was a very rewarding experience, and something I’d love to repeat in the future. I usually find that such things are good if they spark off thoughts and ideas in the fetid recesses of my brain, and the conference certainly did that. What would really top it off, of course, would be winning the free trip to Las Vegas…

No promises this time

I must admit to being prone to exaggeration, but even so it’s probably fair to say that over the last few years there have been more abortive attempts to do something with this website than there are Jennifer Aniston romantic comedies.

And here’s another one. This time I’m taking a different tack of just having a blog that I may or may not post things on that are connected with IT, or games, or writing, or superheroes, or Star Trek, or, well, anything I’m interested in, really. I’m not going to provide any guarantees that there ever will be an awful lot of stuff here, as my previous updating strategy has been somewhat lackadaisical.

For anybody who did read the blog previously (there might have been someone), I’m going to be migrating some of the posts over to here. Well, by ‘migrate’ I mean ‘copy-and-paste’, because that generally seems to be an easier way of doing it than attempting any half-hearted import/export routine that’ll no doubt just end up with me losing my temper and having to manually edit a bunch of things anyway. For the purposes of what will probably amount to six or seven posts that really doesn’t seem worth it.

And finally, just a quick reminder that you can find a more stream-of-consciousness-style set of updates over on Twitter, under my username @Octavius1701 .