I had a scary realisation at a recent company meeting, when someone suggested I should write about my thirty years of experience in IT.
Thirty years. Thirty. Years. Jeez. Thirty is old, in IT, right? So if I’ve been working professionally in IT for thirty years then I must be… ancient? Well yeah. OK.
Now, I’m not going to walk you through a long and very boring timeline of my career. What I do know is that throughout those thirty years I spotted emerging technologies, recognised their importance, and became an early adopter (and sometimes developer) of almost every mainstream technology you can see today: from personal computing to tablets, smartphones and the web. All of them. I was there and involved. Because I really love technology.
So instead I’m going to ask some questions. Has technology actually improved? And if so, how much better is it? How does it relate to the work we do now, and how will it work in the future?
Did tech improve, really?
Yes. Yes, it did. I mean, it’s really astonishing just how much. My little tablet is about 1,000 times more powerful than the mainframe I started on in 1987, and it can store ten times more data online. And it doesn’t need a power centre and a whole building to run it; it can be powered by the tiniest of batteries… for most of a working day.
But the key question is why. We know computers have become faster, but to what end? Well, most of it is for you, dear reader. So that you can get your information more quickly, and so that it looks good when it reaches you, we need an enormous layer cake of technologies.
So what do the improvements mean?
What tends to happen with technology is a series of progressive improvements that make general-purpose computing more and more pleasant. Systems get faster, interfaces get richer, and the available content grows steadily.
On its own, that’s not a lot. That I can find a Wikipedia article more quickly today than five years ago doesn’t massively change my life.
Instead, the more interesting things happen at pivot points, and over the years I’ve seen several. The first was the cheaply available general-purpose personal computer. It let people do things they couldn’t have managed before without a lot of money. They could run a database for managing their new mail-order business, or handle the stock in their store. Others handled paperwork for garages or automated the process of measuring car emissions. Cheap computers could be connected to sensors and take lots of boring readings reliably, time after time, freeing people up to do more interesting and valuable work. It wasn’t good news for those who hated learning new things, but the flexible-minded really did well.
That was the first big win. Then we got the internet. And wow! Now you could sell things to people who were far away, and at low cost, especially once the web was a thing. You could go online dating and chat with twenty potential partners in an evening. You could share papers with other people. Your knowledge and skills could travel the world, crossing borders. Software developers were always ahead of the curve here: they dealt with tricky problems and liked to share code and ideas. In fact, software developers are a good place to look if you want to see where the future might lie.
We’ll get onto what developers are getting excited about in a minute.
The next big leap was, in my view, the smartphone. We went from only being able to make phone calls, to being able to look stuff up. Where’s that restaurant? “Oh cool,” we’d say, “that’s not far from here.” Then we had a freaking map, right there in our phones, of the whole world, and the phone knew exactly where we were and told us the way! Apps started to be developed for these phones, and before we knew it they became the hub of our lives: communications devices for every eventuality. They weren’t just small computers; they were small computers with a fairly standardised set of sensors and capabilities that gave developers a hook to exploit, and a ready market willing to pay them money to do so.
Again, as a developer, I managed to get in trouble for social networking on a computer back in… 1989! Do you see where I’m going with this?
Social networking bloomed into the general consciousness in the later ’00s. Facebook opened up to the general public in late 2006, with rapid take-up thereafter. It grew massively, and I see all generations on it, with differing levels of expertise of course. Some are really competent and fast; some are clumsy and mistake-ridden.
Once you see people over 60 using something, you can safely assume that it’s properly mainstream.
But just because something is popular with teens, doesn’t mean it’s the big new thing. I was social networking in the ’80s. Old people were too. Mainly because they’d created the early messaging systems.
OK, so devs are super early adopters
And that means you have to look at what devs are playing with and getting excited about to see what could be massive in twenty years’ time. The devs *have* to get excited about a technology, because they’re the ones who’ll build on it. If devs aren’t excited, applications and features won’t get added, and the technology will never reach the point where the general public can just pick it up and go, the way they do with a modern smartphone and social networking.
Today several interesting technologies are coming along that developers are getting interested in, and in some cases are definitely already using:
Spoken language processing
Still a way off. I do see developers talking to their phones, but not that much, and I’ve not met any who are experimenting with the speech APIs. I think spoken interfaces will happen, but they’re not as close as you might think.
Touch, Faces & Gestures
Most people are only playing with touch in a limited sense. It still amazes me that the huge touch-based desks I hoped for thirty years ago haven’t come true; Microsoft’s Surface Studio is the closest I’ve yet seen, and I really, really want one. Touch is increasingly important because it’ll give us ways of interacting with content beyond the simple swipes on a phone. Imagine touch on a large screen that identifies the different people working on it and handles their interactions separately. Or an application that can see your face, read your mood and feed you content accordingly.
Virtual reality
Virtual reality has come to a really nice point, but it’s largely an entertainment medium: it’s very disorientating and cuts you off from the people around you. Social networking has been massive because it connects you with people. The web was massive because… it connects you with people. VR can’t do that: you can’t have a VR conversation and see the other person in 3D, because they have to wear goggles to see you back.
Augmented reality
This is much more interesting, and earlier along the curve than VR. AR adds to the reality you already exist in. Your phone does that: it tells you where you are, who is nearby, who’s looking for a date… it adds to your life. So will AR, because those tools will be constantly there, with added connectivity. You’ll be driving along with your route cleanly overlaid on the road. You’ll never miss a sign hidden behind a truck, because if your AR device realises something is obscured, it’ll do some magic to make the truck semi-transparent. AR, because it so clearly adds something, is going to be big.
Virtual assistants/AI services
Every geek has played with these, and many are already well integrated into computers. I don’t know how high usage is yet, but if I need to set a quick alarm on my phone it’s quicker to just say it, or to type “Set alarm for 5:30 pm” on my PC, than it is to open an alarm app, find the appropriate setting, change the time, and close it again.
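That one-liner hides a simple idea: the assistant maps a phrase onto an intent. Here’s a minimal Python sketch of that mapping; the single-regex grammar and the intent names are toys of my own invention, not how any real assistant works (real ones use trained language models):

```python
import re
from datetime import time

def parse_command(text):
    """Map a spoken or typed phrase onto a simple intent dict.

    Toy example: recognises only one command shape,
    "set alarm for H:MM am/pm".
    """
    m = re.match(r"set alarm for (\d{1,2}):(\d{2})\s*(am|pm)?",
                 text.strip().lower())
    if not m:
        return {"intent": "unknown"}
    hour, minute, meridiem = int(m.group(1)), int(m.group(2)), m.group(3)
    # Convert 12-hour clock to 24-hour.
    if meridiem == "pm" and hour != 12:
        hour += 12
    elif meridiem == "am" and hour == 12:
        hour = 0
    return {"intent": "set_alarm", "time": time(hour, minute)}
```

So “Set alarm for 5:30 pm” becomes a `set_alarm` intent carrying `time(17, 30)`, which the alarm app can act on without the user ever opening it.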
Integrating with and leveraging the intelligence of virtual assistants, and the AIs behind them, could be really interesting. They know a lot about you and what you’re doing, so we should be looking at how they can pull the information they need from your subscriptions and services. Imagine if, when I’m shopping for a washing machine, I could just ask my phone “hey, what do you think of this one?” and it simply said “Well, Which? reviewed it and said it was really good but very expensive. The one next to it is half the price and almost as good. Your choice really!”
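That washing-machine answer is easy to sketch once you assume the assistant can already fetch structured review data from your subscriptions. In this Python toy, the product names, scores and prices are all invented for illustration:

```python
def recommend(products):
    """Turn fetched review data into a short comparative answer.

    'score' is assumed to be a 0-10 review rating pulled from a
    service the user subscribes to; 'price' is in pounds.
    """
    best = max(products, key=lambda p: p["score"])
    cheapest = min(products, key=lambda p: p["price"])
    if cheapest is best:
        return f"{best['name']} is both the best reviewed and the cheapest."
    return (f"{best['name']} scored highest ({best['score']}/10) "
            f"but {cheapest['name']} is only £{cheapest['price']} "
            f"and scored {cheapest['score']}/10.")

# Invented sample data standing in for a review service's response.
machines = [
    {"name": "WashPro 900", "score": 9.1, "price": 799},
    {"name": "EcoSpin 400", "score": 8.4, "price": 399},
]
answer = recommend(machines)
```

The hard part, of course, isn’t the comparison; it’s the assistant knowing which product you’re pointing at and which review services you trust.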
You guys are web developers, so is the web going away?
No. The web as popularly discussed is actually a set of linked documents. We’re already a long way from that and a while ago we moved into web application development – solving specific problems for clients to help them with workflow and efficiency.
Instead, what most decent web developers are doing is tying together interesting technologies. We can provide high-speed APIs providing data that will be used by AR systems, for example. Or we can code up the underlying server software that will be required. We can still make websites that deliver interesting content to somebody in the AR world.
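As a very simplified sketch of that first idea: an AR-facing API often boils down to “give me everything interesting near this position”, which the AR layer then draws into the user’s view. Here’s a rough Python version of the server-side lookup; the place names and coordinates are invented, and the flat-earth distance approximation is a deliberate simplification:

```python
import math

def nearby(points, lat, lon, radius_m):
    """Return points of interest within radius_m metres of (lat, lon),
    sorted nearest first.

    Uses an equirectangular approximation, which is fine at city scale.
    """
    def dist(p):
        dx = math.radians(p["lon"] - lon) * math.cos(math.radians(lat))
        dy = math.radians(p["lat"] - lat)
        return 6371000 * math.hypot(dx, dy)  # Earth radius in metres

    hits = [dict(p, distance_m=round(dist(p))) for p in points]
    return sorted((p for p in hits if p["distance_m"] <= radius_m),
                  key=lambda p: p["distance_m"])

# Invented sample data: two POIs near the query point, one far away.
pois = [
    {"name": "Cafe", "lat": 51.5081, "lon": -0.1290},
    {"name": "Gallery", "lat": 51.5089, "lon": -0.1283},
    {"name": "Stadium", "lat": 51.5560, "lon": -0.2795},
]
result = nearby(pois, 51.5080, -0.1281, 500)
```

A real service would sit behind an HTTP endpoint with a spatial index rather than a linear scan, but the shape of the request and response would be much the same.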
So the technology for that side of AR won’t change much; it’s just that instead of reading a webpage on your phone, you might be reading it through those funky glasses you’re wearing.
Similarly, websites need to find ways of dealing with the digital assistant era. If all your site does is report who won a match, that’s easy information for an assistant to fetch. You have to add value to that information, or people will soon go elsewhere. And how on earth do you monetise it?
The last thirty years have brought amazing developments, gradually and step by step, but leading to sudden jumps in what we could all do with our lives as technologies came together. Right now, it’s firmly my opinion that AI-based services and augmented reality are going to become the big thing in the next five to ten years. I’m still trying to work out how that’s going to be useful for our customers, and what opportunities these technologies will bring.
What do you think? Let me know in the comments!