A Brief History of Application Development

There's been a bit of talk lately about how cloud computing, and the way a lot of new software architectures are designed today, looks a lot like how we did things in the past, and how we've actually taken a step backwards. While there are some similarities between how things were done then and how they're done now, they're still quite different. Let me explain by first giving a brief history of application development.

The MainFrame

In the beginning, there was the mainframe. This was a single SuperComputer with the fastest processing chips money could buy (which were less powerful than the calculator running most modern wristwatches), and they were a prized commodity. They were so large and took so much cooling and electricity that typically you wouldn't even have one locally to work with, so you had to communicate with it remotely via a Dumb Terminal. Eventually these MainFrames got smaller, but you were still forced to interact with them via very thin clients that were intended to do nothing other than connect to your MainFrame. These MainFrames were designed to run a single process very quickly, so there was no need for anyone to even think about parallel processing; everything was simply done in sequence. MainFrames had to have a complicated scheduling system so they could allot a given amount of time to any single process while allowing others to interject requests in between.


This method of client-server interaction was a huge leap forward from the original system with only one interface to a computer. In many cases, this is still a widely used architecture (although the Dumb Terminals have been replaced with Thin Clients). In many retail stores, for example, the entire backend system hasn't changed since the MainFrame was the only way to do things, and they're still based on the same old code.

The PC Revolution


Eventually technology evolved past the powerful MainFrame and into smaller devices that could actually be housed in an average room. These new devices revolutionized the way software was built by letting application developers run everything locally on the client's system. This meant that the bottleneck of your network connection was completely removed from the equation, and the only slowness you would ever see came from your own computer. As machines got faster and faster, software continued to demand more and more from your local system, until now, when PCs are way overpowered for most average tasks. This led to the interesting prospect of MultiTasking, where a single system can be used for multiple things at the same time. Originally this was handled by a much more advanced version of the scheduler used in the MainFrame, but it was eventually supplemented by Hardware Threading and even Multi-Processor systems.

So now the average computer could complete multiple tasks at the same time. Many developers refused to adapt to this new capability, but some did, and those who did went on to develop massively scaled systems that could run in a fraction of the time of a single-process system, taking full advantage of the hardware at hand. This helped pave the way for the next big leap in technology: the fast internet.

The Fast Internet

Previously everyone was concerned with network latency and throughput, but eventually the telecommunications industry caught up with the rest of the market. What previously took minutes to send over the wire now takes seconds, or even fractions of a second. With the introduction of the largest infrastructure system ever created, the internet was born, and it had enough throughput to make us rethink how we architected our systems. Many people had come to terms with the idea that software needed to be threaded, but now we took it one step further. We developed clustering.


Clustering took the idea of processing in parallel to a whole new level. We realized that all of these personal computers we were running had way more power than was being used, and most of the time they sat entirely idle. A few years ago I heard a story of a graphics design company that was looking into buying a few servers to run its graphics processing. As you may know, graphics manipulation, conversion, and processing is one of the most processor-intensive things you can ask of a computer, so usually you can't do it on your local system without bogging it down or waiting a very long time. Instead of buying a bunch of expensive hardware to run and maintain, the company's IT professional decided to take a revolutionary new approach.

His solution was to use the unused processing power on the employees' local systems for the graphics processing. He designed a simple queue service that would accept jobs, and a simple client that ran on every desktop in the office and would accept and process jobs only when there was downtime. This meant that during off-hours, or periods when an employee wasn't using their computer, the jobs could be completed, and there was no new hardware to buy! This idea of distributed computing across commodity hardware created a whole new way of developing software applications.
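As a rough illustration of that design (not the company's actual implementation), here's a minimal sketch of an idle-time worker: it polls a queue for jobs and only does work when the workstation looks idle. The queue endpoint, job shape, and idle check are all assumptions.

```typescript
// Hypothetical sketch of the "use idle desktops for graphics jobs" idea.
// The queue endpoint, job shape, and idle check are assumptions.

interface Job {
  id: string;
  imagePath: string; // file to convert, resize, etc.
}

const QUEUE_URL = "https://queue.example.internal"; // hypothetical queue service

// Ask the central queue for the next pending job, if any.
async function fetchJob(): Promise<Job | null> {
  const res = await fetch(`${QUEUE_URL}/jobs/next`);
  return res.status === 204 ? null : res.json();
}

// Tell the queue the job is finished.
async function reportDone(job: Job): Promise<void> {
  await fetch(`${QUEUE_URL}/jobs/${job.id}/done`, { method: "POST" });
}

// Placeholder: a real client would check CPU load or user activity here.
function workstationIsIdle(): boolean {
  return true;
}

// Stand-in for the expensive graphics work itself.
async function processImage(job: Job): Promise<void> {
  console.log(`processing ${job.imagePath}`);
}

// Poll loop that only does work while the workstation is idle.
async function workerLoop(): Promise<void> {
  for (;;) {
    if (workstationIsIdle()) {
      const job = await fetchJob();
      if (job) {
        await processImage(job);
        await reportDone(job);
        continue; // keep draining the queue while we're still idle
      }
    }
    await new Promise((resolve) => setTimeout(resolve, 30_000)); // wait before polling again
  }
}
```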

The Cloud

Then Amazon introduced the cloud. The idea of the cloud is almost identical to my friend's original distributed processing concept, except it uses dedicated systems instead of employees' systems to run the processing jobs. Unlike the solution built on employees' systems, this ensures that you'll always have the capacity you require to run your jobs. While it is more expensive (although not by much), it also means that you don't have to worry about how long something is going to take if you need it quickly. Again, this revolutionized the way people thought about software and how to design it, but suddenly everyone started taking a step back... Haven't we all seen this before?


Yes, indeed it does look very similar to our original MainFrame architecture; in fact, from a purely black-box perspective it's almost identical. The big difference here is how our server actually achieves its processing capabilities. We've combined the best of both architectures into one.

HTML5 and Local Storage

But really, can't we do better? Isn't there some new technology coming out that will let us merge these two separate systems more cleanly?

Yes, there is. When we talk about client-server interaction these days, we're mostly talking about web applications. The problem with most modern web applications is that they have to ask the server for everything: they can't run any processing locally, and they certainly can't store enough data locally in a usable format.

In comes HTML5 with a local storage option. This new technology allows us to genuinely distribute trusted work between client and server. The best non-web example that I can provide is Mercurial.

Mercurial is a distributed version control system. It doesn't require a central server to push and pull changesets from; instead, it keeps a local copy of your entire repository within the directory you're working in. This means that you can continue to work, check in, revert, update, merge, branch, whatever, even if you're entirely offline; all of the client processing happens locally. If you then wish to share your changes with others, you can do so either by manually transferring change files or by using a central sync server. While it does support having one default sync server to push and pull changes to and from, that's not at all its limit. It can push and pull from as many servers as you'd like without breaking your local repository. These sync servers simply provide a distribution point that is authenticated and ensures that the data they provide is updated only by trusted sources.

Now let's take that back a step and think in the generic sense. What we're asking for here is to create a local application on a client's system that synchronizes its database with our central server, and then allows us to perform regular tasks on it. Since we'll only send a client the information from our central server that it has access to, we can open ourselves up and let any client connect, layer our permissions on top, and only expose what we want to from the database. Unlike the old client-server interaction, this means both our central server's full power and our client's full power are used. Anything requiring massive processing (such as processing images) can still be run on the server side so we don't bog down our clients' systems, but simple things such as searching through a database and rendering records into visuals can all happen locally on our client!
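Here's a minimal sketch of that pattern using the HTML5 localStorage API: the client pulls down only the records the server says it may see, caches them locally, and then searches against the local copy. The /records endpoint and the record shape are hypothetical.

```typescript
// Minimal sketch of "sync a local copy, then work on it locally" using the
// HTML5 localStorage API. The /records endpoint and record shape are hypothetical.

interface CustomerRecord {
  id: string;
  name: string;
  updatedAt: string;
}

const STORAGE_KEY = "records-cache";

// The server enforces permissions and returns only what this user may see;
// the client just caches the result.
async function syncFromServer(): Promise<CustomerRecord[]> {
  const res = await fetch("/records", { credentials: "include" });
  const records: CustomerRecord[] = await res.json();
  localStorage.setItem(STORAGE_KEY, JSON.stringify(records));
  return records;
}

// Searching and rendering can now run entirely against the local copy.
function searchLocal(term: string): CustomerRecord[] {
  const cached: CustomerRecord[] = JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "[]");
  return cached.filter((r) => r.name.toLowerCase().includes(term.toLowerCase()));
}
```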

The biggest question for the new-age application is where to put each bit of functionality. The best answer to this really is trial and error. Other than for security, there are no hard-and-fast rules on what should be processed where; it's just a matter of seeing what your client is capable of. For example, some clients are capable of running XSLT natively in their browsers, but some (ok, just IE) don't really do it right. If you can convince your clients to simply not use IE, then you can probably offload all of that work to them, but you may need to let your server run it if you need to support those pesky IE clients.
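As a sketch of that trade-off, the snippet below transforms XML with the browser's own XSLTProcessor when it's available and falls back to a hypothetical server-side /transform endpoint when it isn't. XSLTProcessor and DOMParser are standard browser APIs; everything else here is an assumption.

```typescript
// Sketch of client-side XSLT with a server fallback. XSLTProcessor and
// DOMParser are standard browser APIs; the /transform endpoint is hypothetical.

function canTransformLocally(): boolean {
  return typeof XSLTProcessor !== "undefined" && typeof DOMParser !== "undefined";
}

async function renderReport(xml: string, xslUrl: string): Promise<string> {
  if (canTransformLocally()) {
    // Do the transform in the browser and keep the server out of it.
    const parser = new DOMParser();
    const xmlDoc = parser.parseFromString(xml, "application/xml");
    const xslText = await (await fetch(xslUrl)).text();
    const xslDoc = parser.parseFromString(xslText, "application/xml");

    const processor = new XSLTProcessor();
    processor.importStylesheet(xslDoc);
    const fragment = processor.transformToFragment(xmlDoc, document);

    const container = document.createElement("div");
    container.appendChild(fragment);
    return container.innerHTML;
  }

  // Fall back to the server for browsers that can't do the transform properly.
  const res = await fetch("/transform", { method: "POST", body: xml });
  return res.text();
}
```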


The Mobile Dawn

So really, why are we moving away from an all-client infrastructure? Quite simply, it's because of Apple. Mobile devices had previously been thought of mainly as tools for large businesses, and even then it was usually just for email. Now we have two different scales of devices: the mobile touch pad and the mobile smart phone. These devices have revolutionized the way we think of the client-server interaction, since we want to be able to pick up on one device right where we left off on another.

Let's look at Netflix for a good example. Netflix recently announced that they'll be supporting both the iPhone and the iPad for streaming video. They've also made Netflix available on the Xbox 360 and even on the Wii, and they support a number of other devices if you don't want to buy either of those. This gives you the ability to play a movie on your iPhone, iPad, iPod, TV, or desktop. The best thing they've come up with is the ability to sync where you are in a show and pick up again on any other device. This means that you can start watching a show on your way home from the airport and pick it up again on your TV when you get home!

So what does that mean in the general sense? Quite simply, users want a seamless interaction between their desktop and their mobile devices. They don't want two totally different systems, just two different interfaces. They want to be able to pick up right where they left off on their desktop when they move to their laptop or iPad. They want to have everything synced automatically for them without having to copy files. They also want offline support, so when they go into one of those pesky areas where there's no AT&T coverage, they can still continue to work and the system will simply sync the tasks when it can. These are all things that every developer needs to be thinking about when designing their systems.
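A minimal sketch of that "work offline, sync when you can" idea: queue tasks in localStorage and push them to the server when connectivity returns, using the browser's standard online event. The /tasks endpoint and task shape are hypothetical.

```typescript
// Sketch of "work offline, sync when you can": queue tasks in localStorage
// and push them to a hypothetical /tasks endpoint when connectivity returns.

interface PendingTask {
  id: string;
  payload: unknown;
}

const PENDING_KEY = "pending-tasks";

function loadPending(): PendingTask[] {
  return JSON.parse(localStorage.getItem(PENDING_KEY) ?? "[]");
}

function savePending(tasks: PendingTask[]): void {
  localStorage.setItem(PENDING_KEY, JSON.stringify(tasks));
}

// Always record the task locally; only try the network if we appear to be online.
async function submitTask(task: PendingTask): Promise<void> {
  savePending([...loadPending(), task]);
  if (navigator.onLine) {
    await flushPending();
  }
}

// Push queued tasks to the server, keeping any that still fail for next time.
async function flushPending(): Promise<void> {
  const remaining: PendingTask[] = [];
  for (const task of loadPending()) {
    try {
      await fetch("/tasks", { method: "POST", body: JSON.stringify(task) });
    } catch {
      remaining.push(task); // still offline or the server is unreachable
    }
  }
  savePending(remaining);
}

// When connectivity comes back (e.g. leaving that coverage dead zone), sync.
window.addEventListener("online", () => {
  void flushPending();
});
```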
