Software development has changed a lot over recent years, and how we host it has changed just as much. Operational engineers used to spend their time building and configuring machines. But now you’re hearing about us using cloud providers to do the heavy lifting. So, why the change?
Historically, to increase the capacity of a system, engineers would have to purchase and configure physical machines on-premises. To cope with the peaks and troughs of demand, they would often have to over-provision or suffer periods of poor performance. This tension between performance and operational cost is a real challenge.
In the cloud, we have almost infinite capacity at our fingertips, and adding capacity is a trivial task. Taking capacity away is just as easy. In fact, we can take advantage of auto-scaling so that our system scales on its own according to demand. Not only does this mean our system is ready to handle customer demand, it most likely means we'll save money too.
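To make the idea concrete: target-tracking auto-scaling adjusts capacity to keep a metric (such as average CPU utilisation) near a target value. Here is a minimal sketch of that decision logic; the function name, thresholds and capacity bounds are hypothetical, not taken from any particular cloud provider's API.

```python
def decide_capacity(current_instances, cpu_utilisation,
                    target=0.6, min_instances=2, max_instances=20):
    """Return the instance count needed to bring average CPU
    utilisation back towards the target."""
    # Desired count scales proportionally with observed load:
    # twice the target utilisation means we want twice the instances.
    desired = round(current_instances * (cpu_utilisation / target))
    # Clamp to the configured capacity bounds so we never scale
    # to zero or run away during a traffic spike.
    return max(min_instances, min(desired, max_instances))
```

For example, four instances running at 90% utilisation against a 60% target would scale out to six, while the same fleet idling at 30% would scale in to two. Real auto-scalers add cooldown periods and smoothing on top of this, but the core proportional calculation is the same.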
Cloud providers like Amazon, Microsoft and Google are giants in the industry. They provide services that are incredibly stable and secure, in most cases far more so than any traditional set of on-premises servers. Just by using the cloud and making no changes to your system, you take advantage of this. By utilising managed services (software components provided as a service, such as databases), you can make your system even more robust.
Customers are coming to rely more and more on software as their needs change over time. The ability to keep up with those changes is at the core of software development.
In the cloud, we can deploy huge numbers of different resources and tear them down again just as quickly. That changes the game for research and development. We can do whatever R&D we want to do with almost no limits.
If you want to learn more, take a look at Cazoo’s success story – the car company that built and launched its e-commerce platform in under 90 days.