
Captain Nemo’s Data Centre Under the Sea

Enthusiasts have been water-cooling PCs and even enterprise servers for years. 

So surely water cooling an entire data centre by dropping it into the ocean can’t be that ridiculous an idea… can it?

Well, it’s not.

Because that’s exactly what Microsoft did when they deployed Project Natick Phase 2 off the coast of the Orkney Islands in Scotland.

I was intrigued and naturally wanted to learn more, to understand:

  • Why?
  • How?
  • What are the technical and ecological benefits? 
  • Is it a viable business model?
  • Can it become a repeatable solution?

… and those are the questions that this blog post will aim to answer.

With some help from Jules Verne

Twenty Thousand Leagues Under the Sea

Something instinctively stood out for me: I couldn’t help but draw parallels to Jules Verne’s Twenty Thousand Leagues Under the Sea. The pivotal character of this well-known science fiction novel is known to us as Captain Nemo, though he is later identified as Prince Dakkar.

Prince Dakkar was an Indian prince who journeyed to the depths of the ocean in his submarine – the Nautilus.

The parallels between the Prince, his submarine and Project Natick are central to this post. It’s not just the commonality between the names – Nautilus vs. Natick – but the parallels in the underlying political motivations that drove the creation of both projects:

  • Captain Nemo’s mission was driven by his aversion to imperialism, with the social injustice of the British Empire as his primary antagonist
  • Project Natick’s mission is driven by our aversion to global warming, with our ever-increasing carbon footprint as its primary antagonist

Both creations were born from a drive to change the world, deliver benefit to those around them and leave a lasting footprint on the globe.

And where did they both exist? In the depths of the ocean. 

Why does Project Natick exist?

  • Data centres have a global annual energy consumption of between 200 TWh and 500 TWh – quite a range, but one that covers the disparity in reporting and estimation
  • This represents between 1% and 2.5% of the world’s electricity consumption, which equates to between 0.3% and 0.5% of the world’s carbon footprint (a quick sanity check on these percentages follows this list)
  • When you fold the lower end of these estimates into the entire ICT sector – networking, digital devices, televisions and cellular comms – the industry today accounts for approximately 2% of global emissions. That’s equivalent to the carbon output of the airline industry!
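
A quick sanity check, assuming global electricity consumption of roughly 23,000 TWh a year (an external ballpark figure, not from the reports above) – note that the quoted range lines up with global electricity rather than total primary energy:

```python
# Back-of-envelope check on the data centre share quoted above.
# 23,000 TWh/year is an approximate external figure for global electricity.
GLOBAL_ELECTRICITY_TWH = 23_000

for dc_twh in (200, 500):
    share = dc_twh / GLOBAL_ELECTRICITY_TWH
    print(f"{dc_twh} TWh -> {share:.1%} of global electricity")

# 200 TWh -> 0.9%; 500 TWh -> 2.2% - bracketing the 1%-2.5% range above.
```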

Cooling a vast array of servers, storage and networking equipment is the largest energy burn for data centres.

Which is why Project Natick exists: to deploy data centres in locations where the requirement for cooling is not just reduced but eliminated, and where power can be sourced from renewable means.

Future Energy Projections

Our data centre energy consumption is bound to increase but there are two schools of thought here:

  • Data centre providers argue that compute is becoming more efficient and energy demand will be steady as our data requirements grow
  • Environmentalists are projecting an 8-fold increase in power consumption in as little as 5 years

It’s not surprising that there is a polarised view between these two communities – nor, given the disparity in the current actuals, that their projections differ so wildly.

However, whichever side of the spectrum you lean towards is largely irrelevant. Measuring ourselves against today’s emissions – even at the lower end – is enough to justify acting now.

What’s the specification of the submarine-like vessel?

  • The unit comprises two core components: a pressure vessel and a subsea docking structure
  • It’s approximately the size of an ISO shipping container, the kind we typically see on the back of a lorry
  • The payload in this pressurised vessel is 2 racks with 864 standard Microsoft data centre servers and 27.6 PB of storage
  • It has a maintenance-free life span of 5 years and has the data centre designation of Northern Isles – SSDC-002

What are the environmental benefits of the Project?

  • It’s purposefully positioned in the EMEC – European Marine Energy Centre – around the Orkney Islands
  • The EMEC is the world’s largest site for wave and tidal power, so the data centre runs on 100% renewable energy
  • The vessel uses a saltwater cooling system adapted from a submarine

The operational power demand for the project is entirely carbon neutral.

What are the business benefits of the Project?

Business Benefit #1 – Latency

With more organisations moving and deploying services into the cloud, the physical distance from networking hubs/offices to cloud-based applications or data stores can be problematic:

  • The fibre-optic cables that carry data are bound by the speed of light: the further data has to travel, the longer it takes.
  • You might ask why anything on earth needs to travel faster than light. The answer is latency: in synchronous data processing, where writes must be acknowledged at the receiving end before processing can continue, every extra kilometre limits performance.
  • The advent of machine learning and artificial intelligence, coupled with the fact that around 50% of the world’s population lives near the sea, will increase the demand for data to sit physically closer to people and remote devices (the sketch after this list shows how distance translates directly into delay).
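
To make the physics concrete, here’s a minimal sketch of the latency arithmetic. The two-thirds-of-c figure for light in optical fibre is a typical approximation, and the distances are purely illustrative:

```python
# Best-case latency over optical fibre: light travels at roughly
# two-thirds of c in glass, so distance converts directly into delay.

C_KM_PER_MS = 299_792.458 / 1000   # speed of light: ~299.8 km per millisecond
FIBRE_FACTOR = 0.67                # typical slowdown in optical fibre

def round_trip_ms(distance_km: float) -> float:
    """Best-case round trip over fibre, ignoring routing and queuing."""
    return 2 * distance_km / (C_KM_PER_MS * FIBRE_FACTOR)

for km in (50, 500, 5000):
    print(f"{km:>5} km -> {round_trip_ms(km):6.2f} ms round trip (best case)")

# 50 km -> ~0.50 ms; 500 km -> ~4.98 ms; 5000 km -> ~49.79 ms.
# For synchronous writes, that round trip is paid on every acknowledged
# commit - hence the value of siting data physically closer to users.
```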

Business Benefit #2 – Time to Deploy

This is where the story gets really interesting. This project took only 90 days to build and drop into the ocean. Deploying a whole data centre in less than 3 months is an incredible turnaround; it has often taken me 3 months just to get servers purchased, racked and deployed into an existing data centre. This solves two problems for cloud providers:

  • Planning consent is likely to be quicker and easier to achieve in comparison to the planning and build of a new on-site facility
  • Acquisition time and variable purchasing costs are eliminated – many hyperscalers have been increasing their cloud footprint by negotiating the procurement of existing data centres

Business Benefit #3 – Reliability

This model is going to drive more focus on the reliability and redundancy of hardware. For the hardware geeks out there, MTTF – Mean Time To Failure – will have to increase greatly (some back-of-envelope numbers follow this list).

  • As system architects, we usually design for a 5-year lifecycle
  • However, traditional deployments are rarely maintenance-free within that time frame
  • When a data centre in the ocean has a 5-year maintenance cycle, the equipment within must have sufficient reliability and redundancy to avoid re-floating, servicing and resubmerging
  • Having to physically maintain the payload within its lifecycle is unlikely to be economically viable
  • This entirely changes the focus around architecture design and reliability in the minds of engineers, architects and manufacturers
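
Some back-of-envelope sums show why. The 864-server payload comes from the spec above; the annualised failure rates (AFR) are hypothetical placeholders, not Microsoft’s figures:

```python
# Back-of-envelope failure sums for a sealed, maintenance-free vessel.
SERVERS = 864
YEARS = 5

for afr in (0.05, 0.02, 0.005):
    # Rough linear estimate of cumulative failures over the sealed lifecycle
    expected_failures = SERVERS * afr * YEARS
    print(f"AFR {afr:.1%}: ~{expected_failures:.0f} of {SERVERS} servers "
          f"failed after {YEARS} years")

# AFR 5.0% -> ~216 failures (a quarter of the payload lost)
# AFR 0.5% -> ~22 failures (absorbable with modest spare capacity)
# Spare capacity and redundancy must cover every failure for the full
# 5-year sealed lifecycle - which is why MTTF has to rise dramatically.
```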

Other Benefits

Deployment of such vessels is not limited to cold locations. You only need to drop a container to 200m below sea level – even in tropical climates – to leverage the same cooling benefits.

It also opens up a vast gateway into the unknown around data sovereignty. Using the UK as an example, the Crown Estate can only exercise territorial jurisdiction up to 12 nautical miles offshore.

The pandemic has shifted our use of real cash to digital and Bitcoin alone consumes 0.33% of global electricity. If the trend towards digital currency continues – which I’m sure it will – then this is another likely requirement that will drive energy demand upwards.

Is there a future business model here?

Absolutely…

However, if this data centre concept is going to be economically viable, then the payload will need to stand the test of time. Redundancy and reliability for a maintenance-free operation is a key requirement.

Ironically, I drafted this post before Microsoft floated the vessel last week. Though Microsoft reported a lower failure rate compared to equivalent land-based deployments, it’s far too early in the lifecycle of this programme to make predictions. There are no other independent studies – not at this scale – and there is a set of corporate optics that Microsoft’s marketing team must align to.

Captain Nemo’s Nautilus, even as a work of fiction, was an engineering achievement of epic proportions. Verne himself described the Nautilus as “a masterpiece containing masterpieces” – a testament to an imagination and scientific foresight that were unprecedented at the time.

It’s the same degree of creativity that will drive successors to Project Natick: containerised deployments where cloud providers allow customers to design and deploy their own subsea data centres.

Imagine customising a payload, having it fitted, the vessel sealed shut and dropped to the bottom of the ocean, only to resurface for a payload refit in 5 years – all in 90 days, with a whole array of opportunities to scale out by simply buying more containers.

This concept as a commoditised product would be an achievement as grand as when Verne penned the Nautilus on paper. It’s an exciting space to watch, not only for the environmental benefits but to support the imminent increase in demand for edge computing. 


Mastery of craft

We rarely – if ever – use the term “craft” to describe any discipline in the field of IT.

The term is largely associated with a carpenter, an artist, a musician or even a brewer of fine ales. By definition, a craft is a skill mastered by creating something by hand, built on a high degree of both practical and theoretical knowledge of a trade.

As professionals – in technology or otherwise, from senior leadership to programmers – aren’t we all inevitably trying to master our own craft?

Bobby at 7

I started my craft at the age of 7. Seriously – I did! 

Programming BASIC on a Commodore VIC-20 – all 8 bits and 1 MHz of compute. I suspect my parents’ motivation at the time was the games, to keep me entertained with their “high-bitrate” graphics. But the real magic was buried inside the user manual.

Chapter 7 to be precise – “Introduction to Programming”.

At the time, I don’t think my parents fully appreciated the magnitude of their investment. I suspect not many parents did.

I had no clue what I was really doing, what the syntax meant or even what a program was. But with my own hands I had crafted syntax that made the computer come alive and respond to my commands. I had influenced and created an outcome.

Naturally, this fascination only grew, and it wasn’t long before I was upgraded to the breadbin Commodore 64 – an extra 59 KB of RAM to support my coding endeavours.

Though I must admit, 10 PRINT “HELLO WORLD” didn’t demand much compute.

Bobby the elder

We don’t always have an arena in which to master or nurture our craft. When we don’t, we have training courses, books, webinars and personal coding projects. But there is one crucial component that influences our ability to execute and deploy our craft – one that the 7-year-old me, or even the 21-year-old me fresh out of university, would never have appreciated… the art of workplace politics.

When black and white becomes grey

There is an irony here. Coding is about logic, a 1 or 0. Children see things in black and white, right or wrong – clear as day. But the world of business is far from logical; quite the opposite – it can be very grey.

Our success in the workplace is largely driven by our ability to navigate the political spectrum. Like most youngsters, I was idealistic, and for years I fought what at times felt illogical. Once I learnt to accept this and nurture it alongside my craft, a wealth of opportunity opened up.

Politics is not something you can learn from books or be taught on a training course. Many people – just as I did – never really contemplate how influential this skill is in the mastery of our craft. Some may argue it sits on the periphery, but as I said, for me it’s integral.

Politics in the workplace is seldom viewed as a positive vehicle, which is largely down to motives. But if the intentions are good – to drive a successful outcome for an activity or a project – then mastering the art of politics is an immensely powerful tool.

The outcome of a project, and your credibility, can depend not only on how well you respond to political moves but on how you shape your own field of play.

Interestingly, it’s the one aspect of our craft that is ancient in origin. The advent of agriculture around 7000–8000 BC saw human societies transition into tribal forms of organisation. Politics went beyond survival; it provided a framework for the growth of humankind, enabling tribes to work together to compromise, negotiate and make decisions.

This extends into every aspect of our lives:

  • At some point, our children stop listening to us, and we soon learn how to navigate that political spectrum. We provide them with tools, means and support to empower them, hoping they will make their own informed decisions and learn from their mistakes.
  • In the workplace, we may have the answer to a problem. But sometimes people aren’t receptive to our answers, so we lead, steer and provide insight to enable them to carve out the right answer. 

There is a fine line between teamwork and politics, and sometimes it’s not easy to distinguish the two. In both examples, however, managing personality conflicts and personal agendas can not only produce a better-quality outcome but is more likely to promote ownership – and it’s ownership that drives real change.

In contrast, it’s equally true that underlying political motives can drive a sub-optimal outcome.

If I could go back in time

Look at your job or role through the lens of mastering a craft: being the best at what you do, mastering the practical and educating yourself on the theoretical. All the traditional markers of growth – be that climbing the rungs of the corporate ladder or the size of your remuneration package – will naturally follow.

If I could go back in time, and advise 7-year-old Bobby on anything – it would be…

Master your craft – you don’t need to be gifted or talented. Pay attention to the process and invest time in improvement. Small gains every day have a compound impact, just like interest on a savings account. Invest in experiences; they pay dividends in themselves. Don’t be afraid to make mistakes, and it’s OK to deviate off your path to explore new avenues – no experience is ever wasted.

In the next episode of my podcast I let you in on a secret – something not many people know about me. For the last 20 years I have looked upon a particular experience as a lost year. But viewed through a lens that is now transforming my life, it has enabled me to create something new.

I suspect many of you in the field of technology have a similar story. When that first computer landed on your desk or dinner table – all those parents would never have imagined the influence of that moment. Is this unique to technology?

How many other professionals were given the tools to start mastering their craft from an early age?

A writer inspired by a book; a painter inspired by their crayons…


Quality decision-making – What’s your boardgame?

A boardroom is one of those occasionally used spaces, off the grid of the online booking system and with its own sense of exclusivity. I should remember boardrooms for their character-building – at times character-destroying – experiences. But it’s the scent of the room that I recall most.

Undertones of leather masking an obscure aroma of heavy cleaning products – the reflection of a room frequently cleaned but infrequently used. When occupied, the smell of fresh coffee provides an almost recognisable warmth. But it’s not a room to get too comfortable in. The tide can turn quickly, and the boardroom can soon become – dare I say it – a war room.

As I stand at the table, I almost feel like I don’t belong here, outranked by the industry equivalent of five-star generals and outflanked by an onslaught of questions. When the attack commences, this room can be a lonely place; few allies are willing to put their heads above the parapet for fear of the attack changing trajectory.

I’ve been here before. No matter how well prepared and how well socialised a final proposal is, the most talented leaders know when and how to ask the right questions – questions designed to derail your presentation and expose inadequacies in your readiness.

And why shouldn’t they? I’m not only asking for endorsement of a significant financial commitment; failure to deliver could have career-impacting implications for the leadership.

As I pause to focus and structure my response, let me explain how I ended up here…

The process

Most IT projects are conceived in or around some type of procurement process, which reaches its apex at a crucial decision point: a preferred supplier – be it software, services or hardware, the differentiation is largely irrelevant.

If the process is run correctly, this shouldn’t come down to one or two decision-making meetings. As the requirements and the solution evolve, so should the stakeholders’ journey.

Poor decision making is one of the most common and costly pain points of any IT delivery project. The ability to make decisions is one of the core tools in any leader’s toolbox. For a technology-enabled change, it’s often my role as Lead Architect to act as the advisor and influence the quality of that decision.

A chess Grandmaster knows that a quality move is one that requires a piece to be moved only once. Several quality decisions provide leverage over the entire game. In the context of business, this leverage takes various forms: from risk avoidance to alleviating project time pressures and providing measurable value to the business.

Quality decision making, however, is largely contextualised in the eye of the leader. We rarely see it through the lens of the advisor.

Quality of the advisory

Decisions are made upon the information known and presented at a given point in time. As we progress through the project lifecycle we learn, we innovate, we find new solutions. I believe there are three core principles that directly influence the quality of a decision:

  1. Frame – How you introduce your proposal, the problem statement, solution and benefits is a journey. Your stakeholders should be taken on that journey, slowly building up collateral and momentum as you approach the final approval gate. Properly framing a decision validates that you are trying to solve the right problem. This process is key in gaining executive alignment. Many ERP projects fail due to lack of executive support.
  2. Options – Should all be profiled around risk, impact, benefit and cost. Being creative about all the possible options really engages the learning process. However, as that process evolves, the number of options will naturally dial down. Only options that are achievable, meet the same goal, are complete (not strawmen) and yet are sufficiently different from each other should be presented. Depending on the scenario, leading stakeholders into developing a hybrid or flavour of an option can be an incredible engagement enabler – though there is a fine line here.
  3. Time – Leadership respond well when they have not only influenced the shape of the journey but been given sufficient runway for reflection and challenge. However, this must be balanced right-to-left, without compromising the critical delivery path.

Observing leadership styles and team dynamics can add significant value. The leadership, like any other team, are likely to have their cliques and internal political motivations. 

I once tagged along to leadership meetings purely to observe, familiarising myself with body language and mannerisms and picking up on small quirks. One that always stuck in my head was a certain leader’s distaste for colour-gradient fills in a presentation. In his opinion, red, amber or green should be solid colours; a gradient fill from amber to green indicated indecisiveness! This was clearly a personal preference and may even seem trivial.

However, these observations are about removing the noise, allowing you to maximise your limited time and deliver your message without compromise or distraction.

Back to the boardroom

I not only survive the onslaught of questions but give leadership sufficient confidence that the project’s recommendation is the right way forward. My approach drives a debate across the options, and we secure endorsement to proceed – not without a few obligatory caveats.

Lobbying for support outside of a formal setting is crucial in bringing risks and concerns to the surface. Addressing these, both officially and unofficially, is a powerful tool in gaining support. But don’t get caught off-guard: any loyalty you have gained can soon swing if the mood of the room changes.

However, you don’t always need the full support of the room. With the right counsel, good leaders should have the confidence and experience to make a decision without consensus. There will always be abstainers – those sitting on the fence, wanting to distance themselves from any future failure. A key challenge in the role of the advisor is to judge the political position, remove the emotion and allow the facts to be seen unclouded.

Let’s also not forget that the outcome is not always a reflection of the quality of a decision. Many projects I have delivered could quite easily have failed due to other issues, even where the right technology or supplier decision was made. Many nearly did – but that’s a post for another time.

Choice of technology and solution is only one part of the puzzle. It’s all about engagement, approach and education. Just as a chess Grandmaster looks for leverage with every move, enable your leadership to make quality decisions.



Mumbai calling – The IT project saved by a biscuit

Let me take you on a journey, from Birmingham to the city of Mumbai. As the plane descends onto the runway, the first thing that strikes you is the sheer scale of corrugated-iron and blue-tarpaulin-roofed huts. The sprawl of the nearby slum wraps itself around the airport, tightly consuming every square inch of surrounding space. The slums of Mumbai tell a story in themselves – micro-economies that flourish in a city offering everything from high tea at the opulent Taj Hotel to the gritty energy of Leopold Cafe. As you brace for landing, be prepared for a city that will wrap you in its every emotion.

Welcome to Mumbai…

You may be mistaken in thinking this is a travel blog. It is and it isn’t!

I was reflecting on a business trip I took to Mumbai last year and the impact current events may have on travel in a post-pandemic world. Even if we wind back a few months to a pre-pandemic world, the emphasis on international business travel had been steadily declining. At a time when our working teams are more blended than ever, the cost of international travel has become harder to justify. Yet there is an argument that it’s actually needed more than ever.

Naturally, the current crisis has seen an exponential shift to home working. But as I read articles, posts and even company pledges committing to home working, I wonder: will this come at a cost to relationships, creativity and the many other benefits of face-to-face working? A video conference, a phone call or an email is a poor substitute for some of the most crucial aspects of productive working.

The purpose of my trip

The primary objective of my business trip to Mumbai was to lead a team in solving a technical issue that would otherwise have severely impacted the critical path of a project. I was also keen to use the visit to build on already established relationships with my team. In my view, both objectives were equally important, but unfortunately, in today’s world, it’s harder to justify the latter.

This is a trusted team that I have known well for many years and a team dynamic that I thought I understood well. However, I was pleasantly surprised by my misjudgments! Some of the team dynamics were impossible to gauge remotely.

Outside the meeting room you see individuals work their magic on the office floor, negotiating, supporting and motivating the team. The casual conversations and playful exchanges are key to promoting healthy and productive team working environments. It takes certain individuals to drive the right culture and promote a way of working that not only delivers results, but promotes inclusion – which in itself is one of the key ingredients of “the team” recipe.

The biscuit

Inside the meeting room, the ability to use a whiteboard, look over each other’s shoulders to analyse data and get a feel for how ideas and solutions are being received is incredible. You really feel the emotion of a room and how people bounce off each other – key indicators that shape the creative energy driving an outcome. After many days of troubleshooting, it was during an outdoor tea-and-biscuit break on a warm and humid Mumbai afternoon that we hit upon an idea. Just to be clear: there are not many things in India that cannot be solved over a hot cup of chai (tea) and the obligatory Parle-G biscuit.

The Parle-G is a rectangular, malty and slightly sweet “gluco-biscuit”. India’s most popular biscuit is recognisable by its decorative border with its name firmly stamped in the middle. The Parle-G is the perfect accompaniment to tea, with a reputation for sucking all the moisture out of your mouth.

It’s this opportunity and ability to look at problems together as a team that made the real difference. The issue we solved was key to the success of a multi-million-pound project – and it’s the Parle-G, the one thing you can’t share remotely, that we have to thank.

Reflection

Several characteristics contribute to building relationships, from trust to cultural awareness, and these also have a bearing on how we develop our own interpersonal skills. These qualities can’t be built remotely; they have to be built on experiences, with real-life contact and emotion. Innovation and creativity thrive in these environments.

Those all-important relationships are called upon when you need them most. The support of your team, colleagues and leadership is really defined by what you establish outside the meeting room.

Depending on where you are reading this blog, I don’t necessarily need to take you to the other side of the world to give you a reality check on the importance of face-to-face working. The principles of my message apply universally.

Those unexpected five-minute “water cooler” conversations, bumping into a colleague in the hallway and even exchanging a few words over a desk all contribute to fuelling our productive machine. Incidentally, I once secured a job not by what I said in the meeting room but through the snippets of conversation we shared in an elevator – they gave my interviewer a real view of me and my experience.

Businesses and people today are responding to an unusual event, unseen in our times. But as we adjust to this shift, we have to remember it is temporary. We will learn lessons, and inevitably the world will find its balance again – hopefully in a better state. What we have learnt from responding to this pandemic should contribute to shaping a better work/life balance.

I usually end my posts with key points to take away, but I don’t think this one merits it. I honestly don’t know what office-based working – locally or even globally – will look like in a post-pandemic world. How we exit this pandemic will have a huge impact on the direction of travel.

My key message is not to lose sight of the direct and indirect benefits of face-to-face working – how they contribute to our goals at work and how they shape our personal development.


Unclouding – Do you need a cloud exit strategy?

The CEO of a leading cloud provider was recently quoted as saying that organisations not transforming their business to adopt the cloud were defying gravity, and that by not going “all-in”, toe-dippers risked giving their competitors the edge.

The benefits of cloud computing for ERP are immense and well publicised. However, I feel we have little counterbalance to this view, largely owing to a growing partiality among industry research and advisory groups that are “supported” by cloud providers to push a cloud-first agenda, leaving little room to consider the merits of other options.

Putting the cart before the horse

The most common error in technology-enabled change is starting with the solution. It’s therefore not surprising that many cloud ERP implementations fail to deliver their intended benefits, while subjecting organisations to massive implementation and run costs.

Inevitably, there is a growing trend towards cloud exit – or, as I like to call it, unclouding. It’s a real thing, with big names like Dropbox having reversed their approach.

The importance of an exit strategy

Companies running ERP today fall into three categories when it comes to the cloud: all-in, partially in, or considering it. Whichever category you fall into, and whatever stage you are at in your cloud lifecycle, it helps to understand why organisations have unclouded or are likely to. An exit can be both financially and politically costly; developing and maintaining the right strategy is therefore key for exit avoidance and/or readiness.

Why are organisations exiting the cloud?

The core reasons for “unclouding” are security, regaining control and – most commonly – reducing cost. Yes, that wasn’t a typo: reducing cost!

One of the consistent messages coming out of the cloud sales push is that you can’t run infrastructure as cheaply on-premise. But apply this to largely steady ERP workloads and it might surprise you to hear that you can – though of course there are numerous variables in that statement. We constantly hear that the cloud requires a different operating model to leverage cost savings and operational benefits, but again – is that not putting the cart before the horse?

A common example is where cloud providers, implementation partners and advisory groups leverage the flexibility of the cloud to architect “on-demand” solutions for ERP environments. This shapes the business case to reduce operational costs by shutting down test environments (when not in use) and scaling your production environment up and down to meet demand. But there are some real issues with this approach (a rough cost sketch after the list illustrates the first two):

  1. If you power down systems, you still have to pay for storage costs
  2. Database servers require pay-by-hour database licensing, but this is generally more expensive than purchasing perpetual licenses. License exchange discounts pitched in the sales cycle (moving from one database vendor to another) do not usually apply to the pay-by-hour model.
  3. Most ERP businesses are global and sizing/costing peak demand is usually limited to month-end/year-end processing. This level of frequency is usually insufficient to leverage significant cost savings. True elasticity in your demand needs daily extremes, which are not always common in most ERP solutions.
  4. System administrators are reluctant to shut down test environments or scale production up/down, due to the disruption and associated risks. Automation tooling is still maturing and can be time-consuming to implement and maintain.
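
To illustrate points 1 and 2, here’s a rough sketch of the monthly cost of a single “on-demand” test environment versus the naive “pay only for what you use” projection. Every rate is a hypothetical placeholder, not a real provider price:

```python
# Illustrative cost model for one cloud ERP test environment.
# All rates are hypothetical placeholders - substitute your provider's.

HOURS_PER_MONTH = 730

compute_per_hour = 2.40              # DB-class instance
db_licence_per_hour = 4.00           # pay-by-hour database licence premium
perpetual_licence_monthly = 650.00   # amortised perpetual licence + support
storage_per_month = 900.00           # attached storage, billed even when off

def monthly_cost(active_hours: float, pay_by_hour_licence: bool) -> float:
    """Total monthly cost for an environment powered on `active_hours`."""
    licence = (db_licence_per_hour * active_hours if pay_by_hour_licence
               else perpetual_licence_monthly)
    return compute_per_hour * active_hours + licence + storage_per_month

always_on = monthly_cost(HOURS_PER_MONTH, pay_by_hour_licence=False)
on_demand = monthly_cost(200, pay_by_hour_licence=True)  # working hours only

# The naive business case just scales the always-on bill by utilisation:
naive = always_on * (200 / HOURS_PER_MONTH)

print(f"Always-on:        {always_on:8.2f}")   # ~3302
print(f"Naive on-demand:  {naive:8.2f}")       # ~905  - the sales pitch
print(f"Actual on-demand: {on_demand:8.2f}")   # ~2180 - storage + licence
# Storage never stops billing (point 1) and the hourly licence premium
# (point 2) erode most of the projected saving.
```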

Another cost factor often built into a business case is the reduction of your infrastructure operations team – but in reality, that team just changes shape.

I know I am only focusing on a few examples here, and clearly business case modelling will continue to evolve. There are a multitude of reasons why cost comparison exercises between on-premise and cloud are likely to provide a distorted view of savings.

How do I develop an exit strategy?

An exit strategy is far wider than the mechanics of how you would perform the exit and where you are likely to land. We need to take a step back, identify the likely triggers (cost, performance, availability, security, etc.), and understand how we measure them and the levers available to respond (the sketch below shows one way to make those triggers measurable).
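
As a minimal sketch of what “identify the triggers and measure them” could look like in practice – the metric names and thresholds are entirely hypothetical:

```python
# Minimal sketch of measurable exit triggers for governance reviews.
from dataclasses import dataclass

@dataclass
class ExitTrigger:
    name: str
    threshold: float
    description: str

    def breached(self, measured: float) -> bool:
        return measured > self.threshold

triggers = [
    ExitTrigger("cost_ratio", 1.25,
                "cloud run cost vs. on-premise baseline"),
    ExitTrigger("p99_latency_ms", 250.0,
                "99th percentile response time for key transactions"),
    ExitTrigger("availability_gap", 0.001,
                "SLA shortfall vs. contracted availability"),
]

# Fed from your monitoring stack each governance cycle (values invented):
measurements = {"cost_ratio": 1.31, "p99_latency_ms": 180.0,
                "availability_gap": 0.0002}

for t in triggers:
    status = "BREACHED" if t.breached(measurements[t.name]) else "ok"
    print(f"{t.name:<18} {status:>8}  ({t.description})")
```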

There is merit in devising an exit strategy whilst you develop your business case. The risks, impacts and assumptions that build a business case are key inputs into this strategy.

The key takeaways here are:

  1. Identify your exit criteria and levers, and ensure your governance model continues to monitor and measure them. Invest in an innovation team, as operations are always too busy keeping the lights on. Empower this team to drive benefit by executing the levers and identifying new ones, keeping the exit criteria in check.
  2. Have a high-level exit plan that covers your likely landing options, the impact to the business, the cost/risk of a migration, etc. Also consider the architectural limitations that any future decisions will impose on the exit strategy. Your exit criteria may single out certain apps and scenarios, perhaps even resulting in a hybrid cloud across multiple providers!
  3. Do your homework and due diligence by investing time and sourcing the right skills to develop a robust business case. This is a moving target and is constantly evolving, but that’s technology in general.
  4. Design to promote mobility – especially if you are greenfield. Try to avoid painting yourself into a corner with one cloud provider; design your architecture to keep your options open and limit the impact of moving between providers or back on-premise (a minimal sketch of this idea follows the list).
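
To illustrate takeaway 4, here’s a minimal sketch of designing for mobility: the application codes against a thin internal interface rather than a provider’s SDK directly. The classes and method names are illustrative, not real SDK calls:

```python
# One way to "keep your options open": depend on a thin internal
# interface, not a provider SDK. Everything here is illustrative.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """The only storage API the ERP integration layer is allowed to see."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in used for tests and on-premise pilots."""
    def __init__(self) -> None:
        self._blobs: dict = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_invoice(store: ObjectStore, invoice_id: str, pdf: bytes) -> None:
    # Consumers never see the provider - swapping clouds means writing one
    # new ObjectStore subclass, not rewriting every caller. The same
    # principle applies to queues, identity and secrets.
    store.put(f"invoices/{invoice_id}.pdf", pdf)

archive_invoice(InMemoryStore(), "INV-1001", b"%PDF-1.7 ...")
```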

Benefits of an exit strategy?

Developing the exit strategy early not only provides crucial “checks and balances” on the proposal; once you are in a run state, it provides a framework to govern and drive operational efficiencies. In the event of an (unlikely) exit – partial or full – you are somewhat prepared and have collateral to support contract renegotiations.

Don’t underestimate the cost and impact of an exit. Some organisations will have undergone costly transformation projects to move to the cloud, and it is not uncommon for complex migrations to cost the equivalent of many years’ worth of cloud run costs.

If you have made a solid commitment to your cloud journey and an exit would be politically damaging, there is still enormous value to your business in developing and maintaining an exit strategy.

Finally – and without contradicting my opening sentiments – we do have to applaud the early adopters. It is the organisations brave enough to take the plunge and learn the hard lessons that have enabled the rest of the industry to benefit and to make more effective decisions.


Update – 8th August 2020: Not all organisations give in to the market pressures to accelerate digitisation. Listen to the soundbite from Episode 3 – “Staying behind the digital curve” – which explores the Aldi business model.

Full podcast episode is available here.



Launching your ERP into the clouds

The words “digital” and “cloud computing” seem to be embedded throughout every ERP presentation today. You can’t get away from promises of reducing risk, cost and faster deployments to enable your “digital transformation”. But when you lift the lid – what does this really mean?

When a mission-critical SAP/ERP implementation undergoes a major technology-enabled change (a move to the public cloud, a migration to HANA or a system upgrade), there is a common denominator – the main event: the production downtime window to go live!

The public cloud offers a multitude of benefits; however, lurking underneath the covers is an array of risks and issues likely to sting you during the main event.

Don’t get me wrong – I totally support the cloud movement, though digital transformations can be supported on-premise too, as they have been for the last 20 years. My aim is simply to educate and inform, to ensure the risk profile of production cutovers in the cloud is understood.

It’s all about visibility and control – these are the two key things you lose.

The downtime window

Mission-critical ERP maintenance windows are often fixed and sometimes agreed with the business a year or more in advance. With such advanced scheduling, there is likely to be limited insight into how they will be used – and sometimes they prove insufficient for the change they are allocated to.

Even when you can influence the duration of the window, you will still be subject to business constraints and held to early, experience-based estimates.

Whatever environment you are operating in, there is a common challenge: execute a complex and often multi-dimensional change in a production downtime window that is never long enough!

So, we make the impossible possible by reducing the technical runtime, whilst at the same time reducing risk, eliminating variables and creating a repeatable recipe.

The approach

Over the years I have developed an approach to making that impossible possible. It deserves a separate blog post, but in a nutshell: identify the levers, variables, risks and benefits, then devise a strategy to rinse and repeat – prove it, break it and document it!

Cloud computing simplified and reduced the cost of the key on-premise prohibitor: compute and storage. The ability to stand up instances (at the right size, scale and config) and pay by the hour for the privilege became a real game changer, allowing us to focus on the creative levers that help solve the problem.

The team working on this will feel like the challenge is nothing short of launching a rocket into space… well, we like to think that!

The end product

Once we have mastered our recipe, our toolbox is equipped with a technical runbook and a detailed cutover plan. Somebody is assigned to ordering pizza, while the rest of us prepare for days without sleep (or catch what we can on the office sofa or even the floor – I’ve done both).

Authority to proceed

You secure a GO decision from the exec and the team quickly moves into executing the plan. People are mobilised, technical processes are running, governance checkpoints are governing, and we are full steam ahead.

Two types of events are likely to occur when things go wrong: something breaks and everything stops; or – the most painful of all – everything slows down. You are unable to achieve the benchmarks you recorded in your rehearsals, and now your plan and contingency are at serious risk.
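
Here’s a minimal sketch of how a cutover team might catch that slowdown early, comparing live step durations against rehearsal benchmarks and projecting the finish time. The step names, durations and window are all hypothetical:

```python
# Sketch: spot slippage early by comparing live runbook step durations
# with rehearsal benchmarks. All names and numbers are hypothetical.

rehearsal_benchmarks = {        # hours per runbook step, from dry runs
    "export_source_db": 6.0,
    "transfer_to_cloud": 8.0,
    "import_target_db": 7.0,
    "post_processing": 3.0,
}
WINDOW_HOURS = 30.0             # agreed production downtime window
# Planned contingency: 30.0 - 24.0 benchmarked hours = 6.0 hours

def project_finish(completed: dict) -> float:
    """Actual hours so far, plus remaining steps scaled by observed slip."""
    slip = sum(completed[s] / rehearsal_benchmarks[s]
               for s in completed) / len(completed)
    remaining = sum(h for step, h in rehearsal_benchmarks.items()
                    if step not in completed)
    return sum(completed.values()) + remaining * slip

# Two steps done, both running ~30% slower than rehearsed:
actuals = {"export_source_db": 7.8, "transfer_to_cloud": 10.4}
projected = project_finish(actuals)
print(f"Projected runtime {projected:.1f}h vs window {WINDOW_HOURS:.1f}h "
      f"-> contingency {'exhausted' if projected > WINDOW_HOURS else 'intact'}")
# Projected runtime 31.2h vs window 30.0h -> contingency exhausted
```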

It’s only then, in the dark of the night, that an exhausted team (with the eyes of the execs on them) realises how vulnerable it is, due to a lack of visibility and control.

A recent project involved a complex migration of a large SAP implementation to the cloud. Even though the migration involved a database and operating-system change, we soon realised there was less risk in moving all of SAP Production in a single event. Data transfer was one of our biggest challenges; we addressed it via a complex set of daisy-chained events across several transfer links.

When it starts to go horribly wrong

and the adrenaline kicks in…

Once data hit the first staging area in the cloud, something started to smell wrong: everything was running much slower than planned. Incidentally, Hurricane Florence was battering the US East Coast at the same time. Even though our change was in Europe, news emerged after the event that cloud providers had been moving loads from North America to Europe to ensure availability. Replicating huge volumes of data and shifting compute demand was likely to stretch hypervisors and push even the most highly provisioned storage solutions to their limits. Yet no incidents, reports or status updates were declared by the cloud provider.

On another project, an application upgrade slowed far below our benchmarks. There were no hurricanes or world disasters to blame on that occasion, and we never really identified the root cause, although our analysis (once out of the heat of battle) suggested it may have been unrelated to the cloud infrastructure.

Both examples bring us back to visibility and control. The decision for cloud providers to move workloads was entirely their own; they have availability SLAs and other customers to look after too. When you’re under pressure, a lack of visibility across the full infrastructure platform severely impacts your ability to troubleshoot effectively, and it soon becomes a distraction to the working team and stakeholders.

What did we learn?

You soon learn how to magic contingency from a plan that doesn’t seem to have any left – that in itself is an art.

Let’s not forget the cloud isn’t really a magical, unified layer of compute, always on and always performing, somewhere in the sky. It’s a complex amalgamation of data centres designed to provide an unprecedented degree of scale and availability. But when it comes to maintaining mission-critical ERP, you have to understand that the cloud provider’s decisions and changes are not made with your go-live in mind. That lack of control is a real risk, and if an incident occurs, getting the right level of visibility to support troubleshooting can be a real challenge.

The key takeaways here are threefold:

  1. Sufficient contingency – plan it in, but also know how and where you will find more if it becomes exhausted
  2. Set realistic expectations with the exec around the technical control/visibility risk of your go-live
  3. If you haven’t yet moved or deployed mission-critical ERP in the cloud, reflect on the potential impact that extended (if infrequent) planned downtimes may have on your business

Alternatively, you could argue that the cloud provided a degree of resilience that enabled our change to complete without a hardware failure, plus the ability to tap into an immense amount of pay-as-you-go compute. Even on-premise, there is always a risk of hitting unexpected issues not witnessed during rehearsals (even a natural disaster). The argument can swing both ways, but again, it all comes back to visibility and control.

Mission-critical ERP systems are the crown jewels that run your business. The cloud is an incredible enabler, but it does come with inherent risks and challenges that we should be tuned into.
