Agile series: Evolved roadmaps

Building roadmaps can be tough, especially when working within an agile culture. Sadly, most roadmaps are the result of some form of Gantt chart or project plan, so what you see is a waterfall-style view of what is being worked on and what is landing when. In this post, I thought I would share the type of roadmaps I like to build or see.

Evolved roadmaps fit for the agile world

A product roadmap is a highly communicative and rather powerful tool. Because of this, it can also be a source of friction between the “business” and technology. Often, if you see a date in a roadmap, the expectation is that everything will be done by that date. This is because, traditionally, everyone thought you could predict how long things would take to build: the method to follow was a strict feasibility study, strict planning, understanding the problem and then knowing how long it would take to execute the solution. Throw in some testing time, release windows and some tolerances, and there was your rather “waterfall”, highly structured project plan and roadmap.

The reality is that structured project plans work exceptionally well when you do NOT have any unknowns. In technology, we have unknowns. Often we do NOT know how long something will take to build, and if something is more innovative, or depends on some newer technology, then the number of unknowns goes up. This all means that prediction of the future is going to get less and less accurate, rendering your nice Gantt chart / roadmap pretty useless. This is why we have seen the rise of “agile” in technology for many years now, and it’s all about a simple principle which I have mentioned various times in this series: value “Delivery over Predictability”.

We should therefore think of our product roadmap as more of a strategic roadmap. By thinking in this way we start to remove the specific dates and focus on what matters, delivery, so we are more focussed on outcomes.

Think of value

If you are able to move your thinking away from a specific output on a specific date, then you are starting to think more along the lines of delivering strategic value, prioritisation and a pathway forward. This isn’t too dissimilar from an old-fashioned roadmap; however, the focus is on delivery and strategic outcomes, stepping stones to ultimately where you want to get to.

Since our focus is now on value creation, and we are following an agile engineering culture, we don’t need our roadmap to be highly detailed. Great levels of detail detract from the power of a roadmap as a communication tool; if it’s too complex it loses its impact and the macro view of your strategic objectives. With this in mind, keep it high-level, keep it to a single page.

Now, I know there will be many readers shouting “I need a date!!!”. I have had this discussion with “delivery” departments, exec and pretty much every type of stakeholder you can imagine: you don’t need a fixed date with a fixed set of deliverables. We think we do because it gives us some form of comfort that we can plan accurately. The reality is that it’s an illusion; you are not planning accurately. So, you can continue to live in an illusion, or you can accept it and ask, “what can we do then?”

Milestones

I don’t have issues with supplying some form of dates; however, I do have issues with pegging dates directly to specific deliverables. For me, dates should be about “goals” and “milestones”. When we think in this fashion, we decouple specific deliverables from the time. For example:

  • Goal – deliver automation. We can set a time frame, which should be broad, say Q3.
  • Milestone – automation for processes x, y and z all completed.

Now the date is associated with the goal. And yes, the goal could hold detail like automation for processes x, y and z, but it’s not a hard delivery. The milestone has no fixed date; rather, it could be seen as a retrospective marker. We can therefore hit our goal but not the milestone, since as part of our goal we only automated processes x and y.

Alternatively, you can have a goal and a milestone that are very similar, as in “we hope v1 of our product goes live on a specific date”. This is both our goal and a milestone. The key here is not to over-define v1; rather, v1 is whatever has been delivered into production on that date.

MVP (Minimum Viable Product)

MVP is a fabulous concept: what is the minimum we need to be able to launch? You then iterate post launch to continue to deliver more and more functionality into your product. I say this is a fab concept, but I am not a fan, because it’s open to abuse. I have yet to see any form of exec or even “product” focussed management stick to a true MVP. What happens in reality is that the MVP becomes bloated with things that we “think” the customer wants, or we “believe” we internally need in place. The truth is, unless the customer has specifically told you, then you don’t know. Secondly, there are many internal ways of ensuring you can support your product live; they may not be ideal, but until you are launched do you really understand all of your needs, or the priority of those needs? So ask yourself, are you really pushing the MVP or are you adding what “you think” should be in there? The MVP concept is often not challenged enough.

Larry Ellison (founder of Oracle) reportedly asked his teams, when launching Oracle:

“When will we be ready to ship to our customers?”,

to which he got the reply, “we need to add in x, y and z because our customers expect this”.

“Does it compile?” Larry replied.

“Yes”.

“Ship it!”

Now this is a true MVP. Larry knew that by shipping there and then he had achieved a goal and a milestone: the goal of getting product into the marketplace and the milestone of receiving revenues. Was the product ideal? No. Did it work how he or his engineers wanted? No. Did it work how his customers wanted? Only his customers knew that, and now that they had received the product, they could tell him first hand based on real use.

Components of the evolved modern roadmap

Your modern roadmap is, as I have said earlier, about strategic objectives and focus. Let us look at the components that help you provide this. The best way to show this is with a roadmap, noting that I am trying to keep this very generic, which is a trick in itself:

A modern roadmap sample

Vision

Your roadmap needs to ensure alignment, so include in it your vision statement.

Business objectives / intent

You may have different intents you want to focus on; relay these intents back to your technical roadmap for delivery and make sure they remain aligned and coupled. This should be easier to achieve if you have an effective product steering committee.

Strategic focus / timeframes

Keep this broad and do NOT put dates on there; rather, try to show that things “closer” in time are more solid as deliverables when compared to those later in the roadmap. The roadmap must remain agile and fluid; however, there is a point where things need to be locked. They have to get locked in order for engineers to pick up and actually work on what needs to be delivered.

Ice, Water, Steam and Air

This analogy allows me to show that things become more fluid the further out they are, in terms of priority and as deliverables. I have seen other roadmaps use terms like “now, next, then, later”. When work is in water, steam or air, you may move workloads into other columns, add to them, juggle as much as you want to. But once it moves into “ice”, you are locked and loaded.

Items

The items in the individual “blocks” that make up the roadmap, held within their relevant prioritisation column, will be high level and business focussed. For example:

              “Deliver feature x, to allow us to broaden our customer base”

              “Deliver feature y, as customers are demanding this additional feature”

Now, you can decide to tag these with the relevant “epic” or “work items” that your engineers are keeping in say Azure DevOps, but this can actually become quite an overhead, so I wouldn’t say it’s something you must do, rather it could be of use if you can get everything tied together nicely.
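
To make this concrete, here is a minimal sketch of how such a roadmap could be modelled in code; the column names match the ice/water/steam/air analogy above, and the vision and item text are purely illustrative.

```python
from dataclasses import dataclass, field

# The four prioritisation columns, from locked to fully fluid.
COLUMNS = ["ice", "water", "steam", "air"]

@dataclass
class RoadmapItem:
    description: str       # high level and business focussed
    epic_tag: str = ""     # optional link to an epic in, say, Azure DevOps

@dataclass
class Roadmap:
    vision: str
    columns: dict = field(default_factory=lambda: {c: [] for c in COLUMNS})

    def move(self, item: RoadmapItem, from_col: str, to_col: str):
        # Items may be juggled freely between water, steam and air,
        # but once something sits in "ice" it is locked and loaded.
        if from_col == "ice":
            raise ValueError("items in 'ice' are locked for delivery")
        self.columns[from_col].remove(item)
        self.columns[to_col].append(item)

roadmap = Roadmap(vision="Be the easiest bank to integrate with")
feature_x = RoadmapItem("Deliver feature x, to broaden our customer base")
roadmap.columns["water"].append(feature_x)
roadmap.move(feature_x, "water", "ice")  # now locked, ready for engineers
```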

Conclusion

A good roadmap primarily needs to ensure strategic alignment; this means it is “outcome” based and not based on specific deliverables or the output of your teams. Never get hung up on specific dates; focus on goals and milestones. In doing so, you are valuing delivery over predictability.

Your roadmap, even without dates, gives you what you need. If you need to coordinate business operations, legal, customer communications, or even head back to your investors to raise more funds, you know when to focus on which aspects, and you get quite accurate goal dates. This is because you have that alignment, and you know what is being focussed on, because work is now sitting in that “ice” area of your roadmap.

Now, you may not have specific timelines, but you get to understand delivery dates more and more as you witness work flowing into production. When you look at large technology companies, they now announce products once the product is already in production, or vast amounts of it are. More often than not, the product is already being used by pioneering customers. Now that you know how to build a modern evolved roadmap, try implementing one and start to measure the benefits you experience.

Liquidity, the overlooked component of payments

All too often, whenever we discuss payments, we forget about the actual underlying liquidity. This is true of Faster Payments (FPS) here in the UK, SEPA in Europe and pretty much anywhere else in the world. The most glaring error, though, is overlooking liquidity when we talk cross-border payments.

So why is it overlooked? I can only assume it’s because payments are typically all about payment messages, rather than the actual flow of funds. When we talk cross-border payments we are constantly talking purely about the messaging. SWIFT GPI is seen as a cross-border payment component, but all it does is track a message. A message which isn’t actually the movement of funds. When we talk FPS we talk about the messages, the payment instruction, the acknowledgement and then notifying our end customer. When messages flow over the FPS infrastructure, money has not moved, simply because, like most instant payment systems, FPS is a clearing system. These clearing systems then move money via scheduled batch processes which move the money and complete the settlement element of the payment. Essentially, FPS is a message that contains a “promise” to move money / make a payment. The promise is backed by locked-in liquidity, so the recipient does know they will get the money.

In terms of payments though, just think what is happening from a liquid funds point of view. The sending bank has liquidity locked away to back up its promise to pay the crediting bank. The crediting bank is paying that money to the end beneficiary, who may want that in true liquid terms and withdraw cash. That means the crediting bank is leveraging its own liquidity to facilitate crediting the end beneficiary. So, my payment of £1 means £1 is locked away by my bank, and the crediting bank has used its own £1 to complete the payment. In terms of liquid cash, £2 could be being used for a single £1 payment. However, things get worse. For the sending bank, the money that moves is not the money that is backing the promise; no, it’s liquid funds in the central reserve account that move as part of the settlement process. So, my £1 payment means my sending bank has locked £1 away to back the promise of the payment, made the payment with an additional £1 taken from its reserve account, and the crediting bank has temporarily (potentially) stumped up its own £1 to pay the beneficiary, though it will receive £1 in its settlement account later. At a macro level then, moving £1 means I have for sure used £2 and may have needed £3.
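
A quick worked example of that arithmetic, purely illustrative and using the £1 figures above:

```python
# Illustrative arithmetic for a single £1 FPS payment, per the scenario above.
payment = 1.00

# The sending bank locks collateral to back its "promise" to pay.
sender_collateral_locked = payment        # £1 locked away

# Settlement later moves different liquid funds from the reserve account.
sender_reserve_moved = payment            # a further £1

# The crediting bank may temporarily front the beneficiary from its own
# liquidity before it receives settlement.
crediting_bank_fronted = payment          # potentially another £1

certain_usage = sender_collateral_locked + sender_reserve_moved
worst_case = certain_usage + crediting_bank_fronted
print(f"£{payment:.2f} payment: at least £{certain_usage:.2f} used, "
      f"possibly £{worst_case:.2f}")      # at least £2.00, possibly £3.00
```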

If you multiply this up based on the values and volumes that we see flow through FPS in the UK, we soon start to see that vast amounts of cash are simply “locked”; they cannot be put to any real use. For the banks themselves, these are funds that could be lent, put to work, earning additional revenue for the bank. I wonder, has anyone looked at the true “costs” of processing an FPS transaction?

Liquidity costs money

When we started ClearBank, one of the big challenges was understanding what liquidity would be locked away. When you service other banks, this gets even more complicated, as you need to take a bit of a “punt” on what money they will wish to move. Treasury must put real liquid funds into a collateral account to back the promise of a payment in FPS; thankfully the same is not true of CHAPS (RTGS). If you get this wrong, guess what: your payments stop flowing and your customers get very unhappy (and rightfully so).

For smaller FinTech players, trying to access FPS directly is pointless, though many are direct participants and no doubt many more will look to join. I say it’s pointless because a) you could just use the ClearBank API and get all the benefits of being directly connected, and b) you have to make sure you have your own liquid funds to back the promise of the payments your customers will make.

If you’re a small FinTech and your balance is, say, £5m, which is all your customers’ money, then how much of that are you willing to “lock” away? £1m, £2m, £3m? Well, whatever you lock, your customers can no longer access. So, if you have customers that want to pay away more than half their balance, you simply cannot make those payments. You have the £5m; customers want to pay out £2.6m? Well, they cannot, because you can only lock away £2.5m at most. The maths do not lie. So, you need to borrow money, or raise additional investment in your FinTech to support your customers’ potential payment flows. How much has that cost you? If you are borrowing it, then you are losing money at a macro level. If you are raising the funds, what have you lost in terms of your stake in the company as a founder going forward? That 15% stake that you have just used to fund customer flows, what will that cost you if you get to, say, a £250m valuation?
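
The maths is easy to check. A tiny sketch, assuming the whole £5m is customer money and that whatever is locked as collateral becomes inaccessible to customers:

```python
# If collateral L is locked out of a balance B, customers can access only
# B - L, yet outgoing payments are capped at L. The most you can ever pay
# away is therefore min(L, B - L), which is maximised at B / 2.
balance = 5_000_000
best_payable = max(min(locked, balance - locked)
                   for locked in range(0, balance + 1, 100_000))
print(best_payable)  # 2,500,000 - hence £2.6m of payments cannot flow
```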

For larger banks, the challenge is about lost revenue opportunities. Lock up too much liquidity and you are losing potential revenues; lock up too little and you cannot facilitate customer payments.

Cross-border is tough

When we look at cross-border payment flows, things get worse. For a bank, you must either have a credit line with another bank, or you are “pre-funding” accounts based on forward forecasts of what payment flows may look like (so you are locking up liquidity, or paying for credit). Things get worse still, because not only are you locking up liquidity with other banks, you are also having to lock liquidity away in your own base currency to cover the associated risk of another bank holding your funds. This could be anything between 7% and 30% of the balance you are holding with the other bank.
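
As a rough sketch of the lock-up this implies (the nostro balance and the 20% risk weighting are assumptions, the latter sitting mid-range of the 7% to 30% above):

```python
# Illustrative lock-up for pre-funding an account with a correspondent bank.
prefunded_nostro = 10_000_000   # funds parked abroad on a forward forecast
risk_weight = 0.20              # assumed; anywhere between 7% and 30%

# Base-currency liquidity locked at home to cover the risk of another
# bank holding your funds.
risk_capital_locked = prefunded_nostro * risk_weight

total_locked = prefunded_nostro + risk_capital_locked
print(f"Liquidity locked to support this corridor: {total_locked:,.0f}")
# 12,000,000 locked, before a single payment has even been made
```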

When we talk cross-border payments, the discussion gets confused between the payment message and the flow of funds. A cross-border payment message is pretty real-time (ish) if, and it’s a big IF, your bank already has funds in the destination currency, and both banks and jurisdictions are open. In such a case the payment message gets there in real-time(ish) and the payment will probably get processed in a similar time frame. SWIFT GPI will even show you the message flowing over the network.

In reality though, your cross-border payment journey was triggered by locking up liquidity and trying to get that right, based on a forward forecast. This is before we consider that your bank may not have a relationship with the bank that holds the end beneficiary’s account, or even with any bank in that jurisdiction. By overlaying these additional factors, by adding in “intermediaries” and correspondents, we must also multiply out the liquidity challenge across all the banks in the correspondent network chain, just to process my single payment.

Parting thought…

Locked liquidity and processing friction can be termed financial friction. This is estimated to cost you, me, businesses, the banks themselves, the global economy if you will, some $15trn a year. This cost is pretty much down to the challenge of managing liquidity, the overlooked component of payments.

Agile Series: Risk, Compliance and IA

This may be the most “controversial” article within this series, but it’s potentially the one that could solve the most headaches, and for a COO/CIO/CTO maybe the one that removes the most stress, if risk, compliance, and internal audit are on board…

The truth across many functions within a financial institution is that they have not been exposed to agile concepts, and as such, they will place dependency on typical known structured approaches, probably along the lines of project plans and Gantt charts. Risk and compliance functions do not historically require agile types of approaches, since much of what they focus on can be highly structured and highly process driven. The same is true of Internal Audit. Immediately you can see that, from a “culture” perspective, these are functions that are not connected into the agile engineering culture you are cultivating, and because of this, a fair amount of friction can arise.

So, what do I mean by friction? Well, typically risk and compliance want to ensure that everything is done to mitigate risk and to show that those accountable are the ones making the decisions. Unfortunately, this can mean they want to see (and even introduce) a highly structured approach to how software is delivered, typically in the form of what they are most familiar with: project plans, Gantt charts, committee sign-off and so on. These things very much break a continuous deployment pipeline.

Breaking the pipeline

Continuous delivery/deployment can potentially see software making its way constantly into production, hundreds if not thousands of components on a highly frequent basis, maybe even daily. Immediately this causes a challenge for departments such as risk and compliance. Most risk functions will want to see a consolidated point where testing was signed off by “the business”, by which they mean someone who has the authority and accountability to say “yes, this software works as expected, it may move to the next stage”. They want to see that conscious decision point, ensure it happened and be able to capture it as an auditable event, typically in a fashion that they are familiar with. But this breaks continuous deployment; it adds friction into the process and, depending on how you wish to execute this approach, may add a great deal of infrastructural and resource overhead.

The pipeline needs to remain continuous, but it cannot simply just happen; these functions require conscious decision points. Don’t fall into the trap of reverting back to “manual process” here, or a favoured delivery/approval meeting where people sign off on the process. No, this type of accountability can easily be captured and included in the delivery pipeline; you just have to start capturing more within your continuous deployment pipeline.

Team construct

Firstly, if your team construct is not correct then you will struggle to implement the types of controls and auditable points that I personally favour. Teams must be autonomous, and therefore, if someone such as a Subject Matter Expert or domain owner must provide approval for software to progress into production, then that person must be part of your team. You cannot afford to have individuals or groups of people outside of your team forming part of a separate deployment process.

With modern DevOps tools, you can introduce “gates” and “sign-off” points in your delivery pipelines. This means that your empowered team member is able to be part of that pipeline and provide sign-off for the software to continue onto its next stage. You can even include domain owners in the final sign-off process. These steps can be somewhat automated, with the necessary people receiving an alert when their signature (so to speak) is required. Only once they provide that consent will the automated pipeline continue.
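
Tools such as Azure DevOps support approval gates natively. Conceptually, a gate is no more than the sketch below; the function names and the audit log structure are illustrative, not a real API.

```python
import datetime

# Conceptual sketch of a sign-off gate in a deployment pipeline.
# `audit_log` stands in for the auditable event store the pipeline keeps.
audit_log = []

def approval_gate(stage: str, approver: str, approved: bool) -> bool:
    """Record a conscious, auditable decision point before the next stage."""
    audit_log.append({
        "stage": stage,
        "approver": approver,      # the empowered team member / domain owner
        "approved": approved,
        "at": datetime.datetime.utcnow().isoformat(),
    })
    return approved

def run_pipeline(decisions: dict):
    for stage in ["qa", "pre-production", "production"]:
        approver, approved = decisions[stage]
        if not approval_gate(stage, approver, approved):
            print(f"Pipeline halted at {stage}; sign-off withheld.")
            return
        print(f"Deployed to {stage} with sign-off from {approver}.")

run_pipeline({
    "qa": ("sme.alice", True),
    "pre-production": ("sme.alice", True),
    "production": ("domain.owner.bob", True),
})
print(audit_log)  # every decision captured as an auditable event
```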

A Subject Matter Expert who is part of the team will already have the information needed to judge the suitability of the software for the next stage. They will be able to review the automated tests, regression tests, even UAT results if you carry these out. They are, in many ways, the only people who can say if the software meets the requirement. As a team member they are also able to review the deployment process in full, providing sign-off at the identified stages. Each one of these stages is fully auditable and within the DevOps environment itself.

Segregation of duties

This is something that risk functions often raise as an issue when they understand that the engineering team itself is deploying into production. They question “access” to the production system and see that as a risk. However, being able to deploy your software is not the same as having access to production; the delivery pipeline has that access, not the individual. If individuals require access, it is granted temporarily and is always supervised, monitored, and maybe even recorded. Note that those who grant access are not related to an engineering team.

From a risk perspective, it is far riskier to have individuals with access to production who do not deploy software components daily. In such a model there is a great deal of human risk and a distinct lack of knowledge of the software and its behaviours. It introduces risk not only in terms of the software deployment, but in terms of ongoing access management to production.

Active involvement

In some cases, Risk, Compliance and even Internal Audit want to be part of the delivery pipeline, part of the sign-off process. This causes a great deal of friction in the process and, often, frustration. The process does not become any less risky; rather it becomes highly inefficient and, depending on the number of deployments, could cause backlogs in software deployments which in themselves could cause risk.

A possible solution…

Oversight of a highly transparent process with strong access management controls is the solution. We do not want risk, compliance, and IA to be part of the process; however, they need to have visibility and reassurance that only quality, approved software makes its way into production. This is all achievable while maintaining continuous deployment. Here is a bit of a framework that can be followed.

Delivery trains

A very simple concept. Software components must be on a particular delivery train to make it into pre-production and production environments. Delivery trains can run as frequently as you wish, maybe one every few minutes. The point here is that components get associated with a specific delivery train, and therefore the software journey becomes visible to more parties. Teams can only execute their delivery pipelines at the specific times that line up with the delivery train they are on, as sketched below.
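
A minimal sketch of the idea, with the train identifiers, departure times and component names all illustrative:

```python
import datetime

# Delivery trains depart at fixed times; a pipeline may only execute
# as part of the train its components are booked onto.
trains = {
    "TRAIN-0900": {"departs": datetime.time(9, 0), "components": []},
    "TRAIN-0915": {"departs": datetime.time(9, 15), "components": []},
}

def book(component: str, train_id: str):
    # Associating the component with a train makes its journey visible
    # to risk, compliance, audit and any other interested party.
    trains[train_id]["components"].append(component)

book("payments-service v2.3.1", "TRAIN-0900")
book("customer-api v1.8.0", "TRAIN-0900")

for train_id, train in trains.items():
    print(train_id, train["departs"], train["components"])
```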

By making delivery trains highly transparent, risk, compliance, internal audit and any other interested parties can see exactly what software is coming into the pre-production and then production environments. This is actually a great feature for relationship managers, marketing and other departments that will have an interest in any new software features.

The delivery train links seamlessly back to the software components’ release / delivery / deployment pipelines. Because of this, interested parties can quickly view any “gates” where sign-off has been requested. Those who sign software off onto the next step are clearly shown as an auditable event within the pipeline, providing comfort to risk, compliance, and audit that the right people are approving the release of the software.

Automated delivery and access management

Delivery into production may be started by an individual confirming the process may begin, but the process of deployment itself should always be automated. The delivery pipeline has the correct and required access to the production environment to deploy the software packages; the individual user(s) therefore do not require any access. Team members can review the progress of the deployment, but they do not have access to the underlying production environment.

Access management and control around the credentials used by delivery pipelines will all form part of that wider delivery experience, which should be documented within a good software development lifecycle paper.

Mitigating risk with rolling updates and/or blue-green environments

In the previous article we looked briefly at “blue-green” deployment, where both environments are production environments, with one active and the other not. To mitigate risk further, deployments can be made to the non-active production environment, some light testing can take place and, once all is happy, the production environment is switched.

Rolling upgrades also allow software to be upgraded without causing any downtime. Effectively, rolling upgrades see services upgraded and then assessed for successful deployment. At that stage, new software is running in parallel with old software; if the new software is working as expected, the deployment continues, gradually replacing all the older software components with the latest version.
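
In pseudo-Python, a rolling upgrade looks roughly like this; the instance list and health check are stand-ins for whatever your orchestrator actually provides.

```python
# Conceptual sketch of a rolling upgrade across service instances.
instances = [{"id": i, "version": "1.0"} for i in range(4)]

def healthy(instance: dict) -> bool:
    # Stand-in for a real health/readiness probe.
    return True

def rolling_upgrade(new_version: str):
    for instance in instances:
        old_version = instance["version"]
        instance["version"] = new_version      # upgrade one instance at a time
        if not healthy(instance):
            instance["version"] = old_version  # roll back, halt the rollout
            raise RuntimeError(f"instance {instance['id']} unhealthy; halted")
        # Old and new versions now run in parallel; traffic keeps flowing,
        # so there is no downtime while the fleet is gradually replaced.
    print("All instances on", new_version)

rolling_upgrade("1.1")
```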

Both these approaches provide a level of comfort to risk and compliance that software is being checked as it is deployed into production, and that services will not become unavailable to customers.

Transparent Oversight

Risk, Compliance, and Internal Audit effectively need oversight of delivery. Delivery trains provide that first step, along with the pipelines that move software through the various stages into production. Obviously, there will be training required to ensure these functions understand how to use the tools they are being presented with, some of which can look quite intimidating to someone who is not that technically confident. But it’s not just about transparency of the delivery pipeline, nor the technology.

Transparent oversight also needs to include oversight of the engineering processes: how software is built, how it is tested, the testing outcomes and the controls that are in place, automated or manual, that show that only quality software makes its way into production. Again, these functions will require some educating, but that is a good thing. By educating these functions you are empowering them to execute their role in a far more effective and efficient fashion, removing false areas of concern and removing the risk of these functions insisting on very non-agile processes being introduced. As the executive responsible for your agile efforts, this educational part is key.

Summary

In many ways, I see Risk, Compliance, and Internal Audit as part of the overall delivery experience. No, they are not part of the pipeline; no, they do not have the ability to say what goes into production nor when. Their role is not to dictate what a “pipeline” should contain, as they simply do not have the technical know-how to do that; rather their role is that of a watching brief, in many ways like that of a regulator.

A regulator wants to understand the real mechanics of what you are doing; they want you to explain the risks; they want to be educated and empowered enough so that they can try to identify risks and areas of concern. They want and need to be educated enough so that they may effectively provide some form of “challenge”, if they believe there is something to challenge. They observe, and when a regulator believes you are “off-course”, they provide some guidance on where/how to get back on course. This is exactly the way a modern risk and compliance function should work, playing its part within an agile engineering culture.

The next article in this series will look at how to effectively construct a product roadmap, how to ensure you continue to value delivery over predictability.

Agile Series: DevOps

In all of the articles that make up this “Agile Series”, a fair few of the same themes keep flowing richly through each article. This one is no different… If you want your teams to be autonomous, if you want your teams to build self-contained products, if you want to be able to value delivery, then you need your teams to be able to deploy their products seamlessly, consistently and in a rapid fashion, deploying from one environment to another and ultimately into production in a highly repeatable, robust fashion. Enter DevOps…

So, what is DevOps? Well, DevOps is simply the blurring of software development with what was traditionally IT operations and infrastructure. It’s not that long ago that your typical software development lifecycle had a massive dependency on your IT Operations department. IT Ops built out the underlying infrastructure, by which I mean physical machines, operating systems, networking and underlying components. They deployed software that solutions took a dependency on; basically, they built your environment, ready for your software to be deployed onto. All of this changed with DevOps. Essentially, think of all of these things as something that is achievable in code, in software itself: you write software that builds out infrastructure, builds out dependencies, compiles and deploys software, and even ensures certain tests are run on that software. Since it’s code, it’s repeatable and easy to deploy to different environments over and over again.

DevOps therefore enables many aspects of agile. It is a set of practices that, when they all come together, really do enable an agile engineering culture.

Infrastructure as code

The fundamental element of DevOps is that your infrastructure is built using code: you write software which builds the environments on which your software solutions will run. By being able to do this, we ensure that software is deployed into environments where the human element of risk has been removed, environments that are 100% repeatable, where variables cannot be missed from one environment to another.

Infrastructure as code therefore should form part of your actual software, part of any delivery within an agile environment. Infrastructure, after all, will either enable your software to work or stop it from being able to work. Ideally, therefore, your software should include the infrastructure on which it is to run.
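
Real tools here include Terraform, ARM/Bicep templates or Pulumi. As a purely tool-agnostic illustration, though, infrastructure as code boils down to something like this (all resource names invented):

```python
# Illustrative only: a declarative environment definition applied by code,
# so every environment is built the same way, every time.
environment = {
    "resource_group": "payments-rg",
    "app_service": {"name": "payments-api", "instances": 3},
    "database": {"name": "payments-db", "tier": "standard"},
}

def apply(definition: dict):
    # A real tool diffs desired state against actual state and converges;
    # here we just pretend-create each resource in order.
    for resource, config in definition.items():
        print(f"ensuring {resource} matches {config}")

# The same definition builds dev, QA and production identically,
# removing the human element of risk.
for env in ["dev", "qa", "production"]:
    print(f"--- building {env} ---")
    apply(environment)
```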

Continuous delivery

As an executive within a financial institution, if your engineering team is being “agile”, then no doubt you have been involved in discussions that talk of continuous delivery, continuous deployment, or CI/CD. Oh, and “pipelines”. Firstly, continuous delivery is not the same as continuous deployment. Continuous delivery is about the team’s ability to produce their software products in short, iterative cycles. As they develop, they ensure the software can be reliably released at any time.

To make sure software is continuously ready to be deployed, any changes, updates, bug fixes and enhancements have to be compiled and the software made ready. Continuous delivery is, in a nutshell, that process, but automated. So, as soon as an engineer is happy with their code, they save it, commit it back to source control and, hey presto, this triggers the compilation of the software and its integration back into the wider product. This part is continuous integration (the CI part of CI/CD). DevOps makes this all happen.

The “CD” part of CI/CD however can be continuous delivery, or, continuous deployment. The latter is really where you want to get to.

Continuous deployment

The difference with continuous deployment is that the software is continuously, and automatically, deployed. Now, this deployment doesn’t necessarily have to mean into your production environment; rather, the next environment on the path to reaching production.

There are many benefits to working in this way. As an autonomous, independent team, being able to continuously deploy your software is a massive advantage; benefits include:

  • Ensuring deployments are consistent.
  • Ensuring deployments are working as expected.
  • The ability to automatically trigger automated tests against software.
  • The ability to provide “gates” to ensure that only software that passes tests makes it to the next phase.
  • Ensuring environments are always up to date.
  • Delivery of product into production as quickly as possible.
  • The de-risking of deployment errors.

The steps software takes on its journey into production are often described as part of the “pipeline”, as in the software moves through the various steps within the pipeline on its way to production.

Some agile engineering environments even have pipelines that take software at the point of it being “committed” by the engineer all the way automatically through to being deployed into production.

I would always make this point regarding continuous deployment. Often, especially in regulated institutions, there is a fear regarding deployments: deployments are hard, and they are risky. Because of this, they are put off until they really must be made. Then, because they haven’t been done that frequently, the deployment will be hard, reaffirming the mindset that deployments are hard and risky. But, like anything, if we do it often enough, frequently enough, practice it, then it becomes easy. If you deploy software very frequently then deployments become easier, and because they are easier, they are less risky; now, because they are easier and less risky, you are happy to make more frequent deployments.

Agile engineering cultures are often quoted as making hundreds of deployments into production on a monthly, if not weekly, basis; some put the numbers in the thousands. The DevOps impact alone here puts paid to any form of “release” milestone in your PRINCE2 project management chart. DevOps makes this possible, but only if your architecture supports such independently deployable components, see the previous article.

Feature toggling

Software can make its way into production even when it is only part ready, simply by “switching” it off. By this I mean the feature (or product), which may not be working, or only partially built, but is deployable, can be deployed into production. However, as a feature it has been toggled off, making it unavailable for use within production.

You may ask what the point is of deploying into production software features that cannot be used. Well, the point is that the software is being tested to a fuller extent. It ensures the infrastructure, and everything that makes up that feature to date, is deployed as expected through the various environments and into production. It also allows you to get a feature (product) into production in its entirety, even test it fully in production, before making it generally available to customers. This form of toggling therefore de-risks deployments massively.
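
A feature toggle can be as simple as the sketch below. Real systems tend to hold flags in a configuration service or a dedicated toggling product, but the principle is identical; the flag and function names are illustrative.

```python
# Minimal feature toggling: code ships to production, but stays dark
# until the toggle is flipped.
feature_flags = {
    "new_payment_screen": False,  # deployed and testable, not yet live
}

def render_payment_screen(customer_id: str) -> str:
    if feature_flags["new_payment_screen"]:
        return f"new screen for {customer_id}"
    return f"old screen for {customer_id}"

print(render_payment_screen("c-123"))        # old screen: toggled off
feature_flags["new_payment_screen"] = True   # general availability
print(render_payment_screen("c-123"))        # new screen, no redeployment
```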

Environments

In your Software Development Lifecycle (SDLC), software always passes through a number of environments before it is deployed into production. These environments serve different purposes, but it’s crucial to remember that software must be working as expected before it moves on to the next environment. Typical environments may include:

  • Dev
  • Dev test
  • QA
  • Staged
  • Pre-production (or Golden)
  • Production

Since your software should be working as expected before being deployed into the next environment, automated tests are key to a streamlined process. Automated tests can be kicked off as part of the delivery pipeline, and only when these tests are passed may the deployment be seen as a success / complete. Some software may require manual testing; this can be carried out in QA, Staged and Pre-production. Only once ALL tests are passed should software move on to the next environment. This process may be triggered manually, but it is an automated process.
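
Conceptually the promotion rule is very simple. A sketch, with the test hook standing in for the pipeline’s real automated test run:

```python
ENVIRONMENTS = ["dev", "dev-test", "qa", "staged", "pre-production", "production"]

def automated_tests_pass(build: str, env: str) -> bool:
    return True  # stand-in for the pipeline's automated test suite

def promote(build: str):
    for env in ENVIRONMENTS:
        print(f"{build} deployed to {env}")
        if not automated_tests_pass(build, env):
            raise RuntimeError(f"{build} failed tests in {env}; promotion stops")
        # Only when ALL tests pass in this environment does the build
        # move on to the next one.

promote("payments-service v2.3.1")
```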

If we have feature toggling, then it’s advantageous to take your continuous deployment right into production.

Consistent deployments within environments

If our infrastructure is code and is part of our deployment pipeline, then every time we deploy our software, the infrastructure is also re-deployed. The infrastructure itself, how it is built, configured and deployed, is also being tested as part of our solution. When supporting so many environments, infrastructure as code saves your IT Operations many painstaking hours, while removing the margin of error.

Build and destroy

Entire environments can be defined using infrastructure as code. Because of this, it can take just minutes to re-create your system’s entire infrastructure, even deploy your entire set of solutions, all because everything is deployable as code. This has a number of obvious benefits, none more so than the ability to build an environment, use it for a short period of time and prove what you need, before destroying it.

Multiple production environments

Many organisations now run multiple production environments, typically one called blue, the other green. “Blue-green” is a deployment technique that reduces downtime risk simply by providing two identical production environments. It allows you to run production through one environment, say “blue”, deploy your upgrades, complete final testing on “green”, ensuring you are happy, before switching customers to the “green” environment. You then ensure “blue” is once again identical to “green” by simply redeploying your infrastructure and software.
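
A sketch of the switch itself; in practice the switch is usually a router, load balancer or DNS change rather than a variable, and the version numbers are illustrative.

```python
# Two identical production environments; only one takes customer traffic.
environments = {"blue": "v1.0", "green": "v1.0"}
live = "blue"

def idle() -> str:
    return "green" if live == "blue" else "blue"

def blue_green_deploy(new_version: str):
    global live
    target = idle()
    environments[target] = new_version   # deploy and final-test on the idle side
    live = target                        # switch customers over in one step
    # Redeploy the now-idle environment so blue and green are identical again.
    environments[idle()] = new_version

blue_green_deploy("v1.1")
print(live, environments)  # green is live, both environments on v1.1
```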

This type of deployment is only possible because of infrastructure as code, and DevOps.

Summary

In this article we have looked at some DevOps concepts: infrastructure as code, continuous deployment, build and destroy. But the key thing to remember as an executive is that DevOps is the blurring of software development and IT Operations, code replacing IT Operations functions.

Many principles behind agile engineering aren’t practical if you don’t have DevOps. That ability for autonomous teams and independent delivery is partly possible because of your software architecture, and partly because of an investment in DevOps. If your software architecture is your strategic approach, then DevOps is your ability to execute; both are required to operate a good agile engineering culture.

DevOps is massively interwoven into agile culture, but it is in part this desire for autonomous teams and continuous deployment of independent software components, made possible with DevOps, that is often at odds with many a financial institution’s Risk function. In the next article in this series, I will look at the role of risk, compliance and internal audit in terms of agile and DevOps.

Agile Series: Software architecture, an enabling factor

If you have the wrong software architecture, then your ability to really implement an agile engineering culture will be dramatically hampered. In previous posts, you will have noted that I outlined three core principles I like to instil within any engineering department.  These principles being:

  • Value delivery over predictability
  • Value principles more than practices
  • Autonomy is greater than control

Now, the theory behind agile and these principles is that individual teams are empowered to think for themselves, solve their problems, work in a way that suits them best and focus on delivering products into production. Note that I have put emphasis on individual teams. For this to happen, it means you need to have a software architecture in place that allows teams to build products that can be delivered independently of other teams/services, as much as possible.

In the very first post we talked about decoupling and distributed domains. We walked through the importance of strong cohesion and applied that to our teams. Now we need to apply exactly this concept back to our software architecture.

Strong cohesion refers to the degree to which the elements inside a module belong together, and the stronger your cohesion, the more decoupled your module can be. In an ideal world, this would mean everything that lives within a module is pretty unique and wouldn’t be required by another module. However, in the real world, that’s not the case. Often we will find that the same functions needed in product one are also needed in product two. Traditionally, this meant we might see product one interacting with product two, effectively binding the two together, one being dependent on the other. The more this happens, the less likely you are to have individual teams able to work independently of each other; you are effectively creating a dependency on other code and another team.

Now, this post is not going to push a given architectural model onto you the reader, nor is it going to delve into the real detail of software architecture; that’s for you to engage with your engineers and architects on. No, this post is to help ensure the executive, especially a CTO/CIO/COO, has the right concepts in mind when engaging with engineering about design and delivery.

Fundamentals

In any software architecture, it’s important to set out the fundamentals of that architecture, and therefore the fundamentals that must reside within each engineering team. For me there are three fundamentals:

  • Security
  • Availability
  • Performance

Whatever you do, as an engineer and as an engineering team, you must ensure you develop secure code and products that have high availability, are highly robust and are able to meet the performance demands of your users.

Fundamentals should always be looked at as something to improve, constantly. In doing so, you move your product along with new innovations made on the underlying platforms, you migrate to new technologies, you look to take advantage of new concepts and, ultimately, you eradicate legacy components from forming within your systems.

Microservices

This is a very common term now in software architecture, and rightfully so. However, there isn’t a specific single definition of microservices. Search Wikipedia and you will get a consensus of what they mean. For me though, what I would say to any executive / manager is that the only thing you need to take away is this:

Services are small in size, messaging-enabled, autonomously developed, independently deployable.

In this single definition we cover off pretty much everything we need to know.

Microservices are small; they focus on providing just a few functions/capabilities back to the overall product. Being small, they are focussed, which makes it easier for them to be independently developed within a team and independently deployed by that team. If they have few dependencies (on other microservices) then they can almost be deployed at will, anytime. This means microservices have a few game-changing capabilities that any architecture should follow:

  1. Isolated, easier to maintain, evolve and even replace
  2. Scalable in their nature

Point one focusses on our core principles: delivery over predictability, and autonomy is greater than control. Here microservices allow teams to focus on delivery, pushing services into production independently of other services, independently of other products and independently of other teams. The second point focusses on our need to scale out solutions.

Typically, traditional architecture would have a great deal of focus on performance testing, bandwidth and bottlenecks within your solution. However, these considerations are now focussed down to specific areas of your architecture, these being individual services. This level of focus makes it easier to ensure your solutions perform better. However, the point here is that a single service may be able to perform X tasks in a second. But, since it is independent, I should be able to run multiple instances of the same service to effectively improve my performance, capacity and availability.

To illustrate this, let’s think of a service which can process one payment per second. This is its maximum performance capability. You may be thinking, “well that will never work for a bank, we process thousands of payments per second”. However, since my architecture is set up for horizontal scale, I can simply deploy a second instance of the same service. I am now able to process 2 payments per second. So if I want to process 5,000 payments in a second, from this service’s point of view, I simply deploy 5,000 instances of the same service. This is horizontal scale in action.

In a cloud environment, this is quite easy to do: scale up, and then scale down, saving money on compute and storage needs. So while I may need 5,000 payments per second to be processed for an hour each morning, I may be able to scale my systems down to just 500 per second during the course of the day, and maybe down to just a handful overnight. In addition, I have an added resilience benefit: if a handful of my service instances stop working, the system is still processing payments. I can also replace “broken” service instances with new instances, ensuring my systems are highly resilient.
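
The arithmetic generalises neatly; a tiny sketch using the figures above:

```python
import math

def instances_needed(target_tps: float, per_instance_tps: float) -> int:
    # Horizontal scale: capacity is simply instances x per-instance throughput.
    return math.ceil(target_tps / per_instance_tps)

print(instances_needed(5_000, 1))  # 5,000 instances for the morning peak
print(instances_needed(500, 1))    # scale down to 500 during the day
print(instances_needed(5, 1))      # a handful overnight, saving compute costs
```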

Fabrics and containers

If we understand the concept behind microservices, then we can look at the concept of fabrics and containers. Both of these technologies allow your microservices to be deployed across multiple servers, removing the dependency on a single server. So, just as we did with a microservice, containers and fabrics allow us to run many more instances of our services, scaling them up at a server level, again providing us with greater performance, capacity, and availability.

We are even able to deploy our containers and fabrics over different geographic compounds within the Cloud. For example, using Microsoft Azure Availability Zones, we can deploy our microservices across an array of servers running across three independent availability zones, which are actually three different compounds geographically separated by a number of miles, each working in an active:active:active fashion. Therefore, even in the unlikely event of an entire compound becoming unavailable, our systems still operate seamlessly.

Modern architecture almost always leverages a form of microservices running within a container or on top of a fabric.

Shared services and packages

So far, we have talked about the need for strong cohesion. In the real world we know that there will be dependencies on shared functions, simply because we know that coding out the same function time and time again is not good practice. Why have multiple teams write the same code and maintain that very same code? In order to solve this, while still maintaining strong cohesion, we have two options:

  1. Create shared services
  2. Allow teams to take “packages” of shared components and embed them within their code base.

Firstly, there is no right or wrong answer here; it’s all down to what you are trying to achieve. However, at a macro level, shared services, when updated, are made available to all other areas of your solution that use them, as in they all now use the updated service. This does weaken the cohesion within your platform; however, if the services don’t change that often and are able to scale out horizontally, then this implementation may be a wise option.

However, many more teams and architectures now utilise shared packages. These packages are effectively the service itself; however, it is “added into” your own code base. A package is therefore pulled in, which means multiple products can use the same shared functions, but each has its own copy of that function, enabling strong cohesion and strong decoupling to persist. There is an additional upside, and a downside, to this approach. The upside is that if the shared function moves on and new versions are released, you don’t necessarily have to keep up with that release cycle. That decouples your teams to an extent. The downside is that you don’t want your code to be stuck on an old version, especially if a bug is found. So, all other products across your platform that use a package need to ensure they upgrade their references to the new version of that package and re-deploy themselves. This means a single package update could lead to multiple updates and deployments elsewhere within your system.

Domain Driven Design (DDD)

In an earlier post we looked at Domain Driven Design. For me, this is about stepping back further and further from the real underlying solution. So, if microservices are our low level, then stepping back up, we see their dependency on the same principles within fabrics and containers. Stepping back further, we see how products are forming, and the need to keep things autonomous amongst our teams. For me, this is where DDD blends architecture with the teams that build the solutions.

DDD has a strategic design section which really maps the design of your products back to the teams that build them, which in turn influences what code is built and “where” it resides in your overall architecture.

Bounded contexts deal with larger models and are highly explicit with regards to their relationships with each other. I personally like to use “domains” to help drive out other management concepts, such as departments, focussing on the context of the products that are to be built. I also use the term domain for “sub-domains”, as in other smaller products/modules/services within a given contextual domain.

I think a key thing the executive must remember with domains is that while, ideally, they should be independent, they will share some entity context; for example, a sales domain will have a customer, just as a support domain will have a customer. However, that does not mean that one domain takes a dependency on the other’s implementation of a customer; rather, each will have its own implementation of a customer (which may be the same if shared through a “package”). The same applies to the “data” behind a customer. Both domains may hold different data on a customer, data that is needed for their specific areas, and both may hold some of the same data, take a name for example. However, that name data will be “duplicated”, so it resides in two very separate databases, one dedicated to the sales domain, the other to the support domain. Think of this data as distributed. This replication of data isn’t anything to be afraid of; rather, it allows the domains to remain independent, autonomous and decoupled.
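
In code terms, the sales and support domains might each carry their own customer model, deliberately duplicated rather than shared; all names here are illustrative.

```python
from dataclasses import dataclass

# Each domain owns its own Customer implementation and its own data store.
# The name is deliberately duplicated; neither domain depends on the other.

@dataclass
class SalesCustomer:          # lives in the sales domain's database
    customer_id: str
    name: str
    pipeline_stage: str       # data only sales cares about

@dataclass
class SupportCustomer:        # lives in the support domain's database
    customer_id: str
    name: str                 # same data, held twice, and that is fine
    open_tickets: int         # data only support cares about

sales = SalesCustomer("c-123", "Ada Lovelace", "negotiation")
support = SupportCustomer("c-123", "Ada Lovelace", 2)
```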

APIs

Application Programming Interface (API) is a term that everyone within the financial services industry is getting to grips with. At an executive level, at 50,000 feet, an API is basically an interface that allows another bit of software to call a component and tell it to do its job. Simple as that.

What we must remember is that while APIs are very powerful, we should not always be looking to have components dependent on calling APIs. Why? Well, for a simple reason: you are dependent on that API, and yes, APIs can change and therefore your software can break. Limiting the dependencies on APIs within your own solutions is therefore becoming increasingly important, especially if you really want to maintain autonomous teams and services that can be deployed independently. You don’t want a change to an API to break other areas of the system, so APIs need to be kept to a minimum and have strict change control associated with them.

So, while financial services is starting to embrace everything API, the world is moving on at a rapid pace, replacing the direct calling of APIs with event orchestration…

Event Pattern Model or Event Driven Architecture

Events allow us to capture specific moments within our software; these can also be seen as specific business moments, such as a transaction being initiated. Think of a bit of software that produces an event, say a customer initiating a transaction. The software raises that event, it’s the producer of the event, and the event is posted onto an event broker. The broker has a consistent interface, so it doesn’t change; however, the data you post onto the event broker will. Other software components subscribe to that event, so when it’s posted, all subscribers to that event receive it and can process it accordingly.

The beauty of this model is that you can have multiple software components listening for the same events, enabling parallel processing to take place. It also means that your software components aren’t directly linked to each other; effectively, in place of using a direct API, your software posts to a consistent event broker, ensuring further decoupling from other services.
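
A toy in-process broker shows the shape of the pattern; a production system would use something like Azure Service Bus, Event Grid or Kafka, but the producer/subscriber roles are the same.

```python
from collections import defaultdict

# A toy event broker: producers post events, subscribers receive them.
subscribers = defaultdict(list)

def subscribe(event_type: str, handler):
    subscribers[event_type].append(handler)

def publish(event_type: str, payload: dict):
    # The broker's interface never changes; only the payload does.
    for handler in subscribers[event_type]:
        handler(payload)  # every subscriber processes independently

# Multiple components listen for the same business moment, in parallel,
# without ever calling each other's APIs directly.
subscribe("transaction.initiated", lambda e: print("fraud check:", e))
subscribe("transaction.initiated", lambda e: print("ledger update:", e))
subscribe("transaction.initiated", lambda e: print("notify customer:", e))

publish("transaction.initiated", {"customer": "c-123", "amount": "1.00 GBP"})
```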

This event pattern model is rapidly becoming the “norm” in software architecture, especially within Cloud environments. It is a trend that is growing when we look at how third-party applications interact with our own, be that receiving or sending data. The benefits of using an event broker between your own software components/products/sub-domains/domains are exactly the same when we look to interact with third parties, allowing third parties to subscribe to events our system raises, or to initiate events by posting onto a broker.

Summary

An event pattern model really does enable that “domain” approach to be taken by your engineering teams. From an agile perspective, following that domain approach and implementing an event pattern model really enables team autonomy, removing dependencies between teams. This is further achieved by using packages for shared components, building microservices and deploying them within containers. While these architectural approaches ensure your teams can be agile and follow your agile engineering culture, they also give you the right architecture to be able to scale to meet customer demands and capacity requirements, and to remain highly available and robust. In the next article within this series, we will look at DevOps and why it is the brother of agile…

Agile Series: Motivating engineers

If you have spent most of your working life outside any form of IT department or engineering team, you probably won’t have come across many of the personalities you find within those areas. I have been a software engineer since graduating from university, and I can safely say that we are a unique bunch, more so when compared against a cross-section of a financial institution. I am always amazed at how many “managers” of IT-based functions have never been engineers for any substantial amount of time; in most cases they have never been an engineer at all. For me, this is a worrying sign for various reasons, primarily the lack of understanding of engineering effort, the lack of understanding of engineering complexities and the lack of understanding of the personalities that can be found across engineering teams.

Often, a lack of engineering experience within more senior or management functions within IT, especially within financial services, leads to the embedding of misplaced values when it comes to agile and agile culture. In addition, the lack of experience interacting with the personalities that make up IT also leads to an inability to keep engineers fully engaged, fully motivated and firing on all cylinders.

Expectation

Often, we have an expectation that teams and individuals are fully motivated, motivated because of the remuneration they receive. Within financial services this seems to be a very common mistake. Sure, remuneration is part of why an engineer stays with your company, and yes, it may motivate them slightly; however, it’s actually a small part of the overall picture. Sadly, however, remuneration is often perceived to be the only motivating factor.

For engineers there are other factors that motivate them. If the executive needs hard proof, then remember this: the majority of engineers have the allure and option of “contracting”. This takes money out of the equation, as more often than not, engineers move from role to role as contractors for the same daily rate. So it is clear other things motivate engineers.

For me, working on new technologies, the challenge of solving a problem that no one else has yet looked at, building a proof of concept, proving you can make a solution work: these are all motivating factors. This is true for almost every engineer out there, I’m sure. However, this isn’t always enough either.

So, what does motivate?

Engineers are, as I said, unique, so it should not be surprising that the things that help keep them motivated aren’t similar to how you may motivate other functions across your organisation. From my 20+ years of engineering, here are a few key areas that help to ensure engineering remains motivated.

Autonomy

Engineers are highly intelligent, highly logical people. They are problem solvers, so it is imperative that they are empowered to solve problems and to make their own decisions. Teams must be autonomous to ensure they remain motivated. One of the challenges that the executive must overcome is centralised thinking. All too often, at that management/executive level, decisions are made regarding the definition of “what the problem is”, and therefore “how to solve it” is also decided. This thinking flows down and ultimately removes that autonomy from engineering teams; after all, they are being dictated to on what to build, when to build it and how to build it.

Autonomous teams should have all the skillsets and specialists within them to best solve the problems the team will face. Sure, a strategic approach will be set at the executive level, from which “product” can be derived, but that product is only derived by engaging with teams, and individual team products are designed, solved, and built by autonomous teams, keeping aligned with the bigger picture and strategic goal.

Engineers have an inherent need to use their brains and solve problems; if this is taken away from them, their motivation soon follows.

Mastery

Engineers take pride in their work, fact. Sometimes far too much so, which leads to some internal friction within teams and can also lead to scope creep within your product, but that is all manageable (and another reason why you need a good coach for your team – see the previous article within this series). Management therefore needs to embrace the fact that engineers love to master their trade and remember that they need the space and some time to be able to do this (within reason).

Mastery coupled with autonomy is why we see so many engineers heading home after a long day, only to switch on their personal machines and contribute to open-source codebases. They want to get better at their trade, they enjoy their work, they enjoy the autonomy of solving a problem, and they enjoy the concept of building something that is engineered well. Management must understand that these engineers aren’t getting paid for this work; they do it because they want to contribute to something they believe in, because it challenges them, because they can problem solve, and because they get the opportunity to master their trade. Money is not on the table here…

Updating your core values

If you look at your core values across your engineering culture, do you call out autonomy? In the previous article within this series, I provided two core values, delivery is greater than predictability and principles are greater than practices. Here is a third:

“Autonomy is greater than control!”

Bring this into your core values and ensure your teams get to be as autonomous as possible, with the ability to define problems and solve them, and the time to master their trade and bring quality products to production.

Communications and alignment

With autonomy come some challenges; however, these are solved very simply, by involving teams in the process of product definition. Your executive and board may set the strategy, but the conceptualisation of a product needs to involve the teams. This gets them all aligned with the bigger picture, so that when they define individual product, when they solve their problems, they are all working towards the same end goal.

Great leadership understands that great communication and inspiration leads to great alignment.

Consistently challenge and stretch

Pretty much all engineering efforts have an element that isn’t challenging for the engineers; it may be time-consuming but not necessarily challenging. This can de-motivate individuals and make things feel like a bit of a long slog.

In my experience, engineers need to be constantly stretched in terms of their problem solving, so they need fresh problems on a regular basis. In a product-driven world this isn’t always possible, which is why so many engineers move from organisation to organisation; they are seeking that constant stretching of their mind.

To overcome this, it is often worth thinking about what innovation is. Within an organisation, especially one within banking, innovation is limited to the products that are to be brought to market, which are aligned to a specific strategy. However, there is always space for additional innovation, new ideas, concepts, out-of-the-box thinking. So why not provide engineering teams with the opportunity to innovate, to come up with their own ideas?

In a few instances now, I have implemented an innovation day, or days. These are days, typically once a month, where all teams can take a break from their daily efforts and set their own work; think of it as a one- or two-day hackathon on work that the engineers have set themselves. At the end of the hackathon, they demonstrate what they have built to all other engineers, topped off by a bit of a social gathering. What is great here is that you can flex the concept of an innovation day – but it encourages engineers to challenge themselves, look at new technologies and build product that would never have been built normally. In many cases, the “winners” from an innovation day have their products moved into the product backlog and help enhance the overall product offerings.

Unfortunately, an innovation day like this can often be looked at by the executive as a “costly” day, or wasted cash. This really isn’t the case and is a very narrow view of what is actually going on: with an innovation day, engineers are being stretched, they are becoming revitalised within their job, they are forming social relationships (something that always needs promoting within engineering), they are less likely to want to move organisations, and they come up with ideas that, in many cases, enhance the financial organisation’s overall product offering. This is nothing but a win all round.

Summary

Motivation is not always about cash, especially with individuals that can seek challenges for the same money elsewhere. Engineers are motivated by the challenge and the gratification they get from overcoming that challenge; they are motivated by making their own decisions, solving problems in their own way and having the time to master what they do. Smart people like engineers need to be constantly stretched, so look for ways to stretch individuals’ minds outside of the day-to-day job.

Motivated teams function much more effectively, they are far more productive and they ship better products with fewer bugs. Keeping your teams motivated is probably the most important aspect of a COO/CIO/CTO’s responsibilities to engineers. To date, this agile series has very much focussed on creating and maintaining the right “agile engineering culture”. Moving forward, we will start to look at how software architecture and design patterns help enforce an agile engineering culture. We will also look at the role of specific functions, from DevOps to RISK.

Agile Series: Applying values and principles

In the previous article we looked at underlying core values and setting principles to help create the right environment in which a good, effective agile engineering culture can grow. In this article we will look at how you can apply these to really build a great engineering culture.

You need to coach a team

Like many people I have long followed and played in many a team sport. There are so many aspects of sport that promote being part of a successful team, not just a successful sports team, but a highly productive, successful team within the workplace. I personally have learnt much more about agile concepts, what it means to be part of a team, even how to deal with personal agendas by playing basketball than I have working within the financial services sector.

In sports every team has a coach. Football teams have many coaches, basketball teams have coaches, volleyball teams have coaches; my list could go on and on. Coaches bring with them different talents, different experiences and different beliefs; however, all have the same drive: how best to get performance out of the team. Your engineering teams are no different.

Values and principles may be the foundations of your engineering culture, the principles that guide your engineering teams, but there are other key building blocks that you need to really drive performance. Because of this, an engineering team is just the same as any sporting team; it needs a coach to help get the best out of them as individuals, and the best out of them as a collective.

Each coach needs to bring with them certain tools to help teams succeed, but these tools can be very common across all your teams. Because of this, I personally like coaches to set “mottos” that the team abides by. Now, following our concept of “principles are greater than practices”, not all “mottos” should be the same for each team; however, there can be a core that all teams abide by and follow. Here are a few I have found to be useful when dealing with engineering teams (and the personalities within them):

  • Leave your ego at the door
  • Do the right thing
  • Team first
  • No politics

Your agile engineering coach can instil these in team members, and no doubt others. The great thing here is that these “mottos” are simply building blocks upon your core principles, aimed at ensuring individuals really buy into and believe in the culture.

Chapters, Charters, Guilds…It’s about shared knowledge, not control

One of the common issues I have heard within financial services is that agile engineering lacks central control. If, for example, you apply principles over specific practices, then you will have engineering teams working in different ways, so how do you control them? How do you ensure they are building the right things, working in a way you can trust, meeting the standards you hope to set? It’s not just financial services that suffers with these topics; many IT departments struggle to reconcile consistency with flexibility. Many believe the issue is one of control; however, it really is about shared knowledge and alignment.

For example, how do you as a CIO/COO/CTO know that Quality Assurance (QA) efforts are consistent, that the QA performed on product released by one team was of the same standard and followed the same patterns as that of a different team? How do we get that level of comfort and confidence when each QA is in their own team, working as part of that team and not as part of a QA department? The answer is shared knowledge and enforcing cross-team relationships for specific issues. There are many ways of achieving this, but the most common is using a “Chapter” and a “Guild”.

Chapters / Charters

Chapters and Charters are very similar; however, I personally prefer the use of a Charter to a Chapter. I prefer the word Charter because of its historical definition: Charters wield great power and freedom, and for me the word is more aligned with accountability. At the end of the day, they are pretty much the same. The shared concept is that a charter/chapter will:

  • Set guiding principles
  • Set specific practices
  • Set specific standards
  • Set specific methods

Now this may seem to go against the concept of principles over practices, but it really doesn’t. Here we have a grouping of skilled individuals applying their skill in a consistent fashion, based on shared learning, shared understanding and a shared desire to do the right thing. This means that though the individuals within the charter may work within very different teams, the skills they bring to that team, their practices, standards and methods, are largely consistent. A charter therefore ensures you have that consistency and control across distributed teams, teams that could work in very different ways.

Guilds

These are far less formal; they do not have “power” as such, and they are not accountable or expected to meet certain standards. Rather, guilds are about knowledge sharing and wider training. Individuals in teams often belong to a single Charter; however, they may be part of many different guilds, guilds that share knowledge and provide training on areas that the individual finds interesting but doesn’t deal with on a day-to-day basis, nor form part of their expected skill set.

While a Charter may have a “lead”, Guild leads are more aligned to being evangelists and mentors; they do not form some kind of reporting line.

The following diagram is from Spotify. It clearly shows how Spotify implements a formal Chapter across each of their teams, in their case termed “squads”. While the Chapters are clearly structured, you can see the less structured nature of a Guild in action.

Reflection…

So far in this agile series we have looked at the relationship between software concepts, concepts such as “cohesion”, and how they can be applied to thinking about engineering departments and teams. We have looked at how core values and principles can help drive the right mindsets and form the foundations of a solid agile engineering culture. In this article we have looked at how to embed that culture, how coaches can add key building blocks to help individuals work well within teams. We have looked at how the introduction of Charters and Guilds provides alignment, standardisation, and consistency in key skillset areas within teams. All of this is leading, in theory, to a highly productive and successful agile engineering culture. However, before we get into the specifics of how this all fits together and can work beautifully within regulated business, we must look at how we keep highly intelligent people motivated.

In the next article we will look at some of the challenges of keeping engineers motivated, an aspect that is almost always overlooked / ignored.

Becoming a CTO

It’s funny how the new financial year brings new opportunities, new challenges and, at the same time, a period of reflection. In the past week alone, I’ve had several friends and people I’ve worked with in the past reach out to discuss “what makes a great CTO”. Now, some are already in a CTO role; for others, it’s a role they are hoping to move into in the coming weeks. After handing out my humble opinion, one simply said, “you know, if you had posted about this, I wouldn’t have had to pester you.” A fair point. So, I thought it was time that I shared my thoughts on what makes a great CTO, thoughts based on working with some pretty inspirational CTOs in the past and, I hope, valuable insight from my own experiences over the past 8-10 years as a CTO.

The definition

There are lots of different views on what a CTO is: their duties, the areas they should focus on, even what they should be strong at. This is pretty clear when you put a group of us in a room at the same time; some are uber technical, still coding daily and really part of the low-level engineering effort, while others are far more business managerial and simply aren’t technical at all. For me, a great CTO must have come from a technical background. They have to have been on that journey of being a software engineer, someone who has worked on making their code efficient, secure, robust and re-usable. They have to have been responsible for identifying the right technologies and the right infrastructure, and to understand how to architect out enterprise-wide scalable solutions while at the same time enforcing software development patterns and lifecycles that work well. But here is the rub: many engineers who are great at these things simply do not want to take on the other challenges that you need to have experienced to make a great CTO. Many technical roles can be solitary roles, and there is nothing wrong with that; however, a CTO isn’t one of those types of roles. For me, you have to then go on that journey of learning how to lead a team, how to structure engineering team efforts, and how to create and cultivate a great agile engineering culture. Then we have the business side of things: you have to gain experience of the business, almost become, or have the ability to be, a sort of Product Owner or old-school BA. Once you have done these roles, you’ve gained the experience and foundation of what makes a great CTO.

You see, a CTO for me is the one who is at the sharp end of technological innovation and product development within an organisation, even more so if the organisation is a technical company or has a technology platform at its core. The CTO is responsible, scratch that, accountable for creating a solid engineering environment and team mentality; they are responsible for ensuring engineering follows the 3 fundamentals, and they are the ones ultimately responsible for building a great technical product. They will help shape technical strategy to the point of forming actual strategy for the organisation itself, something that many “business”-focussed individuals will disagree with. But remember, almost every business in the world now has its success governed by its technical prowess, its IT capabilities and strategy…It’s a fact.

In addition to all of these areas, a great CTO must also be an inspirational leader, a great public speaker and, in many cases, quite entrepreneurial.

Building a team around you

This challenge is very different from organisation to organisation. Some of you will be required to build a team from scratch; others will inherit a fully functioning engineering department and senior team. So the challenge in execution can be varied; however, the objective and outcome are the same.

A great CTO will look to get the right talent around them straight away. For small teams this may be a strong engineer who can double up as an agile-type coach / scrum master; for larger teams, it may be to identify those with some leadership qualities. I personally like to get a structure of a small number of uber-talented people around me. I look to have senior architects who form a “guild” to ensure strong alignment amongst the people who really structure the product from a technical point of view. For scale, or when you need to scale teams, I believe you need to start looking at “Heads of Engineering” across the areas of real challenge. The key then is not to enforce an engineering culture or structure, rather to collaborate with those senior people you’ve now got around you to get something in place that you all like.

I’m a strong believer in a good, clear agile engineering culture with autonomous teams, but here you need to view leadership as being about alignment, not dictating or bossing people about, nor looking over their shoulders. Working in this way has many, many undisputed benefits; however, it can be challenging depending on your organisation’s expectations in terms of reporting, or their own mindset / culture. The benefits outweigh the challenges every time though, IMHO. Setting a culture of autonomous teams empowers your engineers, keeps them challenged and motivated and, at the same time, instils pride in what they do.

As I said, I am a strong believer in “guilds” and “charters”, having groups of people coming together to help move engineering efforts forward. I also believe that having group meetings every other week, almost like a sprint planning session, works well with your senior team. In the past, I’ve been set against people having things on paper in these meetings; however, in recent months I’ve found that it’s beneficial to all collaborate around a “OneNote” document, or a tool like Monday.com. Though, if you can keep the mentality of a “stand-up”, I believe it keeps your meetings down to around 20-30 minutes, and let’s face it, for a regular update and alignment session, that should be enough.

Setting the culture

Engineering culture is something that isn’t set in stone; a great CTO should always be looking to see how it can be improved, how to iteratively get more from the structure and culture that is in place. You should be asking yourself: what could be changed to make life easier, or more productive, for my teams? As Churchill once said, “To improve is to change; to be perfect is to change often”, and that’s how we should think about engineering culture.

IMHO you have to follow an agile engineering culture, deliver as much empowerment as possible to your engineers and their teams, “coach” them rather than dictate, and help them remain focussed on engineering tasks and not administration. I used to promote the use of Scrum Masters; however, in recent years I’ve moved away from any specific agile practice and adopted a more agile, principled approach, meaning that I recognise that each team is unique, and therefore each team’s way of working can be different. This is true agile for me: ensure agile principles are valued more than any specific agile practice, and to do this, I believe you need to leverage agile coaches as opposed to a Scrum Master.

The right culture will keep your teams motivated, keep your engineers engaged and ensure they take pride in their work. All these things ultimately lead to better quality code, better execution of product deliverables, shorter delivery cycles and, ultimately, happier customers. I’ve been painfully aware of this for some time, and it’s the single greatest reason why I promote things like an innovation day, or two days, across the entire engineering department. An innovation day is all about letting teams form themselves, letting them innovate and build simply whatever they wish. It taps into the same drivers that lead us to see engineers working hours on end on open-source projects for free; it taps into something I call “Mastery”. I say this because engineers love to master their trade; they want to get better at it and they ultimately enjoy that. If engineers become stale in the workloads they have, and let’s be honest and open here, lots of work can be repetitive and unchallenging for engineers at times, then they won’t take as much pride in the work they deliver, or will simply look for challenges outside of your organisation. So, setting aside a day or two each month for pure innovation gets those creative juices flowing and promotes teamwork, collaboration and engineering mastery. Though this may seem like a waste of the company’s money and time, it’s actually highly productive, and in most cases, teams will build something that really should end up on your product roadmap.

Oh, a beer afterwards (virtual from home or face to face) could also help the culture.

The product

No matter the business or the type of organisation, view everything as a product, and never ever ever let anyone start talking about a project. I know this could be seen simply as language; however, from language comes a certain way of thinking. If we think product, we think of re-usable solutions, off the shelf, needing at most a bit of configuration before they can be re-used and the business enjoys revenues from them. If we think of a project, we focus purely on what is needed for this particular requirement or customer. As soon as you do that, you have killed any benefits of the work you are undertaking outside of that specific customer need. In addition, a project should have a clear start and end date; that feels very waterfall to me. In contrast, a product needs to be maintained and can be iteratively improved over time, ensuring customers continue to use that product and, more importantly, new customers are drawn to it and start to use it too.

The big challenge for any CTO is getting those business requirements into the engineering teams accurately. There have been so many ways of doing this over the years, but I would simply say it’s largely down to having the right talented people in the seat, as opposed to following any specific description of a job role. Feel free to use a BA or a Product Owner, but ultimately there are two things you need your teams to remember:

  1. Understand the customer problem statement, their need, their pain.
  2. Do not blindly follow customer recommendations; challenge their thinking and ask why they recommend this.

I believe in design-based thinking, with the customer at the centre of everything we do. However, you have to understand the pain and think of a clean new solution, rather than get caught up in improving what they have. This is tough to explain, but Henry Ford once said (though this may be more legend than fact), “If I had asked people what they wanted, they would have said faster horses”. I’ve spent many years around tech solutions which essentially take what the customer has and make it a bit faster, or a bit simpler. How many times do we find workflows in systems that mirror good old-fashioned manual processes? Work is now just in my digital in-tray as opposed to my physical paper-based one on my desk. This is technology being used in the wrong way; yes, it sort of solved the problem, but it didn’t innovate, and it didn’t deliver as it should have.

As a CTO, you have to inspire around this area, and you have to get people into those positions who can think outside of the box. I often like to get people involved who are not part of engineering, nor even part of the business; their experience is in other areas, and you soon see different methods of thinking and potentially very different solutions. As a CTO in this area, I believe you have to be quite disruptive in your thinking, challenge the proposed solutions and try to ensure the people around you think out of the box. You have to be almost an entrepreneur…

Technology strategy

IT strategy is one of those things that is interpreted in a different way by almost every exec team I have ever worked with. Some like to see a pure strategy, as in: let’s see the strategic decisions on the technologies and platforms that will be used, and what strategic benefits the business and IT should get from those decisions. That’s what I call strategy. However, some want to see specific plans on execution: plans, timelines, what challenges could be expected, etc. For me, this is far too much detail for a strategy; really, that’s a separate piece of work on implementation.

The best generic advice I feel I can give on forming your IT strategy is this:

  1. Know how you want to operate your platforms and what support staff you think you will need to maintain what you deliver
  2. Know when to Buy, Partner or Build
  3. Assess the chances of an innovation dead-end and clearly avoid that

Most of your strategy will be based on these three main points. First off, how do you want to operate and what support staff do you want? This really does make you start to think about the types of solutions, scalability, performance and ongoing support. A great example is looking at Cloud vs on-prem and, if you are opting for cloud, then PaaS / SaaS vs IaaS. Once you make strategic decisions here, it starts to ensure you make better decisions later on. Take point two, knowing when to buy, partner or build: your strategic decision for point one will have a massive bearing on your decisions here. One thing I must add: when you are considering whether to buy, partner or build, think of two things:

  1. Where do I want to spend my IT budget, or “dev dollars” as I am often quoted as saying? What will give me the best bang for my buck?
  2. Do I see this as core to our proposition, or in some way just causing dilution / lack of focus?

I think if it is core to your proposition, as in you can make an impact on customer outcomes and add value to the valuation of the company, then yes, you’ve got to build. However, if it isn’t, and there is nothing really unique to it, then purchase. When partnering, you’re looking at something that adds value to your own proposition, but is either not cost effective for you to build and maintain, or where you simply need to take advantage of some form of aggregation / specialist skills that the partner brings to the relationship.

A great CTO is an entrepreneur; they need to be if they are to set an IT strategy that will continuously work for the business and add value to it.

The product roadmap

These are powerful things; however, don’t get hung up on dates, and don’t predict the future. Far too many CTOs try to predict the future in terms of timelines. I’ve fallen into that trap a number of times myself, either by my own doing or by accepting the ask from others. If you have to think dates, think of very wide barn doors, because until you have teams actually scoping out the real detail of the work, you don’t have much more than a “hunch” to go on in terms of effort and delivery timescales. A good product roadmap should set out what you want the product to do, and it should meet the demands of the business’s strategic approach, effectively defining product that will help the business meet its strategic objectives.

A great CTO will take the time to understand the difference between the strategy, or strategic direction, of the company and the product: product isn’t strategy; rather, product (and its roadmap) is geared around meeting that specific strategy. Now, strategy isn’t something that typically sits with a CTO; however, I believe if you are the CTO of a company whose product is its technology, then you need to have strategic input. See Porter’s strategic triangle to understand the three main areas of macro strategy that your product will fall within. Take time to think about your product roadmap and how it drives the business to meet its strategic aims and objectives.

Capturing data and analytics

Data and analytics are the tools that enable CTOs to ensure their teams refine and improve solutions and the way in which they work. For example, if you can’t monitor how your teams are performing, if you don’t have transparency of delivery of product, then it all feels a little “blind”. Transparency into progress helps you drive forward, and the best way to have transparency is to ensure you have a culture of continuous delivery. Now this isn’t always possible, but if you can see what’s in the release pipeline at all times, and its various stages, then there is comfort that as a team, as an engineering department, as a business, you are “delivering”. Real delivery is far more valuable than trying to predict the future, trying to predict the delivery of something.

Delivery-focussed data and analytics can help you identify the teams that deliver product faster, or product that is more robust than, say, another team’s. It also allows your teams to get more accurate at predicting when products will be released, because you have a history to work from.
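
As a rough illustration of what “a history to work from” can mean in practice, here is a small Python sketch. The numbers are entirely hypothetical; the point is that past throughput gives a ranged forecast rather than a single promised date, which fits the “wide barn doors” thinking from the roadmap section:

    import statistics

    # Hypothetical history: items shipped per two-week iteration.
    throughput_history = [4, 6, 3, 5, 7, 4, 5]
    remaining_items = 20

    mean = statistics.mean(throughput_history)
    best, worst = max(throughput_history), min(throughput_history)

    # A range, not a promise: history gives you barn doors, not a fixed date.
    print(f"likely: ~{remaining_items / mean:.1f} iterations")
    print(f"range: {remaining_items / best:.1f} to {remaining_items / worst:.1f} iterations")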

Data and analytics can also serve you in three massively valuable ways:

  1. Monitoring your systems – allowing you to identify potential performance issues before they happen
  2. Diagnosing issues and fixing them
  3. Providing additional MI that could open additional revenue streams

Now, the first two have in recent years caused me personal problems; not having sufficient monitoring or diagnostic capabilities means at times you feel like you’re flying by the seat of your pants. There is a real temptation as CTO to ensure your teams push forward and solve real problems quickly – sometimes at the expense of sufficient monitoring tools and dashboards. Take it from me, ensure you get great monitoring and diagnostic tools into everything you do, even as part of your minimum viable product, because if you don’t, not only is it hard to identify potential issues or solve them, it’s painful having to take time out of busy engineering schedules to retrofit these capabilities.
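
To show how little code “building it in from day one” can take, here is a minimal sketch of instrumentation in Python. All names here are hypothetical, and a real system would export these counters to a proper monitoring stack (dashboards, alerting) rather than printing them, but even this much is far easier to write up front than to retrofit later:

    import time
    from collections import Counter

    # In-memory metric store; a real system would ship these counters
    # to a monitoring stack instead of holding them in a process dictionary.
    metrics = Counter()

    def timed(name: str):
        """Decorator that records call counts, latency and errors for a function."""
        def wrap(fn):
            def inner(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    metrics[f"{name}.errors"] += 1  # diagnosis: how often do we fail?
                    raise
                finally:
                    metrics[f"{name}.count"] += 1
                    metrics[f"{name}.total_ms"] += (time.perf_counter() - start) * 1000
            return inner
        return wrap

    @timed("settle_payment")
    def settle_payment(payment_id: str) -> str:
        # Hypothetical stand-in for real business logic.
        return f"settled {payment_id}"

    settle_payment("abc123")
    print(dict(metrics))  # e.g. {'settle_payment.count': 1, 'settle_payment.total_ms': ...}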

The third point, well that comes down to the entrepreneur inside of you…

Alignment and efficient meetings

Efficient meetings for me are short, sharp, to the point and are called because you need to solve a specific problem within that meeting. A meeting must have an outcome, one that is tangible; if not, the meeting wasn’t needed, or it was needed but required a lot more work to be done and brought to it. Unless you are solving a real problem, something complex, as a team, I would recommend daily stand-ups and check-ins as and when you need them, keeping everything on your actual feet. Sure, there are times you need a committee style of meeting, lengthy meetings to ensure alignment across a wide variety of issues, but these should be run to a strict agenda and only cover things that people don’t already know. Don’t fall into the trap of going through things as a tick-box exercise, or just to pay lip service, or to “share knowledge”. If you are doing these things, they can be done informally or formally, but via shorter, more focussed meetings.

I’ve always tried to keep any meeting I am running under an hour and a half, and that includes Charter and Guild meetings where we are trying to solve large challenges. You’ve got to keep moving the discussion forward; oh, and if you have notes, make sure someone takes them accurately. Again, I prefer a shared OneNote, but appreciate that many will want more formal-looking minutes.

In order to get alignment right across the floor, I really like a CTO to stand up and give an update to the entire department. The update can be on whatever, but it needs to be enjoyable and something people want to listen to. It’s a great time to share the bigger picture, but also the perfect way to ensure there is clear alignment right across the floor. After all, that’s one voice that everyone is listening to. Try to be inspirational, and not boring though 😉

Public Speaking

I think many technology-focussed individuals struggle with this, and it may be one of those key reasons why there are so many CTOs, or people at an executive level who look after IT, that haven’t been through an IT-based career fully. Certain studies have shown that some individuals (and this isn’t limited to IT) are more fearful of public speaking than of death itself. I know, it may sound nuts to those of you who are comfortable with public speaking, but it’s real.

I personally don’t mind public speaking. I’ve had to do a hell of a lot of it, in the workplace and externally, for probably close to 15 years now. I’m not saying I’m great at it, rather that I am comfortable with it. So what do I have to share around this area? Well, try to do the following:

  1. Only agree to speak on subjects you are passionate about
  2. Never ever put too much written content into a presentation. Use illustrations – sweeping graphics – and rely on only a few bullets per slide
  3. Look to get engagement from your audience right away – pose a question or start with something thought provoking.
  4. Avoid lecturing or listing points together; it gets really boring listening to “we do this, and we do that, and we do this better because of….”
  5. Try to provide something personal to you about the subject matter, make it relatable to yourself and the audience
  6. It’s an old saying, but “fail to prepare, prepare to fail…”

The first and the last points are by far and away the most important, even more so if you are nervous of public speaking.

On the preparation front, I will share with you some of the habits I have, which to be fair were given to me by my wife, who used to be a stage performer. The first part is the structure of your presentation / talk. Make sure you structure it around the “hero” points you want to make. So, list the points out, then get them into some sort of order that makes sense; you can have the supporting messages under your hero points, but don’t make too much of a big deal about them. You will end up speaking about them, but don’t clutter your presentation, or your mind for that matter. Look to tell a story from hero message to hero message, with the story or narrative largely including the sub-messaging you want to get across. In your head, the story should be about getting from hero message to hero message – this will help you talk without reading slides.

I use imagery a lot, simply because I want the audience to look at the slide, digest its content pretty quickly at a high-level and then have them listen to what I have to say about the content. Having a story in your head here massively helps, because you don’t have to remember word for word what you want to say, rather the narrative you want to get across and the key hero messages.

Once you have all of this together, practice it. I practice larger presentations a few times, especially memorising my hero messages or the odd link between them. The story should be in my head fully formed, so I rely on slides for imagery, graphics, quick ways of showing content to the audience, hero messages and memory pointers.

Forming relationships and taking feedback

A CTO has a hard life in terms of the variety of relationships they need to form. You have to be able to form great relationships with engineers, and this is the most important one. If you come from a technical background you will find this quite easy; if you haven’t been on that journey, then this will be really, really tough to do, especially if you want to understand your team’s personalities and characters. A CTO also has to be someone who can inspire engineers and command their technical respect, while at the same time not coming across as a CTO who is telling people what to do, dictating how things should be done. That goes against the autonomous team culture you want to set up, so it’s a balancing act.

As CTO, you also have to be able to form relationships with other business departments, the heads of functions, other executives and the board. It’s wider-reaching than this though, because you will no doubt have to work well with client coverage and sales teams, and that then means forming relationships with your peers and other execs from external companies. The variety of the departments, and the types of people within them, makes it hard for a CTO to really form strong relationships, but it’s something you have to do, and do in a way that shows you are authoritative in your own space while at the same time very mindful of others and their needs. Unfortunately, there is a lot of “ego” in and around tech, and as a CTO you have to be strong-minded and confident, but not arrogant or judgemental of others, and be ready for the odd “idiot” (being polite there) who will want some form of “tech-off” with you…But just see that for what it is, remain focussed on your messaging and what you want to achieve, and listen.

I also believe that a great CTO identifies possible relationships with other companies. They identify those companies’ strengths and are able to put together the jigsaw of how they can help enrich your own product set or company profile. This is becoming increasingly important as pretty much every business that operates successfully today relies on its technology.

This all leads into the area of taking feedback, and how you take that feedback, especially if it’s negative or highly critical. I personally look to take all feedback on board, trying to understand the other person’s perspective and where they are coming from. I also try to understand exactly what the outcomes could be if I do take that feedback on board, or if I don’t. Though I think being a CTO can be quite a lonely job at times, one thing a CTO is never short of is people’s feedback and opinion. Let’s face it, everyone has an opinion on IT or technology; I think only marketing has the same problem, if I’m being honest. While this means you’re never short of an opinion on your performance, it also means you find yourself with a lot of feedback that is, well, at best contradictory and at worst highly ill-informed. So, the key to knowing exactly what feedback to embrace wholeheartedly is understanding the micro-points that led to that feedback, where it is coming from, and the knowledge of the people providing it. It’s only at this level of detail that you can really assess if the feedback is something you want to take on board.

As CTO, relationships and feedback are a real tightrope; there simply is so much poor feedback based on, frankly, a lack of understanding. But forming good relationships can be one of the more rewarding aspects of the role.

Keeping calm and dealing with crises

My final point of this post is to say this: every system fails, every system goes wrong, and at times you will be in crisis mode, fighting fires and feeling very stressed. You can handle this in two distinct fashions:

  1. Get into the micro-level of detail, getting involved and really becoming a micro-manager / engineer at the same time
  2. Trust your team, empower them, and be there to provide them with the support they need

I know many who fall into option 1 here without knowing it. I have met many who also think this is exactly what a CTO should be doing. I’m here to say you are wrong. If you find yourself in option 1, then you are looking at career burnout and probably a highly limited personal, social and family life. That level of stress has an impact on the people around you, without you knowing it. So, I am here to say that managing a crisis is more about the team than any other aspect of your business.

You have to have a team, and when I say team here, I mean a team made up of individuals, but not specific individuals. You need a supporting cast, you need the bench, you need bench rotation, etc. You cannot have the same players on the pitch for every single game in any sport, and managing crises and being on call means exactly the same in the workplace.

Individuals go on holiday, they suffer illness and personal issues, they have other commitments outside of work, and so you cannot rely on, nor should you need to call upon, specific individuals in a crisis. The same is true of the CTO. You have to have a team of people that can respond and resolve the issues. Your role as CTO is to ensure you have that team in place and that you have processes around managing a crisis. You are the supporting act here: if you cannot empower your team to investigate and resolve issues, then you have failed. If you are dependent on specific individuals, including yourself, then you have failed. If you don’t have structure around handling crises (which keeps people calm and able to focus on the right things), then you’ve failed.

The CTO role is to enable and empower autonomous teams and make sure the support structure and processes are right. Apart from that, get out of the way and only be involved if teams ask for your input / call upon you. Remember, you must stay calm, encourage and support, and always keep your focus on the bigger picture; don’t get pulled into the weeds.

Conclusion

A CTO role is tough; it’s a balancing act, and most CTOs will struggle with some aspects of the job, be that public speaking, being inspirational, being entrepreneurial, being able to effectively communicate technical issues with non-technical people, or being able to form the right type of IT strategy. My point is, if you are thinking of becoming a CTO, just know that all of us who have been, or are, acting CTOs struggle with some aspect of the job. Look to get better in the areas in which you struggle and don’t be afraid of them. Always be trying to learn how to do things better, educate yourself across all aspects and lean on your peers. Us CTOs need to stick together…

Agile Series: Values and Principles

When we talk “agile” there are a few big issues that I notice, especially from management and the executive within financial institutions. These are preconceptions, thoughts of what agile is and what agile means, based on previous IT project-type experiences, or picked up from listening to podcasts, reading articles like this, or heard on the floor. However, the biggest issues are often subconscious mindsets and values, both of which can seriously damage even a successful agile engineering environment.

It’s all about “culture”

The first thing here is that agile is very much a cultural thing. In banking we talk a lot about culture, but often the culture we hope to cultivate is not about how we work effectively, or how we work best as a team moving towards a common goal; rather, it is about how our culture ensures good conduct. Now, there is nothing wrong with ensuring you have a culture that addresses and promotes good conduct, but simply focussing on this area does not mean you have a productive culture. Agile is as much about culture as it is about delivery.

In order to implement agile successfully you have to have a culture that everyone buys into. I have always said, if you want a great engineering department then you need to define your culture, understand it, share it with each other and believe 100% in it, because if we all do that then not only can we preserve it, but we can also constantly improve it. If you have a great culture then you will find that your implementation of agile is highly effective, and you will reap the rewards in terms of productivity, flexibility and speed to market.

The issue with culture, though, is preserving it, especially if you are in an industry that is used to working in very specific ways, or if you are experiencing rapid growth/expansion or a high turnover of people/contractors. People bring with them their experiences, all highly valid, but that also means their preconceptions and ways of working. This can be highly dangerous to your engineering culture, making it unbelievably important to select the right candidates to join you and, equally important, to have an “induction” process that is in-depth enough and really stresses the culture you have, the culture you are building. As I said earlier, everyone needs to understand your culture, share in it and believe in it for it to really be successful.

Underlying mindset and values

Typically, the way you work, especially at that “departmental” and even organisational level, is brought about by a certain mindset and set of values. The executive typically set or frame this, either consciously or subconsciously (which one depends on their experiences). As a CIO/COO/CTO you have to become aware of underlying values and mindsets and ensure you have the right ones. The wrong ones, once they filter down, can be highly damaging to your agile environment, because these values drive culture.

For me, there are 3 underlying mindsets that are very common, very dangerous and need to be changed.

The first is that “people work in a department”. A departmental mindset often leads to work being carried out and then passed on to another department for them to pick up and take forward. When you think of many BPM (Business Process Management) tools, many of them even promote that kind of process thinking. This is something you really don’t want to find anywhere in your engineering culture; you want the complete opposite. If you have read the previous article within this series, you will understand the importance of “cohesion”: making sure you have everything in that one component, in that one team, with no departments or division of efforts. Unfortunately, many organisations have “agile” development teams which are dependent on “departments” for specific knowledge. This departmental thinking typically leads to the department building specifications to hand over to engineering, and this is not what you want, ever.

The second is that often we, as the executive, believe our “teams can predict” how something is going to be built and how long it will take. I use the word “believe” because that is essentially what is being asked for; the question goes, “a rough estimate, when will that be ready?”. By asking this, you are essentially saying: I believe you know what we are building, how we are building it and how long it will take. Unfortunately, as with any complex problem/scenario, you don’t have all the answers up front; rather, you have to learn as you go. The problem with predictability is that it leads to thinking in a “project”-type fashion, as in: we start something, we build it, it ships, we walk away. Again, if you have read the previous article in this series, you will understand that a project mentality is not a productive one; rather, we need to be thinking of product, and products get delivered, evolve and improve.

The third, and one I often see even in organisations that use agile well, is that “we subscribe to a specific agile practice” right across all engineering efforts. For example, every engineering team must follow SCRUM and stick to the practices within SCRUM. That is buying into specific agile practices and not principles; ideally you want to acknowledge that team A may perform better if it followed a different agile practice than, say, team B.

Set core values

I believe the foundation of agile, and the success of your use of agile, lies with setting core values. If you are able to boil these down to something very short, very easy and something that everyone buys into, then you are starting to be able to set that good cultural environment.

Here are two core values I almost always set:

  1. “Value Delivery over Predictability”
  2. “Value Principles more than Practices”

Delivery over Predictability

I think this core value gets at the heart of agile. Essentially if we value delivery of product above all else, then we will work in a way that promotes understanding, learning, transparency and bringing product to market. This directly impacts our mindset around understanding exactly what we are building. This one core value alone removes bad habits such as producing project plans and believing teams can predict the future.

Value Principles more than Practices

This core value kills the concept of a department; rather, shared principles bring people together within a team. At the same time, this principle acknowledges that people are individuals, that what works well for one team doesn’t necessarily work well for another, so it gives them the freedom to implement the agile practices that best suit them. This core value has a real impact on productivity and team morale.

Summary

Setting core values and principles helps address those forces that can negatively impact a good agile engineering culture. Your values serve as the foundations on which your agile efforts will be based, shaping not only engineering efforts, but ideally the mindset of management and the executive. If you can get buy-in to these values, if everyone believes in them, then you really will be creating the right environment in which a productive culture can grow.

In the next article within this agile series, I will take a look at applying your values and principles.

Who to pay attention to in FinTech?

Visionary in FinTech

This may seem an odd post, but I have often been asked, “who should I listen to, or follow, in the FinTech world?” (present company excluded, of course). The truth is, great ideas and insight can come from anywhere, they really can, but here are 5 people who I would say you must follow / listen to.

But before I share my list, here are a few people that just missed out, which means you should also follow them and hear what they have to say… Nigel Verdon, Dr Leda Glyptis, Brett King, Theo Lau and Chris Skinner.

1.      Nick Ogden

So, any of you that know me will know that I have been working with Nick for a great number of years, and over that time he has become a mentor and great friend to me. But don’t think that means I am biased, no, here is why…

Nick is one of the early pioneers of the internet; in fact, he was part of actually setting up the internet and had an even greater role in that in the Channel Islands. He is also the founder of eCommerce, bringing the world its first ever eCommerce online store! He missed a trick not getting a patent together for that one, but you cannot deny the vision. But things do not stop there. Nick went on to create WorldPay, and if you don’t know who WorldPay are, then you are probably reading this blog post by accident. You may now be thinking, wow, that is a great deal of achievements; after all, how many entrepreneurs are creators / founders of such concepts and companies that impact the entire world we live in? But the list doesn’t stop there. Nick went on to create the Voice Commerce group and Cashflows, the UK’s first ever challenger bank, long before people were dreaming of creating Starling and Monzo. In late 2014, Nick shared with me his idea for creating a new clearing bank in the UK. As one of his founding team members, it was a great journey, but in terms of firsts, Nick’s vision was to create the UK’s first ever cloud-based clearing bank, the first clearing bank in over 250 years, and as part of that we decided it was time to re-imagine how agency banking should work. We wrote up on our whiteboard in 2014 “Banking-as-a-Service” (BaaS), as well as “Payments-as-a-Service”. Now, not only is ClearBank having an impact on how financial service organisations deliver their solutions to customers, helping promote competition and an ever-increasing number of FinTech solutions, but BaaS is now a global phenomenon, often touted to be the future of banking.

Yet I am not finished here. Nick is also the visionary behind RTGS.global, the world’s first ever liquidity network, delivering Liquidity vs Liquidity (LvL) transfers. A revolutionary approach to solving the friction associated with transactions, it has massive ramifications for cross-border payments for us as individuals, but also for SMEs and large corporates. LvL has a positive impact on wholesale payments and cash settlement off the back of, say, FX swaps / derivatives, and brings to the financial services industry “Just-in-Time Liquidity”.

All in all, Nick has proven to be a visionary for a very long time, constantly at the forefront of thought and what is technically achievable.

2.      Tom Blomfield

This may be a bit of a shock entry, but like Nick before him in this list, Tom has shown that he spots opportunities within the financial services sector. Though mainly known for being part of the founding team of Starling, and then famously parting ways to create Monzo, many forget that Tom is also one of the co-founders of GoCardless. Now, GoCardless is a big player in the world of FinTech; add in Monzo, the challenger bank unicorn, and you can see why I have Tom at number 2.

3.      Anne Boden

Anne is a seasoned banker, and she is very open with her comments on what she thought of the ethics displayed by many of the banks she worked for. These were some of the drivers behind her deciding to start her own retail bank, Starling. As a founder, Anne has been smart in her approach of trying multiple revenue-generating streams for the bank, ranging from the Starling marketplace through to attempts to deliver some limited Banking-as-a-Service capabilities. But what Anne has been great at is noticing what moves the needle for Starling, and with great focus on customer experience and the retail, and now business, banking segments, Anne has achieved something that most banks in the UK right now are struggling with. Starling is a profitable bank….

With Starling’s valuation now making it a unicorn, Anne has shown that it really is possible to build a digital-only challenger that is not only making a difference for customers but is also profitable. Hence, Anne is in at 3….

4.      David Brear

David is the founder of 11FS, a financial services-based organisation that does things very differently. David is one of the world’s great analysts; he observes what’s happening in multiple industries, and then applies thinking across those industries to identify opportunities for financial institutions. His grasp of technical capabilities and his application of user experiences from other fields provide great insight for those organisations that leverage his and 11FS’s capabilities.

David has also transformed how many of us within the financial services sector consume industry-related news and discussion, with 11FS creating shows such as “FinTech Insider”. It’s now commonplace within FinTech to have podcasts and streamed shows, but it really seems that David and 11FS got the ball rolling on that one.

5.      Dave Birch

Now, Dave Birch is the odd one out in this list, because I don’t see Dave as a “doer” in the FinTech world, rather as a powerful voice and influencer. Dave has his own consultancy which provides valuable services to several sectors, not linked solely to financial services. However, as an influencer, Dave is often at the sharp end of discussions regarding banking, banking experiences and the role of identity in these.

I would also add that Dave can be highly amusing to listen to. His knowledge is second to none, and while his delivery is quite dry, he always manages to put in a little content which is rather comical, which for me makes him far more entertaining to listen to – and let’s face it, we all hear better when we enjoy listening to the person speaking…

So there you have it, my top 5 people in FinTech to pay attention to….