Correspondent Banking 3.0: The importance of discovery

Visionary in FinTech

When we think of cross-border payments, wholesale PvP or any sort of cross-border activity, at the end of the day a Correspondent Bank is required, if only to facilitate an onward payment to the end beneficiary. What amazes me, however, is how many solutions overlook this fact and, worryingly, how many do not take into consideration the sheer hassle and time it takes to set up a correspondent relationship.

Yes, there are easier ways of doing things, like using a globe-spanning bank that provides multi-currency services, but all you are doing is hooking into that bank's correspondent banking relationships, and you pay a premium for that – meaning your cross-border transactions simply can't be as attractive as the competition's.

If you really want to be able to compete, to offer customers great rates and a great experience (and who doesn’t want near real-time payments?) then embracing correspondent banking is the only way. Or is it?

RTGS.global, more than just PvP

RTGS.global is a financial market infrastructure that enables settlement. Whether that is off the back of a wholesale PvP trade or the need to execute a cross-border payment, RTGS.global provides settlement certainty.

As a Settlement Fabric, it also provides transparency of who else is available to interact with on the Settlement Fabric. That means other participants (banks, MTOs, FinTechs or corporates), but also other FMIs and Central Banks (currencies). We call this our Discovery Service, which does exactly what you might expect: it allows you to discover other participants on the Settlement Fabric.

But it does much more than just Discovery. Discovery is so important because it short-cuts the need to source a Correspondent bank in any given currency – potential partners are right there, listed. But Discovery isn't just about finding a Correspondent, it's also about forming relationships. So what does that look like?

Standardised enhanced due diligence

The Discovery Service provides an opportunity to introduce a standardised due diligence process. Within Settlement Fabric, participants can form bi-lateral relationships, which create secure messaging tunnels off the Fabric between the two participants. Over these tunnels, participants can exchange identity proofs cryptographically, which forms the first stage of any due diligence: knowing your partner (or Knowing Your Customer).
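To make the idea of a cryptographic identity proof concrete, here is a minimal sketch using Ed25519 signatures. It is illustrative only: the Settlement Fabric's actual proof exchange is not published here, and the challenge/response shape below is my own assumption.

```python
# Minimal sketch of a cryptographic identity proof, assuming Ed25519 signatures.
# Illustrative only; this is not the actual Settlement Fabric protocol.
from cryptography.hazmat.primitives.asymmetric import ed25519

# Bank A holds a signing key pair tied to its digital identity.
bank_a_key = ed25519.Ed25519PrivateKey.generate()
bank_a_public = bank_a_key.public_key()

# Bank A signs a challenge issued by Bank B over the bilateral tunnel.
challenge = b"bank-b-challenge-2024-001"
proof = bank_a_key.sign(challenge)

# Bank B verifies the proof against Bank A's published public key.
# verify() raises InvalidSignature if the proof does not match.
bank_a_public.verify(proof, challenge)
print("Identity proof verified: counterparty controls the claimed identity key")
```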

Once basic KYC has been carried out, you can request enhanced Due Diligence. This can take the form of a detailed Risk Profile, which incorporates a Fabric-wide standard for Sanctions Screening, PEP identification and AML. You may also request additional information in the form of expected documentation, such as AML policies. These are all bi-laterally shared – truly point-to-point and 100% private to the relationship between the two banks.

Securing Correspondent Banking

If all goes well with the enhanced DD, a relationship can be agreed between the two parties. Typically, with Correspondent Banking, this triggers a complex process around identifying the connection you will make over SWIFT. That means RMA exchanges and various other processes, all of which are of concern to any CISO and require some heavy operational lifting.

With Settlement Fabric and the Discovery Service, all of that is removed entirely. The use of Digital Identity and the secure bi-lateral tunnel means your systems have already formed a highly secure connection with the entity, knowing that the entity is who they say they are. Additionally, all security key exchanges are automated, so no human interaction is required and no lengthy security processes stand in the way – a secure connection is established in seconds. You're in real time, ready to trade, ready to make payments, ready to leverage that correspondent relationship.

Conclusion

The Discovery Service within the RTGS.global Settlement Fabric short-cuts all the pain and time involved in looking for a Correspondent – or a Host, to use the Settlement Fabric terminology. We say Host because there is also no need for Nostro/Vostro account opening processes and no credit risk, so much of the relationship is lighter touch than Correspondent Banking.

Settlement Fabric with the Discovery service, well that’s Correspondent Banking 3.0 right there.

RTGS.global Participation? It’s not just for banks.

I’ve often been asked if RTGS.global, as a Financial Market Infrastructure, is just for banks. In fact, “asked” isn’t quite right – more often I have it stated back to me that only banks can participate. Well, I can categorically say that it’s not just for the big banks. Let me explain how and why….

With the RTGS.global Settlement Fabric there is the concept of a Host. A Host is capable of funding a segregated account at the central bank – think Bank of England in the UK, Federal Reserve in the USA and so on. The Host is also the participant that can have money sent to it from that segregated account. A Host is therefore a special type of participant, and typically this means they are bigger banks, though they could be a large MTO with access to Central Bank accounts.

Now the Host can play host for other participants, and in doing so they act as a sort of Correspondent. I say sort of because there is no need for a nostro/vostro, no credit lines to open, no collateral needed for the relationship and so on. But from a regulator’s perspective, they are the correspondent.

Various types of relationships and participants

So if a Host is a type of relationship on Settlement Fabric, what other types of relationships are there? Well a few, but the only other one of real concern is a payment partner. A Host may allow you to hold a given currency and therefore trade it, it can also act as your payment partner to make onward payments to end beneficiaries. But you aren’t limited just to your Host to do that.

Payment partnerships facilitate onward payments only. This can be payments within the Settlement Fabric itself, or payments that extend beyond the direct participation in Settlement Fabric over a domestic payment rail. Taking the UK as an example, that means a Payment Partner could be used to make payments over FPS as opposed to your Host bank.

With these two relationships understood, you can see that there is no need for a participant to be a bank itself. A participant needs to discover and form relationships with at least one Host, that’s true, but it doesn’t need to be a bank.

Participants can therefore be any organisation that has a need to purchase foreign currency and to make payments with that currency. That’s it. So with that in mind, any FinTech can participate in Settlement Fabric, any MTO (Money Transfer Organisation) can participate, any organisation with a need to make overseas payments can participate. Each participant saves on FX fees and enjoys near real-time payment capability.

FinTech, MTO and Corporates

While corporates are able to participate in Settlement Fabric directly, they won’t bring much to Settlement Fabric in terms of new innovation; rather, they will simply benefit from the efficiencies and risk reduction that Fabric brings. FinTech firms and MTOs, however, may well bring additional innovation to the marketplace that leverages the capabilities of Settlement Fabric. Such innovations may be used by other direct participant banks, or by corporates, and they will no doubt deliver better customer outcomes – all possible because Settlement Fabric enables a wide range of direct participation.

Introducing Correspondent Banking 3.0

We are all becoming familiar with terms such as Web 2.0, Web 3.0 and, yes, Banking 2.0 and 3.0, and now I am introducing you to Correspondent Banking 3.0. Well, not just now – I started using Correspondent Banking 3.0 as a term back at the start of 2022 and spoke about it when talking at Sibos 2022 about Settlement Fabric.

Now if the traditional model of Correspondent Banking is Correspondent Banking 1.0, then what is 2.0, let alone 3.0? Like most 2.0 versions in tech or the web, v2.0 is the “platformification” (yes, I’m making that a real word) of a capability. So Correspondent Banking 2.0 is when a platform can make correspondent introductions between banks and potentially help shorten some of the trickier elements of the due diligence that is required. Banks may also communicate with their correspondents via the platform, so a lot of the correspondent banking management aspect is provided by the platform itself. Correspondent Banking 3.0, just like Web 3.0, is all about decentralization – something that is either exciting or blah in the world of banking and finance (there is no in-between, it seems).

Correspondent Banking 3.0 is therefore a decentralized approach to “platformification”: you get all the benefits of 2.0, but there is no central platform. This removes several dependency risks, costs and operational overheads. But to make something 3.0, I believe it needs to do more, not just deliver the same thing in a new way. The big development with Correspondent Banking 3.0 is the move away from commercial bank money into the world of Central Bank Money, or Central Bank backed funds. With this development, we can also move away from “nostro/vostro” (finally).

Why is Correspondent Banking 3.0 important?

Correspondent Banking is somewhat broken. Right now, we are seeing a fall in the number of bi-lateral agreements between banks, and this is a direct result of two main factors: cost and complexity. From an operational point of view, maintaining correspondent relationships is a heavy lift; there are many moving parts, ranging from teams supporting and maintaining the complexity of technical communications (SWIFT RMA exchanges and managing that system) to teams maintaining good interbank communications and service delivery. All this complexity comes at great cost. As a result of increased costs, the number of relationships maintained by banks is falling, and for new banks, the costs prohibit them from really getting involved in the world of cross-border, hampering their customer experience.

The reality is, if we want to meet the call of the BIS and the various CPMI reports into cross-border trade – if we want same-day settlement of wholesale cross-border trades, immediate cross-border payment capabilities, reduced fees, and to meet corporate and retail customer expectations and deliver improved customer outcomes – then we must either ditch the entire Correspondent Banking model (and we see this with blockchain-based approaches) or evolve Correspondent Banking. History has shown us that evolution is a far more efficient way to go, and for many, much more palatable ….

How does Correspondent Banking 3.0 work?

There are four main features that must be in place to deliver Correspondent Banking 3.0:

  1. The ability to discover potential correspondents and carry out the necessary due diligence.
  2. The ability to easily form secure bi-lateral connections (both operationally and technically).
  3. The ability to replace commercial bank money with central bank backed funds.
  4. The replacement of “nostro/vostro” with a single Network Account.

In later posts I will go through these four elements in more detail, but for this introductory post I will focus on the three main components that deliver points 3 and 4, which for many are the big areas of business focus.

Central Bank Funds

A move away from commercial bank money to Central Bank backed funds reduces all the risks we associate with correspondent banking. Be that settlement risk or credit risk, this evolution to Central Bank funds potentially saves larger banks billions of USD each year in trapped capital. The move also puts an end to settlement windows, which are caused by the operating hours of the central bank determining when settlement can happen. Those operating hours effectively make 24x7x365 settlement activity impossible, until you move to the Settlement Fabric model.

The RTGS.global Settlement Fabric solves this by operating a Central Bank Account on behalf of its participants. Note that RTGS.global isn’t in the flow of funds here, nor does it own the funds; rather, RTGS.global is the operator of the Central Bank Account. Participants move funds into these accounts, which allows the Settlement Fabric to lock liquidity and then settle trades with that locked liquidity, within that account. (A patented mechanism is used.) The beauty here is that by operating 24x7x365 and in this fashion, settlement activities can happen whenever desired, and no limitation is placed on settlement by the operating hours of the Central Bank, because the funds are already within the Settlement Fabric.
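To picture the lock-then-settle pattern, here is a deliberately simplified, hypothetical sketch: liquidity in each currency leg is locked first, and only once both locks are in place does the exchange happen. The class and function names are invented, and this is not the patented RTGS.global mechanism, just an illustration of the principle.

```python
# Hypothetical illustration of the lock-then-settle pattern behind PvP settlement.
# Not the patented RTGS.global mechanism; all names and structures are invented.

class NetworkAccount:
    """One per participant per currency; balances represent central bank backed funds."""
    def __init__(self, owner, currency, balance):
        self.owner, self.currency = owner, currency
        self.balance, self.locked = balance, 0

    def lock(self, amount):
        # Liquidity is locked before settlement so it cannot be spent elsewhere.
        if self.balance - self.locked < amount:
            raise ValueError(f"{self.owner}: insufficient free {self.currency} liquidity")
        self.locked += amount

    def pay(self, counterparty_account, amount):
        # Move previously locked funds to the counterparty's account in the same currency.
        self.locked -= amount
        self.balance -= amount
        counterparty_account.balance += amount


def settle_fx_pvp(a_gbp, b_gbp, b_usd, a_usd, gbp_amount, usd_amount):
    # Lock both legs first: settlement only proceeds once both sides' liquidity is secured.
    a_gbp.lock(gbp_amount)
    b_usd.lock(usd_amount)
    # Then exchange, conceptually as a single indivisible step on the Fabric.
    a_gbp.pay(b_gbp, gbp_amount)   # A delivers GBP to B
    b_usd.pay(a_usd, usd_amount)   # B delivers USD to A


# Example: Bank A sells GBP 800,000 against Bank B's USD 1,000,000.
bank_a_gbp = NetworkAccount("Bank A", "GBP", 1_000_000)
bank_b_gbp = NetworkAccount("Bank B", "GBP", 0)
bank_b_usd = NetworkAccount("Bank B", "USD", 2_000_000)
bank_a_usd = NetworkAccount("Bank A", "USD", 0)
settle_fx_pvp(bank_a_gbp, bank_b_gbp, bank_b_usd, bank_a_usd, 800_000, 1_000_000)
```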

The second benefit of this model is that liquid funds are ALWAYS there, known and available, be that as a result of an FX PvP trade or of receiving an individual payment.

Hosting

Think of Hosting as providing indirect access to Settlement Fabric, much in the same way that in the UK there are indirect participants that access Faster Payments. Hosts are banks that are able to move money directly into a Central Bank Account operated by the Settlement Fabric provider, in this case RTGS.global. They are connected to and regulated by that central bank (or regulating body). These Hosts are the only participants that can move funds into, and have funds withdrawn from, the Central Bank Accounts – they are effectively the legal owners of all funds within the account.

Hosting therefore allows other participants to join and enjoy the benefits of Correspondent Banking 3.0 and Settlement Fabric, despite not being able to access the Central Bank Account themselves. Hosts charge the participants they host (both domestic and foreign) for this service. Hosts may also charge for services such as onward payment capabilities – that is, payments made to end beneficiaries.

The Network Account

The final crucial element of Correspondent Banking 3.0 is the Network Account. Effectively, a Network Account removes “nostro/vostro” accounts and replaces them with a single account which ultimately represents funds held within the Central Bank Account operated by RTGS.global. It is a bank’s Settlement Account within the Settlement Fabric for a specific currency. The account delivers a single pool of liquidity that the owner can then use to make payments or to settle FX trades PvP atomically, without risk.

For each currency a participant wishes to trade or make payments in, they require a Network Account. For those currencies where a participant isn’t directly connected to the Central Bank (which can include their domestic currency), the participant needs a Host Bank. Network Accounts and Hosting are tightly coupled: you cannot have a Network Account in a given currency if you a) can’t host it yourself, or b) cannot form a Hosting relationship with a bank. Please note that you need only one Host relationship in order to open a Network Account.
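One way to picture that coupling between Network Accounts and Hosting is the small sketch below. The data model and names are mine, not the Fabric's; it simply encodes the rule that a Network Account in a currency requires either self-hosting or exactly one Host relationship.

```python
# Hypothetical sketch of the Network Account / Host coupling described above.
from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str
    direct_central_bank_access: set = field(default_factory=set)  # currencies it can host itself
    hosts: dict = field(default_factory=dict)                     # currency -> hosting participant

def open_network_account(participant: Participant, currency: str) -> str:
    # A Network Account in a currency requires self-hosting or one Host relationship.
    if currency in participant.direct_central_bank_access:
        return f"{participant.name} self-hosts a {currency} Network Account"
    if currency in participant.hosts:
        host = participant.hosts[currency]
        return f"{participant.name} opens a {currency} Network Account hosted by {host.name}"
    raise ValueError(f"No Host relationship for {currency}: cannot open a Network Account")

# Example: a FinTech with no central bank access uses a GBP Host.
host_bank = Participant("Big Bank", direct_central_bank_access={"GBP"})
fintech = Participant("FinTech Ltd", hosts={"GBP": host_bank})
print(open_network_account(fintech, "GBP"))
```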

Summary

Clearly there are more aspects of Correspondent Banking 3.0 and the RTGS.global Settlement Fabric that need to be discussed, and over the course of several future posts I will look at various aspects: the Discovery Service, the various relationships banks can have, why Correspondent Banking 3.0 significantly changes the due diligence required, how DD can be delivered in seconds as opposed to months, the technical connections between participants and the role “intermediaries” have to play. I will also look at the opportunities Settlement Fabric delivers for all sorts of participants.

RTGS.global Participant Gateway

Now for those of you who don’t know what RTGS.global is, then a quick intro. RTGS.global is building the world’s first Settlement Fabric. Initially I talked of a “liquidity network” but that messaging appears to have been a little confusing and didn’t actually talk to the entire proposition. So, what does a Global Settlement Fabric mean?

Well, it means that, using the Cloud, RTGS.global is bringing together Central Banks, FMIs (Financial Market Infrastructure providers), clearing banks, banks, credit unions, Money Transfer Organisations, corporates and, yes, you and I. This Settlement Fabric enables significant innovation across an entire stream of solutions, including immediate settlement of wholesale FX trades PvP, retail cross-border payments and even the immediate settlement of intraday swaps. All in central bank money.

Now, on to this post. Today I am sharing more information on how a participant gets connected to the Settlement Fabric (connected to RTGS.global). In previous posts I have spoken about the Network Connector, a downloadable connector available within the Azure Marketplace that connects you at a networking level to RTGS.global. In addition, RTGS.global provided an SDK to allow participants maximum flexibility in how they wished to consume messages from the system and how they wished to send instructions.

All good businesses listen to feedback from early adopters and customers, and RTGS.global is no different. Over the past six months, feedback from customers has shown that while many love the total flexibility and freedom of controlling how they integrate with the Network Connector, the vast majority would happily trade that flexibility for having all that lifting delivered for them. So that is something that has been looked at….

Enter the Participant Gateway.

What is the Participant Gateway?

The Participant Gateway is essentially a “micro-gateway” that is downloadable via the Azure Marketplace. It’s uber light touch, hence the term micro. It is so light that the highly scalable Kubernetes infrastructure is even abstracted away, delivered within an Azure Container App. This means that the underlying infrastructure is all run as a service by Microsoft, as an ACA within the participant’s own tenant. Scaling the gateway is therefore automated and handled for the participant. Neat, right?

The gateway presents itself to the participant as an application that acts pretty much as a black box. Communication with the participant’s applications is carried out over a secured RESTful JSON API which is ISO 20022 compatible. Incoming messages are raised as events from the gateway via Azure Event Grid, providing a resilient and highly scalable orchestration capability.
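As a rough illustration of what submitting an instruction to such an API could look like from a participant's own systems, here is a short sketch. The endpoint, path, field names and token handling are all assumptions for the example; they are not the published Participant Gateway API.

```python
# Illustrative only: the endpoint and field names below are assumptions,
# not the published Participant Gateway API.
import requests

instruction = {
    "messageType": "pacs.008",          # ISO 20022 compatible payload, expressed as JSON
    "msgId": "BANKA-20240101-0001",
    "interbankSettlementAmount": {"currency": "GBP", "amount": "250000.00"},
    "debtorAgent": "BANKGB2L",
    "creditorAgent": "BANKUS33",
}

resp = requests.post(
    "https://gateway.banka.internal/api/payments",   # hypothetical internal gateway address
    json=instruction,
    headers={"Authorization": "Bearer <token>"},     # the API is secured; token acquisition omitted
    timeout=10,
)
resp.raise_for_status()
print("Instruction accepted:", resp.json())
```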

The following illustration provides an overview of the Participant Gateway.

The Participant Gateway, your light touch micro-gateway to access the RTGS.global Settlement Fabric

Gateway Services

Within the Participant Gateway you will see a component labelled Gateway Services. This effectively manages the protocol buffer connectivity with the jurisdictional RTGS.global hub, which runs over the RTGS.global Network Connector (now incorporated into the Participant Gateway download within the Azure Marketplace).

The Gateway Services component also manages how messages streaming bi-directionally over the protocol buffer are handled within the gateway. By this I mean how they are handled in a highly scalable and resilient fashion, so that in the unlikely event of a component failure, or a failure of the participant’s integrated solution, messages are not lost and, on a restart, processing can resume as normal.
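That durability guarantee can be pictured with a simple checkpointing pattern: the consumer only advances a persisted offset once a message has been fully handled, so a restart resumes from the last acknowledged message. This is a generic sketch of the pattern, not the gateway's internal design.

```python
# Generic checkpointing sketch: persist the last processed offset so a restart
# resumes without losing completed work. Not the gateway's internal implementation.
import json, os

CHECKPOINT_FILE = "gateway_checkpoint.json"

def load_checkpoint():
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)["last_offset"]
    return -1

def save_checkpoint(offset):
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump({"last_offset": offset}, f)

def handle(message):
    print("processed", message)          # participant/business processing goes here

def process_stream(messages):
    last = load_checkpoint()
    for offset, message in enumerate(messages):
        if offset <= last:
            continue                      # already handled before the restart
        handle(message)
        save_checkpoint(offset)           # only acknowledge once fully handled
```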

IDC

IDC refers to the ID Crypt Global Cloud Agent, a component required as part of ID Crypt Global’s patented decentralized approach to securing messaging systems (DPKIps). Here, Gateway Services interacts with the incorporated IDC Cloud Agent to ensure messages are correctly signed using bi-directional security keys. Gateway Services also interacts with the IDC Cloud Agent to trigger activities within DPKIps, such as security key rotation.

API and Events

The Participant Gateway only receives input via its secure exposed API. As you can see from the illustration, the approach is very much an API first approach, where the Participant Portal, also part of the Gateway, interacts with the Participant Gateway, and therefore ultimately with the Settlement Fabric via the same API.

Events are raised out of the gateway and consumed by “subscribers” to those events. The Participant Portal subscribes to some of these events, not all, while a participant’s own systems typically subscribe to all inbound events from the Gateway.

This API-first approach, and the use of event orchestration via Event Grid, also enables a number of additional solutions to subscribe to events originating from within the Settlement Fabric. More on the significant benefits of that for FX, Treasury and Operations another time.
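For a sense of what subscribing to those events might look like on the participant side, here is a minimal webhook handler sketch. It assumes Event Grid's standard webhook delivery and subscription validation handshake; the gateway event type shown ("rtgs.payment.received") is invented for the example.

```python
# Minimal Event Grid webhook subscriber sketch (Flask).
# The "rtgs.payment.received" event type is invented for illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/gateway-events", methods=["POST"])
def gateway_events():
    events = request.get_json()
    for event in events:
        # Event Grid sends a one-off validation event when the subscription is created.
        if event.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            code = event["data"]["validationCode"]
            return jsonify({"validationResponse": code})
        # Everything else is a Settlement Fabric event raised by the Participant Gateway.
        if event.get("eventType") == "rtgs.payment.received":   # hypothetical event type
            route_to_core_banking(event["data"])
    return "", 200

def route_to_core_banking(payload):
    print("inbound payment event:", payload)

if __name__ == "__main__":
    app.run(port=8080)
```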

Participant Portal

This is a web-based portal that provides access to a number of features exposed via the Gateway and ultimately available from, and across, Settlement Fabric. The Portal shouldn’t be seen as a UX in which FX trades can be settled PvP; rather, it is used for configuration and management of relationships with other participants across Settlement Fabric.

Settlement Fabric, along with IDC, is able to deliver a Correspondent Banking 3.0 model to participants. This significantly removes the operational and management overheads associated with correspondent banking, enabling many more bi-lateral relationships to be established over Settlement Fabric and delivering significant money movement efficiencies.

Time to get up and running?

Well, RTGS.global with Settlement Fabric is very much a next gen FMI. So you would expect a next gen installation and set-up experience, and that is exactly what you have here.

Being a deployable component from within the Azure Marketplace means you are presented with a custom installation window, embedded within Azure. Some initial set-up parameters are required, which RTGS.global provides to the participant, along with the initial configuration and security details. The custom install should take just a matter of minutes to complete.

Once that is done, the download, installation and configuration are all automated, and as a result, in approximately 11 minutes – yes, 11 minutes – a participant will have a valid connection to Settlement Fabric and will be seen across the fabric as a new available node!

Though now connected to Settlement Fabric, there are still a few steps required before you are able to start settling Central Bank PvP trades and much more. A participant needs to find partners across the Settlement Fabric in order to trade. But with the Participant Portal providing “discovery capabilities”, and with many participants piloting, these relationships are already being formed, and with Settlement Fabric they take seconds to verify and make active.

Conclusion

The Participant Gateway delivers to participants a highly resilient, scalable solution for interactions with the RTGS.global Settlement Fabric. Though the flexibility of how that solution was built has been taken out of the hands of a participant, the net benefit is that within minutes, participants have a highly secure, highly scalable and highly resilient connection and solution in place.

With the gateway exposing an ISO 20022 standards-based RESTful JSON API and events over Azure’s Event Grid, integration of core systems with the gateway becomes, quite simply, an easy task. In terms of PPT (People, Process and Technology) there is nothing new here, no significant uplift for a participant, unlike many infrastructure solutions that require significant changes in resources, their skill sets, their understanding of new technologies and the processes that bring it all together.

This frees up engineering efforts to look at the integration of the capabilities that Settlement Fabric delivers back into the specific business lines of the participant. This is where the real work takes place, ensuring systems are able to process 24x7x365 and in real-time. But this is where participants reap the rewards and benefits of Settlement Fabric, and their focus can be here and not on technical integration.

Where RTGS.global uses the Blockchain

Ever since Nick Ogden first spoke of his idea for brand-new infrastructure for cross-border settlement, namely RTGS.global, there have been 3 main misconceptions that I have to answer.

  1. RTGS.global uses a coin/token (could be even XRP)
  2. RTGS.global uses DLT blockchain for settlement
  3. RTGS.global doesn’t use any DLT blockchain technology

I can confirm that ALL three of these statements are untrue. So, in this post I want to explain where RTGS.global uses blockchain technology to deliver its Global Settlement Fabric and, equally important, where it doesn’t.

Tokens, coins and XRP

I will start with the easy one. The “why” behind designing RTGS.global was always to remove financial friction from cross-border activity. When we looked at a true end-to-end journey, it was very apparent that introducing a token, a coin or something like XRP would actually add friction to a process we were trying to remove friction from.

For quite a while now there has been this notion that to deliver more efficient cross-border settlement and transactions, you must use a blockchain. Where this notion has come from, I am not too sure, but I would guess from blockchain companies themselves and/or those invested in a specific coin (XRP gets a lot of support for this very use case).

The reality is, RTGS.global Settlement Fabric deals in removing friction and the settlement of value. In its purest use case, that is good old FIAT currency held within a central bank. Please note, FIAT held within a central bank. We are not talking commercial bank money here; we are talking the highest quality liquid FIAT asset you can hold.

Now, because the Settlement Fabric deals in value, it can be used to settle assets other than FIAT currency; but to settle cross-border and facilitate 24/7 immediate cross-border payments, you can simply use FIAT with RTGS.global. So why swap into a token, a coin, or leverage someone else’s token like XRP? All you are doing at that point is adding friction (unless you believe we live in a world where all end beneficiaries want to receive a crypto asset and/or can spend that asset freely).

So, in short, RTGS.global doesn’t use any form of token or coin, and it doesn’t use XRP, to facilitate cross-border settlement and transactions…

DLT Blockchain for settlement

I personally love DLT and blockchains, there are some very complicated and highly important problems that are solved with DLT and the blockchain. However, when it comes to cross-border settlement and transactions, this isn’t one of them. Even when we talk of domestic transactions, again, I doubt the blockchain is ever the right technology for that.

Let’s start with an issue that every single cross-border solution that uses the blockchain seems to miss – and I am including lots of CBDC discussions and proofs of concept. The fact is, every single transaction is recorded and replicated, and every participant can therefore see all activity on the blockchain. This is a good thing in some ways, but also a very bad thing. It’s good because it is what provides immutability of data, and that is mission critical for any payment system. It’s also good because it allows others simply to act on transactions that appear on their nodes. There is no need to understand routing; the blockchain replicates to the relevant nodes/banks and hey presto, they have the data.

But now for the bad. Data replicating to all nodes is a bad idea for two primary reasons. The first is that data sovereignty becomes tricky, especially when you span multiple jurisdictions and geographies. We have seen within financial services, time and time again, that data needs to be sovereign in terms of storage location and that even data paths must be known. This immediately makes the blockchain hard to adopt in the real world, especially for a cross-border situation, or even if you have foreign entities able to interact with that blockchain. The second is that everyone with access to a blockchain node has access to the data. So all the participants on your blockchain can read each transaction (as I said a moment ago, that can be a good thing). However, it also means that I can see my competitors’ trades. I can see what other banks are doing on behalf of their customers. With some simple volumetric tooling, I can quickly and easily take that data and essentially gain insider trading information.

The bad here outweighs the good, but the good points must not be lost.

Now, if point 3 is also untrue (RTGS.global doesn’t use any DLT blockchain technology) then where DOES RTGS.global use the blockchain?

Where the Blockchain is used

RTGS.global has three specific use cases where it leverages blockchain technology. Let’s start with ledgers and immutability of data. As I said, this is mission critical stuff for a payment system, so RTGS.global DOES use the blockchain to provide immutability of data. However, we use it in a way that isn’t the common approach…

RTGS.global runs currency ledgers all across the globe; e.g. a GBP ledger is run for all transactions that involve GBP. That ledger is sovereign within the UK. A EUR ledger is in the EU and, yes, the USD ledger is in the USA. Each one of these ledgers is based on Azure SQL DB Ledger. I have blogged about this previously, and you can read that post here.

Azure SQL DB Ledger gives RTGS.global all the typical flexibility that a SQL database gives you; however, it backs into a private blockchain underneath. Private is the key here. The blockchain nodes are not accessible by participants, and they don’t need to be; rather, data is pushed into the blockchain as it is written to the SQL DB. The blockchain therefore provides an immutable ledger, one that resides in that specific jurisdiction and one that doesn’t replicate data across the RTGS.global Settlement Fabric.

RTGS.global therefore uses a blockchain per currency that is within the Settlement Fabric. So yes, RTGS.global for sure uses Blockchain technology….

In addition, blockchain technology is also used to provide the identity of participants within the Settlement Fabric. ID Crypt Global, an RTGS.global partner, provides sovereign digital identity capabilities utilising a public, globally spanning – you might call it traditional – DLT blockchain. This allows participants to discover and identify other participants on the network, and this feeds into the RTGS.global Correspondent Banking 3.0 implementation.

Finally, and yes, more blockchain technology: RTGS.global utilises the blockchain to secure Settlement Fabric itself. The blockchain provides a decentralized approach to security. A centralised Public Key Infrastructure (PKI) is replaced with a decentralized approach (DPKI), which enables true end-to-end encryption and immutable proof that messages have not been tampered with. It also removes many operational processes associated with security and replaces them with automation. Blockchain technology here also ensures that security keys are short-lived, typically being rotated in hours as opposed to annually!

Final Blockchain thoughts…

In a nutshell, RTGS.global doesn’t use DLT or Blockchain technology to route messages around the world, nor to act as a shared ledger or for the actual settlement of transactions. Therefore, you will not see RTGS.global claiming to use the Blockchain to assert settlement, which is why most will say RTGS.global doesn’t use the Blockchain.

However, RTGS.global uses Blockchain technology where it works best, to provide immutable truth of identity, decentralized security, end-to-end encryption and yes, immutability of financial transactions within a specific currency.

The need for immutable data, but the challenge of residency

In every industry there is a real push to make sure you can be confident in the data that you hold. That means understanding how data is captured and stored, making sure it’s accurate and complete, and then ensuring you have timely backups. An old mentor of mine always said the challenge with data is, “shit in, shit out”. His comments were as true then as they are today. How many systems and platforms around the world, in every industry (especially financial services), are hampered by poor data integrity, out-of-date data or, even worse, fields of data that just have garbage in them, or nothing at all?

Now, how many systems struggle with data because it has been unexpectedly changed? The output is inaccurate because data has been corrupted or, worse still, tampered with. In the world of financial services, the ability to ensure your data is all present and correct is almost everything. Because of this challenge, there has, over the past years, been a real drive by certain technologies to push themselves as the de facto solution for immutability – I’m thinking blockchain here. Not only does blockchain make a strong case for being the “ledger” of truth for financial services, it also makes a strong case for its ability to ensure operational resiliency.

On the surface here, blockchain may look like the perfect solution, but there are several challenges associated with that. Is there a better way than using a typical blockchain architecture? Does it really fit with financial service providers, more importantly, does it fit well with financial market infrastructure (FMI) providers? This one is of great interest, as many push a blockchain architecture to address current challenges with cross-border payments. But, is there something else?

Let’s look at the three big challenges for ledgers in ensuring the data they hold is all present and correct:

Data corruption

When we were setting up ClearBank, I had many a discussion with the Bank of England, the regulator and certain payment systems regarding data corruption – especially with regard to data corruption being replicated across nodes. The point is highly valid: corruption can, in some implementations, replicate across your multiple nodes, and therefore into your backups. That makes understanding your RPO (Recovery Point Objective) especially hard, simply because you need to understand at what point that corruption took place.

Cyber attack

A big challenge with a cyber-attack, especially when it targets your underlying datastores, is: has the attack damaged your data – has it been altered? Again, this can be tough to address, especially with regard to your RPO. Sure, there are ways to capture these changes, looking through logs etc., but you ultimately rely on your data backups at this point, and again on identifying which data backup is accurate / pre-attack. This really can play havoc with your RTO (Recovery Time Objective) and RPO, as there is simply a lot of work that must be done to identify where data has been changed and the impact that change has had going forward. In the world of financial transactions, you can’t just unwind every debit/credit that occurs after the point of tampering.

Insider threat

This is pretty much the same issue/risk as that which is presented by a cyber-attack. The difference here is, your logs may have easily been tampered with too, since the threat has come from an internal source. Granted this could also be true with a cyber-attack, but it is more commonly associated with an insider threat. So, in this case, how do you prove your data is correct?

The solution: An Immutable datastore, an immutable ledger

Most financial services ledgers are not immutable. There will be lots of operational and some technical workarounds to try to ensure data isn’t being tampered with, but at the end of the day, in the main, the data held in the world’s financial systems is not immutable.

Immutability of a ledger is common with hash chains and blockchains alike. So clearly the technology, which has been around a lot longer than most are aware, could be a great way to prove data is accurate and that it hasn’t been – or, even better, cannot be – tampered with. For this section, I am going to split the discussion into a typical blockchain approach and a not-so-typical blockchain approach…

A traditional blockchain solves the issue of immutability, but it sadly presents a few other issues, including:

  1. The integration overhead (cost and lines of code to maintain)
  2. Underlying infrastructure requirements
  3. IT support considerations, good old People Process and Technology (PPT)
  4. System performance considerations (your choice of blockchain – will it be quick enough)
  5. Data residency, where is the data being replicated

The five points above are major deal breakers; they render a blockchain-based solution as pretty much a non-starter if your system is a high-volume transactional system that spans different legal jurisdictions (geographies). This hasn’t stopped many experts, central banks, banks etc. investigating just what the technology could deliver in this space, and rightfully so. The quest for immutability and resiliency is always going to be core to any transactional system, especially a payment system.

A traditional blockchain solution would provide several nodes across the geographies in which you want the system to operate, which means spanning different legal jurisdictions if you want something that could work for cross-border use cases. Several nodes provide great resiliency, and the nature of a blockchain provides that immutable ledger. However, simple distance between nodes introduces latency, and that’s before you look at the blockchain implementation itself. The location of the nodes also causes many challenges regarding data replication and therefore residency. Can you imagine a regulator in one jurisdiction demanding to see data on all aspects of your solution because data is no longer located just in that specific jurisdiction? We’ve seen numerous examples of this inside and outside of the financial services sector. So right now, data replication and residency is a deal breaker for the technology.

So, while a traditional blockchain solves the immutable ledger challenge, it introduces many other challenges. All too often we see examples of technologies desperately seeking a problem to solve; in some ways this is blockchain in the traditional implementation sense, especially within the financial services sector. I personally believe in trying to solve the problem and then identifying the technology that best fits… With that in mind, how have we solved this at RTGS.global? What do I personally believe is the right approach/solution?

Database first, immutable ledger second…

A traditional database is suited to high transactional throughput and data storage. It has been the go-to tech for decades now, and with the evolution of this technology, like Microsoft’s in-memory Azure SQL tables, the performance of a database is lightning fast. Couple this with micro-service-based architectures and, really, it’s hard to see a better technology for high-transaction systems. Database technology now also includes concepts of nodes and high availability; couple this with a good old backup strategy, and almost all of the challenges are solved. Now, if you are smart with your implementation, then you can also ensure data residency, and that’s true even for cross-border solutions like RTGS.global.

The one challenge remaining is verifying that data is accurate, that it hasn’t been tampered with. This is where I need immutability. But the ask here is subtly different, which gives you room to use technology in a different fashion. Here I am asking to prove that the data in a traditional data store is accurate; I am not asking for the primary data store to be immutable, and I am not using that primary data store as a form of messaging across geographies… The solution is therefore a way of utilising an immutable ledger (like a blockchain) to confirm / verify the accuracy of my database. Now, if you can do this, then you have all the benefits of traditional database systems, including the ease for developers to use them, coupled with all the benefits of a blockchain. Enter Azure SQL Database Ledger.
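The underlying idea of verifying a conventional store against an independent immutable record can be sketched in a few lines. This is a conceptual illustration only, not Azure SQL Database Ledger itself: hash the rows as they are written, anchor the resulting digest somewhere append-only, then recompute and compare whenever you need proof.

```python
# Conceptual sketch: verify a mutable store against an immutable digest.
# Not Azure SQL Database Ledger itself, just the underlying idea.
import hashlib

def running_digest(rows):
    # Chain each row's hash onto the previous one, like a simple hash chain.
    digest = b""
    for row in rows:
        digest = hashlib.sha256(digest + repr(row).encode()).digest()
    return digest.hex()

# At write time: compute the digest and anchor it in append-only/immutable storage.
ledger_rows = [("txn-1", "GBP", "250000.00"), ("txn-2", "USD", "310000.00")]
anchored_digest = running_digest(ledger_rows)

# At verification time: recompute from the database and compare with the anchor.
assert running_digest(ledger_rows) == anchored_digest, "data has been tampered with"
print("Database content matches the anchored digest")
```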

Azure SQL Database Ledger

When we first started working with Microsoft on RTGS.global, our ask was a way in which to prove data was accurate. We had assessed many blockchain implementations and concluded that the technology added more obstacles to building out RTGS.global than challenges it solved. So our ask of Microsoft was: how do we prove immutability and verify our data is accurate? Luckily, we were able to work with them on Azure SQL Database Ledger.

Now, for those of you who do not know what this is, at a macro level it is essentially an Azure SQL database. However, the data that you capture is also backed off into a blockchain implementation asynchronously. This means the blockchain doesn’t slow you down, either in terms of how long it takes to adopt the technology or in terms of transacting. In this case, the blockchain is in the background building out the immutable ledger.

The ledger can build “digests”, which are themselves immutable, as they are stored and accessed within Azure immutable storage. A digest can then be verified against the data held within the database by running a simple Stored Procedure (SP). The output confirms that the data within the DB is accurate and that it hasn’t been tampered with, and therefore you have solved your immutability challenges.
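As a rough sketch of what running that verification from application code could look like, here is a pyodbc example. It assumes the documented Azure SQL Database ledger procedures for generating and verifying digests; the server name and credentials are placeholders, and you should check the current Microsoft documentation for the exact procedure names and arguments in your environment.

```python
# Sketch of driving ledger digest verification from Python via pyodbc, assuming the
# documented Azure SQL Database ledger procedures. Connection details are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:example.database.windows.net;Database=ledgerdb;"  # placeholder server
    "UID=<user>;PWD=<password>;Encrypt=yes;"                      # placeholder credentials
)
cursor = conn.cursor()

# Generate the current database digest (normally this is shipped to immutable storage).
cursor.execute("EXECUTE sys.sp_generate_database_ledger_digest")
digest_json = cursor.fetchone()[0]

# Later, verify the database content against a previously stored digest.
cursor.execute("EXECUTE sys.sp_verify_database_ledger ?", digest_json)
conn.commit()
print("Ledger verification completed without error: data is intact")
```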

At RTGS.global we run a distributed approach to our data storage: essentially, each jurisdiction in which we operate has its own Azure SQL Database Ledger, which follows a high availability model within that jurisdiction. Data residency is within the specific jurisdiction, with the blockchain that backs Azure SQL Database Ledger also being private and located in that jurisdiction only. We also showcase to central banks and regulators the immutability of the data within our ledgers by providing a portal view of the digest verifications that we run.

For us, the solution is simple, elegant and highly powerful; it solves the challenges of performance, residency and people, process and technology, and obviously delivers immutable data storage.

Many of the world’s financial systems’ ledgers are not immutable, and that’s a grave concern and a big risk. However, a traditional blockchain ledger isn’t the solution. The solution is to use blockchain technology to prove data integrity – to use the technology for what it is great at doing, delivering immutability.

For more information on how RTGS.global uses Azure SQL Database Ledger, check out the Microsoft case study: Microsoft Customer Story – RTGS.global automates financial transfers, unlocks trillions in capital with Azure. If you want to look at the technology in more detail yourself, this is a good place to start: Announcing Azure SQL Database ledger – Microsoft Tech Community

For my very technical readers, if you really want to get under the hood of the technology, then read these two blog posts from Jason Anderson

Azure SQL Database ledger PART 1 by Jason M. Anderson – SQLServerGeeks

Azure SQL Database ledger PART 2 by Jason M. Anderson – SQLServerGeeks

The need for true resilience of payment systems

The world is becoming smaller, and as a result everything we do, or want to do, is faster; in many ways both points are a direct result of the world we live in becoming increasingly digital. Our expectations, be that as a consumer or as a large multi-national company, are that things should be instant and always available when we need them to be. In the world of financial services, these expectations are more than just expectations: we are starting to demand immediate, always-on, always-available services. Payment systems, and by extension settlement services, therefore must deliver immediate capabilities, capabilities that are always on and always available – not sometime in the future, but now.

The challenge though is one of operational and cyber resilience for both payment systems and settlement services. Having something that is always available, always on, always working and delivering immediacy is tough and expensive. But that is the demand and expectation we all have of our banks, and by extension the payment and settlement infrastructure our banks use. Regulators around the globe are starting to put additional pressure on banks and payment systems to meet these demands. And to be frank, rightfully so.

On 23rd October 2020, TARGET2, the high-value payment system that services Europe, went offline. This caused no end of transactional pain and frustration, and no doubt came at a cost to many businesses and some individuals. The pain was felt not just on domestic European transactions, but also on international transactions coming into Europe. The outage was caused by a “third-party network device” in the internal network of the Eurosystem of national central banks. Interestingly, the failure didn’t lead to a failover situation, with TARGET2 “backup” systems taking over. Essentially, resiliency plans didn’t work. The impact was still being felt by some SEPA transactions three days later.

TARGET2 isn’t the only high-profile failure in recent years. If you spend some time online you can find lots of instances of bank IT outages, central bank outages and payment system outages. In 2014, the Bank of England’s core RTGS system suffered a nine-hour service outage. This time the root cause was defects introduced by changes made to the RTGS system in April and May of the previous year. The failover into MIRS (the contingency solution) wasn’t undertaken. On New Year’s Eve the year before that, the UK clearing house suffered a major IT outage, with the nature of the problem creating obstacles to reverting to contingency arrangements. What’s consistent across these outages, and what is important to note, is that contingency solutions were not triggered, or were unable to be triggered. This effectively renders them very expensive systems – systems that sit idle, carry out zero workloads and, at the time of need, seem not to be usable… Resiliency therefore needs to be looked at in a different light from a traditional Disaster Recovery (DR) failover.

CPMI: Cyber resilience in financial market infrastructures

It seems an age ago, but in November 2014 (yes, six years prior to the TARGET2 outage) the CPMI (Committee on Payments and Market Infrastructures) published a paper on cyber resilience in financial market infrastructures. The report goes into detail regarding “why cyber risks are special”, how to adopt an “integrated approach to cyber resilience” and, more interestingly, “sector-wide considerations”. One of the key takeaways in this section is titled “Non-similar facility”. Here, the report identifies resiliency failings if you use contingency solutions that share components with the primary solution. The thinking is: if a shared component fails in the primary, it will be impacted, or worse, fail totally, in the contingency solution too. This needs to be looked at across every layer within the payment system stack, from the underlying networking to the platforms used, data storage and even how banks connect into the payment system.

The non-similar facility (NSF) seeks to replicate the core functionality provided by the primary service; it need not provide all the same capabilities, but the core needs to be covered. This means that in the case of a failure, service outage or cyber security event, banks could switch over to the NSF. The report goes on to state:

“An NSF could create a backup of an FMI’s data to facilitate resumption of operations after data corruption, with services running independently of the FMI’s primary system (and hence remaining uncorrupted). This may require an independent communication channel. One possibility could be for an FMI’s participants (or other holders of its data) to send their data directly and separately to two different facilities (e.g. the primary facility and the NSF)…”

A key element of an NSF is its ability to hold and store data independently of any data store the primary solution interacts with – or replicates to. The CPMI report rightly identifies the challenges of data corruption, specifically when caused by a cyber event. As such, an NSF should have a source of data independent of the primary data store, utilise different data technologies and be housed on different infrastructure, thereby ensuring any corruption doesn’t impact the NSF.

In all the cases I have highlighted in this post, and in many you can find online, an NSF would have meant services had a very limited period of unavailability. Interestingly, if the participant banks had a separate submission channel – one that was being constantly used – then you could argue that the downtime experienced would have been measured in minutes, maybe even seconds, in all cases.

How can payment systems deliver a non-similar facility?

This has historically proven to be a tricky point, largely down to how the payment system was built (a proprietary build) or to the limited communication channels used across the sector. For example, many high-value payment systems are built around SWIFT messages and the SWIFT network. Many failover options again come from SWIFT, sharing the same underlying components, networking, data store technologies and, of course, the submission channel itself. In other geographies there may be a proprietary payment system build, but the failover is provided by the same provider. If you look at most Faster Payment solutions (the UK’s FPS included), the failover is a secondary site built by the same company with the same underlying shared storage components/technology, networking, business and application logic. So in all of these cases you aren’t delivering an NSF.

In some discussions, payments experts tout CBDCs as that resiliency option; however, if we look at that argument, we see far more failings than potential for a real solution. The first point is that CBDCs aren’t FIAT currency, and therefore a payment system failing over to a CBDC infrastructure would be fraught with challenges and many questions that need to be thought out. Secondly, the investment needed in a CBDC infrastructure used just for payment system failover makes it immediately prohibitive, and that’s before we look at the costs that would be borne by participant banks. Before we go into this debate and unpack it further, these two points alone mean the argument for CBDCs as a resiliency solution simply makes no sense. And when we add in a third point, that there are already NSF solutions out there, then the CBDC discussion is a non-starter…

RTGS.global ARK

One of the NSF solutions that central banks and payment systems can utilise is the RTGS.global ARK product. RTGS.global provides a core product to participants around the globe that allows them to source liquidity in central bank funds, on demand, and to make immediate payments 24x7x365 even if the central bank systems are closed. Banks connect to the RTGS.global network through a highly available connector hosted in the Microsoft Azure Cloud. RTGS.global implements a high-availability model, which sees services and data stores available from three independent physical data centres (availability zones) separated by 10-40km within the specific geography for that jurisdiction. What I mean is: the RTGS.global system is distributed across the globe, but data residency and network connectivity are local, i.e. banks in the USA have their connectivity and data residency in Azure facilities within the USA, and UK banks connect to Azure UK facilities across the UK.

Some see an ARK as a very large life-boat – I see it as being prepared for anything the storm can throw at you

When it comes to ARK, payment systems and central banks get a true NSF, one which has an independent communication channel for participants that is already being utilised, is already on and already available. ARK provides non-similar capabilities across the entire stack, including:

  • Networking
  • Data centres and sites
  • Platform OS
  • Data storage (immutable)
  • Application layer logic
  • Business logic
  • Participant communication channel

ARK may not provide all the capabilities that the primary system delivers; however, the core capabilities are provided. There are no shared components whatsoever with a central bank’s core systems or a domestic payment system. I cannot stress enough that this includes the fact that there is zero usage of, or dependency on, SWIFT messaging or the SWIFT network.

ARK can be used as a cold standby, or as a hot/active standby where it already contains the transactions that have taken place today. There are also two different methods of populating transactions into ARK: one comes directly from communications received by the central bank/payment system, the other is a replica of the transaction coming from the participant bank source itself. This provides real implementation flexibility. It is also worth pointing out that data stored in ARK isn’t the result of data replication from the primary; rather, it is a store of the messages the primary solution took in. The ARK data store is also immutable and therefore protected against any corruption of data that the primary may suffer.
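From the participant's side, the second population method amounts to dual submission: the same instruction goes to the primary payment system and, over the existing connector, to ARK, so the NSF already holds today's transactions if traffic ever needs to be re-routed. The endpoints and payload below are invented for illustration; this is a sketch of the idea, not the actual ARK interface.

```python
# Invented endpoints and payload: a sketch of submitting the same instruction to
# both the primary payment system and the ARK channel over the existing connector.
import requests

PRIMARY_URL = "https://primary-rtgs.example/submit"      # hypothetical primary channel
ARK_URL = "https://ark-connector.bank.internal/submit"   # hypothetical ARK channel

def submit(instruction: dict) -> None:
    # The primary submission drives settlement today...
    requests.post(PRIMARY_URL, json=instruction, timeout=10).raise_for_status()
    # ...while the independent ARK copy keeps the NSF populated and ready.
    requests.post(ARK_URL, json=instruction, timeout=10).raise_for_status()

submit({"msgId": "BANKA-0001", "amount": "1000000.00", "currency": "EUR"})
```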

The real beauty of ARK, though, is the ability for participants to have an independent communication channel, one that is already being utilised for other communications. This connectivity piece alone means banks are already using their connectors daily; the technology and the investment they have made is not just sitting there gathering dust, waiting for a failover situation, it is delivering value back to the banks every single moment. The connector also gives banks confidence that, in the scenario where ARK needs to be used, their systems are ready and able to utilise it – there is no failover, rather a re-route of some payment traffic to a different connector.

The arrival of RTGS.global ARK means payment systems and central banks now have a true NSF solution that they can utilise, one that meets the recommendations of the CPMI and one that delivers value back to participants, as it provides connectivity into the wider RTGS.global network. In the same way that the core RTGS.global solution is built for today but ready for the future, so too is ARK. ARK supports not just FIAT transactions; the same ledger can also be used to support other assets, yes, including CBDCs. Oh, and it can be deployed in minutes…

For more information on ARK contact RTGS.global directly.

The need for cross-border payment message harmonisation

There are many challenges in the wider world of cross-border settlement, and these same challenges flow into most examples of international trade, none more so than cross-border payments itself. One of the many challenges relates to the actual data, or more importantly, how that data is represented.

To date, the data associated with cross-border payments and settlement is, shall we say, quite “fluffy”. By this I mean that most banks provide data in different ways; their interpretations of message formats can vary, and this lack of harmonisation adds friction to the process. Legacy message standards are not tightly defined, and as such, banks construct their messages in different ways. When you have varied data messages, machines struggle to interpret them; they see the message format as very black and white. The result is lots of examples where messages cannot be processed by banks’ core systems, ending up in exception queues in the hope that a human can “fix the message”. This in itself significantly increases the risk associated with transactions, but let’s not turn this into a risk post. All of this requires resources, takes time and ultimately increases banks’ costs – which, unsurprisingly, are more often than not passed on to the end customers.

ISO 20022

The saviour of messaging standards in financial services is ISO 20022. However, even when following an international standard, there is so much room for interpretation that the message harmonisation needed to facilitate better cross-border payment capability simply isn’t there.

However, what ISO 20022 does deliver is a great starting point. The standard incorporates a great data dictionary, and there are many aspects where it is easy to carve out a “core” message. In the world of IT, this means you can tightly couple the message definition, which in turn makes it highly interoperable with other systems and enables bank-to-bank communications across systems. So ISO 20022 in the right hands, with a tight core definition, does deliver on that promise of being “the language between financial institutions”.
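To illustrate what a tight "core" definition can look like in practice, here is a stripped-down payload using element names drawn from the ISO 20022 pacs.008 credit transfer message. It is illustrative only and is not RTGS.global's actual message schema; a tightly defined core simply means every participant populates the same small, mandatory set of elements in the same way.

```python
# Illustrative "core" subset of an ISO 20022 pacs.008 credit transfer, expressed as a
# dictionary. Element names follow the ISO 20022 data dictionary; this is not
# RTGS.global's actual message schema.
core_credit_transfer = {
    "GrpHdr": {
        "MsgId": "BANKA-20240101-0001",
        "CreDtTm": "2024-01-01T09:30:00Z",
        "NbOfTxs": "1",
    },
    "CdtTrfTxInf": {
        "IntrBkSttlmAmt": {"Ccy": "GBP", "value": "250000.00"},
        "Dbtr": {"Nm": "Example Importer Ltd"},
        "DbtrAgt": {"FinInstnId": {"BICFI": "BANKGB2L"}},
        "Cdtr": {"Nm": "Example Exporter Inc"},
        "CdtrAgt": {"FinInstnId": {"BICFI": "BANKUS33"}},
    },
}
```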

Harmonisation building blocks

The CPMI, in its work on how to enhance cross-border payments, identified 19 building blocks, each with a vital role to play. A number of these look at standards: the harmonisation of ISO 20022 message formats, the harmonisation of API protocols, and the establishment of consistent and unique identifiers for messages.

When designing RTGS.global, we had several core building blocks that we had to incorporate into the solution design; without them, the network wouldn’t function as efficiently as we wanted. You will not be surprised to read, then, that consistent messaging, data formats and API harmonisation are core to the RTGS.global solution.

The RTGS.global network operates in a distributed fashion; I say distributed because there isn’t one central infrastructure, but the network doesn’t use DLT (Distributed Ledger Technology). Part of that distributed nature is the set of components that enable a participant to communicate with the network, deployed as a network connector into a bank’s cloud tenant. This connectivity can be established in approximately 45 minutes thanks to one-click installation capabilities, a result of embracing modern DevOps processes. These components force banks’ systems to:

  • Follow a standardised and tightly defined ISO 20022 compatible message format
  • Embrace a singular harmonised modern API protocol
  • Utilise digital identity to establish unique identifiers and remove the need for some proxy registries

Clearly these components only enforce the standards on participants’ integration with the network, but the point is that all banks on the network operate in the same way: they follow the same operating rules, their messages are standardised, and there is no room for interpretation. This harmonisation across the RTGS.global network ensures far greater levels of transparency, accuracy and straight-through processing (STP), driving down operational costs for participants and, ultimately, potentially reducing costs for account holders.

In summary

Cross-border trade, payments and settlement are only possible when participants can communicate effectively. The RTGS.global network recognises the need for tightly coupled standards to ensure those communications are effective and efficient, and that is achieved by being consistent across all participants. At RTGS.global we’ve worked hard on making the network work as efficiently as possible, investing a great deal of time and effort in delivering a harmonised integration approach that enforces a core, tightly coupled implementation of ISO 20022. The network connector is the key to enforcing this right across the entire global network and all of its participants.

As a final thought, an old mentor once told me, “when you’re dealing with computer systems, remember, if you put sh** in, you get sh** out”… His words were true then and they remain true today: without tight control over the messages that flow between banks, you cannot expect great outcomes.

Links:

Connecting to the RTGS.global Network – FinTechAndrew – The blog (wordpress.com)

ISO 20022 | ISO20022

RTGS.global

Connecting to the RTGS.global Network

Just in case you aren’t aware of RTGS.global, I will spend a sentence or two providing a brief overview before diving into how regulated financial institutions can connect to the network and become a participant.

RTGS.global is a cross-border settlement infrastructure. That’s it in a nutshell. As a cross-border settlement infrastructure, RTGS.global enables and delivers various services relating to cross-border activity, ranging from intragroup liquidity and payments, intraday liquidity sourcing (in real time) and immediate cross-border payments, to settlement of intraday swaps and much more. The infrastructure removes settlement risk entirely and provides settlement finality (a technical term). There are also significant benefits for participant banks regarding the treatment of cross-border assets and their capital and liquidity treatment.

The vision:

“To eradicate market friction, risk and latency by transforming the infrastructure which underpins the world’s international payments”

Right, you are probably here to read more about technology.

Connectivity

For a participant to hook into the network, they have to connect their core systems (core banking, payment hub, etc.) to the network. Connectivity must be 100% private, so there is no traversing the public internet to make that connection. In addition, once data/traffic makes its way onto the network, it must remain totally private and encrypted end to end. Now, to achieve this on a global basis, there are only two options:

  1. Leverage MPLS infrastructure in various geographies
  2. Utilise a Cloud infrastructure that has its own dedicated backbone across the globe

Option 1 is the legacy approach: knitting together various telecommunication providers’ MPLS offerings to reach your data centres of choice. It’s how SWIFT, FPS and most financial market infrastructures around the globe work.

Option 2, however, has only very recently become available, and there is currently only one cloud provider with total coverage across all of its sites on its own dedicated, highly resilient infrastructure, and that’s Microsoft with the Azure cloud.

It is no surprise, then, that RTGS.global has collaborated closely with Microsoft to get the RTGS.global network up and running. The net result is that connectivity into the RTGS.global network MUST be via an Azure subscription that the participant owns and operates themselves. If as a participant you don’t use Azure, not to worry: you can connect from pretty much any other cloud infrastructure directly onto the Azure backbone, meaning you can communicate with Azure securely over private connections. If you’re still on-prem, again not to worry: MPLS with Azure ExpressRoute will ensure you can connect your on-prem infrastructure privately and securely to Azure.

RTGS.global Network Connectivity is via Microsoft’s cloud, Azure

Network Connector

The Network Connector is made available via the Azure Marketplace, which means it feels and acts very much like a native Azure component. The Network Connector is installed into a participant’s own Azure subscription, ideally one specifically reserved for integration with RTGS.global. Once installed, you have a totally secure and private connection between the participant’s Azure subscription and the RTGS.global network hub.

Installation of the Network Connector is pretty much a one-click affair; it takes just a few minutes to get the connector deployed and connected. Compare this with any other payment or cross-border network, where onboarding is measured in weeks.

RTGS.global operates a distributed network, with jurisdictional hubs in specific locations across the globe. This means participants’ connectors connect into a local jurisdiction, not some centralised datacentre.

Once installed, you can communicate directly with the Network Connector via its gRPC API. However, this isn’t recommended; it is far quicker and easier to utilise the SDK or the Open-Source Framework.
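For those who do want to talk to the connector directly, the general shape of a gRPC call looks like the sketch below. The service, method and message types here are placeholders I have invented for illustration; the real contract comes from the proto files shipped with the connector, and in practice the SDK or Open-Source Framework wraps all of this for you.

```python
import grpc

# Hypothetical stubs generated from the connector's proto files with grpcio-tools;
# the actual service and message names will differ.
import connector_pb2
import connector_pb2_grpc

def request_liquidity_lock(connector_address: str) -> None:
    # Mutual TLS / channel credentials would be configured here in a real deployment;
    # an insecure channel is shown only to keep the sketch short.
    with grpc.insecure_channel(connector_address) as channel:
        stub = connector_pb2_grpc.ConnectorStub(channel)
        request = connector_pb2.LiquidityLockRequest(
            currency="USD",
            amount="1000000.00",
            counterparty_id="participant-bank-123",
        )
        response = stub.RequestLiquidityLock(request, timeout=5.0)
        print("lock reference:", response.lock_reference)

if __name__ == "__main__":
    request_liquidity_lock("localhost:50051")
```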

Securing your connection and identification 

Once you have a Network Connector, you need to deploy an RTGS.global ID Crypt security component (IDC Agent); this forms part of the wider security implementation known as DDPKI (Dynamic Decentralised Public Key Infrastructure). I will post separately about this technology sometime soon.

The IDC Agent requires an AKS cluster to be available; you can configure a cluster yourself, or you can let the installation templates set one up for you. The agent should run as multiple pods, with those pods spread across multiple Availability Zones within Azure for high availability.
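As a small, hedged sketch of how you might verify that zone spread once the cluster is up, the snippet below uses the standard Kubernetes Python client to list the agent's pods (the namespace and label selector are assumptions of mine for illustration) and map each one to the availability zone label of the node it landed on.

```python
from collections import Counter
from kubernetes import client, config

def pods_per_zone(namespace: str = "idc-agent", label_selector: str = "app=idc-agent") -> Counter:
    """Count how many agent pods run in each Azure availability zone."""
    config.load_kube_config()  # uses your current kubectl context for the AKS cluster
    v1 = client.CoreV1Api()

    # Map node name -> availability zone from the standard topology label
    node_zone = {
        node.metadata.name: node.metadata.labels.get("topology.kubernetes.io/zone", "unknown")
        for node in v1.list_node().items
    }

    pods = v1.list_namespaced_pod(namespace, label_selector=label_selector).items
    return Counter(node_zone.get(pod.spec.node_name, "unscheduled") for pod in pods)

if __name__ == "__main__":
    for zone, count in pods_per_zone().items():
        print(f"{zone}: {count} pod(s)")
```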

Again, installation here is measured in minutes, not weeks or months.

Utilising the SDK and test harness (PoV)

As participants start their journey with RTGS.global, there is a wider PoV (Proof of Value) step available; it’s also step 1 on the integration journey. The PoV includes the Network Connector, security components and the SDK, along with a test harness, all deployed within the participant’s Azure subscription. The PoV enables liquidity requests, liquidity locks, settlement processes and payments to be made across the network without the need to integrate core systems.

Again, this is a one-click deployment, with the test harness deployed into the same AKS cluster as the IDC Agent.

PoV total installation time?

Getting up and running with the PoV (which includes all the components that would be required for production) typically takes under 10 minutes! However, for a first attempt, setting configuration and reading the documentation can take some time; first-time installation of all the components that form the PoV takes around 45 minutes on average…

“YES 45 minutes to go from a blank Azure subscription, to one that is fully connected to the RTGS.global Network!”

What other payment system or cross-border solution lets you get connectivity up and running within 45 minutes? Another example of the efficiencies of a cloud-first approach and DevOps templates.

Integration options

In later posts I will talk more about how easy it is to integrate core payment systems, liquidity management dashboards and the like with the network. However, I should take a moment to outline the two main options:

  1. Direct SDK consumption
  2. Open-Source Framework and orchestration

Integration is available via the SDK, which does most of the heavy lifting to get you working securely on the network. However, many cloud-based software solutions – including core banking systems, payment hubs and the like – are moving to an event-driven orchestration model.

The Open-Source Framework is a set of templates and components that RTGS.global is open sourcing. When using this framework, participants can integrate their core systems with RTGS.global simply by posting messages onto an orchestration technology such as Service Bus or Event Grid. In the same way that you can post messages, you can also subscribe to topics and receive events containing the necessary information from the network.
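To illustrate the orchestration model, here is a hedged sketch using the azure-servicebus Python SDK: a core system publishes a payment instruction to one topic and subscribes to another for events coming back from the network. The topic names, subscription name and message shape are placeholders of my own; the framework defines the real contracts.

```python
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONNECTION_STRING = "<your-service-bus-connection-string>"

# Topic and subscription names are illustrative placeholders, not the framework's real ones
OUTBOUND_TOPIC = "rtgs-outbound-payments"
INBOUND_TOPIC = "rtgs-network-events"
SUBSCRIPTION = "core-banking"

def publish_payment_instruction() -> None:
    """Post a payment instruction onto the orchestration layer for the framework to pick up."""
    instruction = {"end_to_end_id": "E2E-0001", "currency": "USD", "amount": "1000000.00"}
    with ServiceBusClient.from_connection_string(CONNECTION_STRING) as sb:
        with sb.get_topic_sender(topic_name=OUTBOUND_TOPIC) as sender:
            sender.send_messages(ServiceBusMessage(json.dumps(instruction)))

def consume_network_events() -> None:
    """Subscribe to events coming back from the network, e.g. settlement confirmations."""
    with ServiceBusClient.from_connection_string(CONNECTION_STRING) as sb:
        with sb.get_subscription_receiver(
            topic_name=INBOUND_TOPIC, subscription_name=SUBSCRIPTION
        ) as receiver:
            for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
                print("network event:", json.loads(str(msg)))
                receiver.complete_message(msg)

if __name__ == "__main__":
    publish_payment_instruction()
    consume_network_events()
```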

The Open-Source Framework provides participants with a significant helping hand in getting their systems integrated and leveraging the many benefits of the network quickly and efficiently.

Summary

This post provides a quick-start overview of connecting to the RTGS.global network, including the steps for executing an integration-“light” PoV. More detailed posts will follow from the engineering team when the SDK and Open-Source Framework come out of Private Preview and into GA.

If you are interested in connecting and becoming a participant within the RTGS.global network, reach out to the team at www.rtgs.com
