Agile Enterprise Architecture Management

Delivery Thinking Enterprise Architects (DTEA)

In my last two articles on this topic I mentioned the interesting shift in Gartner’s definition of EAM from 2013 to 2018, towards innovation enablement and disruptive challenges.

I also mentioned some initiatives from Germany that are creating new approaches to EAM on either a semi-practical/academic or a fully academic level.

If you look at how agile EAM is understood in other parts of the world and how big technology companies actually do EAM you might get a better understanding of how this topic has evolved and where this shift in Gartner’s definition comes from.

EAM derived from Disciplined Agile Delivery (DAD) is an interesting approach to have a deeper look at. It is amazing how pragmatically it is crafted, and yet it does not seem to end up in chaos.

The principles for performing enterprise architecture agilely according to DAD are:

  1. Evolutionary collaboration over blueprinting
  2. Communication over perfection
  3. Active stakeholder participation
  4. Enterprise architects are active participants on development teams
  5. Enablement over inspection
  6. High-level models
  7. Capture details with working code
  8. Lean guidance and rules, not bureaucratic procedures
  9. Have a dedicated team of experienced enterprise architects

You can read more on this on the Disciplined Agile Delivery site.

We at deliverythinking.com fully agree with this approach and see ourselves as Delivery Thinking Enterprise Architects (DTEA) in exactly that sense.

Do Enterprise Architects Accomplish the Shift Towards Agility? Part II

Part II Agile EAM …

In the first part of this article I tried to demonstrate that mankind has been trying to manage complexity in enterprises for more than a hundred years, and that most of the processes, tools and techniques that we nowadays summarize under the discipline of “enterprise architecture management” existed before. My intention was to demystify Enterprise Architecture Management by explaining the ideas behind it from a historical perspective.

Part I stopped with explaining the shift of Gartner’s Enterprise Architecture definition.


Gartner’s new definition of Enterprise Architecture Management is a consequence of two main effects we have experienced over the last ten years.

EAM’s Reputation

The first is the sobering insight that the actual results of EAM do not match the hopes enterprises put into the topic. That doesn’t sound very surprising; it happens every time a new thing goes through the hype cycle. But in the case of EAM it was a bit more than that. Enterprise architects were known for sitting in an ivory tower without any deeper knowledge of either business or technology. The governance-centered approach and the concentration on frameworks were held responsible for slowing down innovation and hindering the use of the latest technologies to move the business forward.

New Digital Business Models

The second effect which forced a rethinking of the role of EAM was the rise of new technologies like cloud computing, mobile, big data, the internet of things, artificial intelligence and distributed ledgers (aka blockchain). Each of these trends created a whole universe of possible new so-called digital business models. The impact of the latter can even be read directly in Gartner’s new definition.

At the same time, agile software development and DevOps brought more flexibility and reduced time to market in software projects. Design Thinking and the Business Model Canvas brought new interdisciplinary and lean ways of invention, innovation and business development, for digital customer communications in particular.

Bimodal IT

But that’s not enough … Enterprises started to ramp up so-called bimodal IT organizations to have a proper answer to digital challenges.

EAM suffering from its reputation, the new digital challenges, the new methods, tools and techniques, and the task of bringing together the requirements of bimodal IT organizations all motivated people to rethink the way EAM had defined itself in the past.

Is Agile EAM the Answer?

There are several approaches to making EAM more lean, agile and pragmatic. Nearly all of them postulate focusing on business value rather than putting architecture frameworks in the foreground. Enterprise architects need closer communication with agile teams to understand their demands and to learn about the latest architecture developments earlier. Another pragmatic proposal is to create an artefact only if there is a stakeholder for it.

In terms of tools and techniques it is no surprise that new approaches try to integrate the design thinking and business model canvas tool sets into enterprise architecture management.

Interestingly, some initiatives coming from Germany criticize the framework-centric approach of the past, but then they try to introduce new process models and method toolsets which they call “Business Architecture” or “Architecture Engineering”. Those toolsets try to integrate design thinking and business model canvas elements. These initiatives hope that new processes will deliver better results than the processes that came before with those existing frameworks.

This might have to do with the traditional way the discipline called “Wirtschaftsinformatik” has produced results in Germany. I still do not know how to translate it properly. Is it “information systems” or “business informatics”? I really don’t know. What I do know is that creating process models, methods and toolsets for a certain class of enterprise challenges has always been its core in Germany.

To me this doesn’t sound like a pragmatic lean agile approach though.

What’s agile EAM to me?

To me a more pragmatic approach sounds reasonable. Agile core principles must come first, not processes or methods:

  1. Define your goals and keep them in mind. Remember that your goals must create measurable value!
  2. Check your set of actions against your goals continuously. Do not do things because a process definition tells you to. Do things only if they help you achieve your goals.
  3. You’re not done when an artefact is created; you’re done when you have reached your goals.
  4. Always consider risks. You have to live with a high level of uncertainty and dynamics. Make use of anything that helps you reduce or mitigate risk, e.g. work iteratively, communicate often and make things transparent.
  5. Make use of common sense!

Enterprise architects and EAM organizations have to follow these simple rules to achieve a reasonably agile and pragmatic approach. They can then use processes and frameworks whenever it really makes sense. At that point it really doesn’t matter whether they use TOGAF or any of the new agile process models.

Enterprise Architecture Management has always been a matter of soft skills not a matter of processes and frameworks.

Do Enterprise Architects Accomplish the Shift Towards Agility? Part I

Part I let’s talk a little bit about history …

For more than a hundred years, almost in parallel with the rise of corporations, business administration as an academic discipline has offered ways to manage complexity in organizations. Although it has IT roots, Enterprise Architecture Management is one of the latest contributions to this field. I would like to take a short look at this history to give you an idea of what happened in the past in the field of managing complexity, and why.

Enterprise Architecture Management

Organization Theory

Organization theory was the first and very early approach; it splits an enterprise up into functions, tasks, and the workers who execute those tasks. It claimed that splitting up an enterprise can be done either by separating the different execution steps applied to the same task object, or by distinguishing the task objects to be treated in each task. Procedural instructions and operating procedures defined how a single task or a set of tasks should be executed.

Cybernetics, System Theory

In the 1950s cybernetics and system theory appeared. The latter, originally defined in the field of biology, found its way into many other disciplines. Both cybernetics and system theory influenced economics and information technology significantly. Enterprises began to be looked at as systems consisting of interrelated components and serving a certain function.

Organization theory dominated the management of complexity in enterprises for decades (actually until the 1970s). Emerging economic challenges as well as the rise of computers were the two main forces of that time which brought new methodologies to light from the 1980s on.

Business Process Engineering

Business Process Engineering aimed to increase efficiency and reduce time to market by introducing a new behavior-centric definition of an organization and considering the improvements information technology can bring. Processes (behavior) moved into the foreground instead of isolated functions (structure).

IT Architecture

On the other hand, almost at the same time in the 1980s, programmers started to think about managing complexity in software and IT systems by introducing a new discipline called “IT architecture”. Some postulated software architecture considerations; others introduced frameworks which aimed to systematically define the role of IT in an enterprise in terms of contributing economic value. All of those initiatives were IT-centric, though.

In organization theory’s terminology, enterprise IT systems are nothing more than a particular type of worker executing tasks. Hence, it is obvious why IT systems found their way into new approaches for defining an organization’s composition.

Convergence of Disciplines


In the 1990s there were a bunch of different business process modelling frameworks which allowed describing an enterprise from a business as well as an IT perspective. These were the beginnings of enterprise architecture management, without it being named that way. But in real-world implementations it was still an IT people’s playground. I remember times when we discussed with the business how to optimize processes and derive requirements for IT landscape transformation from those processes, but they didn’t understand.

Enterprise Architecture Management

In the 2000s complexity increased in IT departments. The internet spread widely through society and brought the net economy. New multi-tier architectures were established and open source was introduced for business purposes. Sustainability and stability of business as well as IT operations became very important.

Enterprise Architecture as a term was becoming established. Enterprise architecture at that time integrated methods, models and techniques that other management disciplines like organization theory, system theory, business process engineering and IT architecture had introduced before, and added some more. Enterprise architecture was about streamlining business functions, services and applications as well as technology, and was meant to contribute value towards achieving an enterprise’s strategic goals.

If you look at how Gartner used to define Enterprise Architecture back in 2013 and how they changed their definition in 2017, you may recognize a shift from ensuring sustainability and stability to enabling innovation and answering disruptions.

Gartner’s Definition 2013

“Enterprise architecture (EA) is the process of translating business vision and strategy into effective enterprise change by creating, communicating and improving the key requirements, principles and models that describe the enterprise’s future state and enable its evolution.”

Gartner’s Definition 2017

“Enterprise Architecture is a discipline for proactively and holistically leading enterprise responses to disruptive forces by identifying and analyzing the execution of change toward desired business vision and outcomes. EA delivers value by presenting business and IT leaders with signature-ready recommendations for adjusting policies and projects to achieve target business outcomes that capitalize on relevant business disruptions.”

So apparently “evolution” was not enough anymore. Is that actually a reaction to the impact that the mobile revolution and new disruptive digital business models had begun to have on the whole economy?

So what happened until then? Did Enterprise Architecture Management keep its promise according to the first definition in 2013? Is it going to be a proper answer to the latest challenges enterprises are facing according to the new definition?

In the next part of this series I will have a deeper look into why this shift of definition happened, what this has to do with agility and how this new approach has changed tools, methods and hopefully mindset (which is the most important thing) behind Enterprise Architecture Management.

Account Information Consent Flow

openpsd.org starts its first user story

Bootstrapping a new initiative is always exciting! You have to study specs and implementation guides, think about the components you need, and choose the technology you want to use. In our case, we decided to provide a test server for the Berlin Group’s NextGenPSD2 specification. You might think that we are going to provide yet another boring set of mock implementations of the REST API the Berlin Group is currently specifying. And I tell you no, we won’t do that! What we want to have in place is a set of components which allow you to go through a complex use case.

Consent
As consent is crucial to PSD2, we decided to start with the account information consent flow. This is a very complex, multi-dimensional process from both a business and a technical perspective. From the business perspective you need to consider that a consent itself carries different information, such as the type of account service it grants access to, the frequency of its usage and its validity duration. Then there are different consent models: detailed, global and bank-offered.

Technically, the Berlin Group defines redirect, OAuth2, decoupled and two different embedded strong customer authentication (SCA) methods with which a customer (PSU) may give its consent to the bank. The Berlin Group’s implementation guide defines a sophisticated sequence diagram for each of these SCA methods.

Berlin Group Redirect SCA
[Source: Berlin Group's Implementation Guidelines]

Our approach is to have components in place which allow going through all those steps in a complex test case. And that is a lot more work than simple mocks for REST API interfaces.

PSD-Server, SCA-Server, Modelbank
We have started to implement a modelbank, a PSD-Server and an SCA-Server. The modelbank will simulate an Account Servicing Payment Service Provider (ASPSP) in this user story. Other roles a bank can take will come in the future. Our target is a simple model bank which can provide account servicing and simple payment processing services. The PSD-Server provides the REST API, and the SCA-Server will provide the different SCA approaches mentioned in the Berlin Group’s implementation guide. Both the PSD-Server and the SCA-Server will use the modelbank’s services.

Go, Go, Go
We decided to use golang to implement all of our components. This is not because we hate Java, no. It is simply because of the dynamic progress golang has made over the last few years. We think it is one of the most promising programming languages right now. There are a lot of articles out there emphasizing golang’s strengths and comparing it with Java; we don’t want to go through that here.

Clean Architecture
In UHUCHAIN we already introduced our clean architecture approach for golang projects. We have continued this approach in our components here as well. All of our components separate technology-dependent code into a provider layer. They have controllers, use cases and entities separated from each other by input and output ports in each layer.

Implementation is not completed yet but we have reached our first milestone to bootstrap everything and have a clear understanding of what is going to be done next.

Platforms, Open API in Finance 4.0

Nice, but what can that look like in a real-world scenario?

A lot of articles in our blog deal with platforms and open API on a strategic level. We have talked a lot about them in the past. Many decision makers and IT people have never seen an actual platform working in their businesses. We point to GAFA almost every time to demonstrate how platforms can work. But people want to know how to tackle this technological challenge and where to start in their actual IT scenario.

  • What can a platform look like from a technical perspective?
  • Which different APIs are we going to provide?
  • How do we create APIs? How do we secure them?
  • How can we manage service versioning as services may evolve over time?
  • How can we manage service provisioning and limitations?
  • Can I do API-based batch jobs and batch job control as well?

Well, there is a place where I could see how all these questions get at least some practical answers that seem to work: I played around with Salesforce’s platform a bit.

Salesforce Trailhead

Salesforce is known as the cloud-based CRM solution in town, but technically it is much more than that. You can create and run any kind of business app on Salesforce’s platform. Many things like IoT, artificial intelligence and big data services come for free and can be used on top.

In Trailhead, Salesforce’s training platform, where you can find tons of different technical and business tutorials (“trails”) around Salesforce and CRM, I registered and started some specific trainings on platform and open API topics.

The good thing about these trails is that they often come with hands-on sessions where you really create technical artifacts. You immediately see how your results work on a training sandbox, a Salesforce platform instance created for you during registration.

The other thing I found amazing is that you can start nearly anywhere you want in becoming familiar with the platform. I decided to start with open API, IoT and mobile application development for the platform.

But so far I have enjoyed Trailhead’s gamification style most!

For IT guys in Particular …

Yes, you have to be an IT guy with at least some development background to get through those trails. But if you have that background, you will very quickly get a good idea of what an open API paradigm can look like in practice. If you want to see some real-world IoT scenarios, no problem. You can even build one yourself.

In Salesforce there are a lot of different APIs. Every business object, and even the custom objects you have created yourself, is available via a REST API. There is a Metadata API for customization and a Bulk API for mass data operations. Built-in security mechanisms are based on standards like OAuth 2.0. It supports synchronous and asynchronous calls. Versioning and access limitations are handled out of the box as well.

As long as you develop on the platform or access exposed APIs, you do not even need to install an IDE. Everything you need (REST Workbench, a development environment) comes with the platform itself. To play with the SOAP API I had to install SoapUI. For mobile app development you need tools like node, npm, git and cordova plus some additional Salesforce tools. That’s it.

Happy End?

I am not here to advertise Salesforce. Of course Salesforce will have its own pitfalls as well. This is not about “They introduced Salesforce and lived happily ever after!” Similar offerings may also be available on other vendors’ platforms such as SAP or IBM.

But my message to all those people who have to deliver results on platforms, open API and all the other 4.0 topics like IoT, big data and artificial intelligence is: take a few hours and play around with Salesforce’s offerings on Trailhead. You will definitely get some new impulses for your everyday work. At the very least you can see how others have solved issues you are thinking about as well. And for everyone who wants to see actual open API use cases beyond GAFA, it is really worth it.

You can have a look at my trailhead profile to see what I am talking about.

Do Banks have a Chance in the Platform Battle?

Will Amazon take it all in Banking as well?

A discussion on LinkedIn around open banking, which I have been part of, encouraged me to write a few words about this topic once again.

I gave a first comment on this earlier in “Digital Transformation to Keep up Revenue”, where I already mentioned that the winner will still take it all because of network economics.

This is exactly what people in that discussion pointed to. Existing platforms such as Amazon are, for many reasons, in a far better position than banks, in Germany in particular. And I absolutely agree with this. Banks need to open up their processes and services and integrate them into these existing platforms, as one of the comments explains. Another question I am asking myself is: what’s so new about this?

Back to the future …

I remember that around 15 years ago there was already a call for banks to transform from a value chain into a value network. In Germany there were a lot of papers under the name “Industrialization of Banking” about decoupling banking into three separate main processes (product development, sales and settlement), each of which can easily be recombined in value networks. In that concept one bank could develop products, another could offer them to the market and a third could settle those products, all loosely coupled. That was the time when the first banking factories for payment or securities processing came up.

If one takes the concept of value networks seriously, then it is only a small step to an open banking idea where a bank integrates its services into an existing platform such as Amazon.

Why hasn’t that happened yet then? Is there some unfinished homework to be done?

If you can’t beat them join them

Indeed, it may be too late for banks to become a platform themselves, but it is not too late to integrate with existing platforms. If you can’t beat them, join them.

On the other hand, as we know, existing platforms are becoming more and more powerful, and network economics tend to create monopolies by their nature. This is being observed by competition authorities too. I hope they will not stand still and will soon make sure that competition is upheld in this field.

An Introduction to Data Strategy

A short history of Data

Tons of data are produced at different enterprise levels every day, and many different ways exist to classify them. There is operational as well as analytical data, structured and unstructured data, internal and external data, actual data and metadata.

On the operational level one can find transactions, transactional states, transaction metadata, reconciliation data, master data, pricing and calculation data, data about operational risks, and customer classifications. Last but not least, there is a lot of business partner communication over different channels in several unstructured formats.

On the analytical level many enterprises have created data warehouses and data marts over the past 20 years. The first and foremost target of those early analytical data stores was to support comprehensive management information systems by creating flexible reporting facilities.

Data mining came into play a few years later. Techniques such as the automatic generation of decision trees using impurity measures like the Gini index, along with clustering algorithms, found their way into data analysis in the early days of data mining.
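For illustration, the Gini index mentioned above is simple enough to sketch in a few lines of golang: it measures how mixed the class labels in a node are, which is what decision tree algorithms minimize when choosing a split.

```go
package main

import "fmt"

// giniImpurity computes 1 - sum(p_i^2) over the class proportions.
// A value of 0 means a pure node; for two classes the maximum of 0.5
// means the classes are evenly mixed.
func giniImpurity(counts []int) float64 {
	total := 0
	for _, c := range counts {
		total += c
	}
	if total == 0 {
		return 0
	}
	g := 1.0
	for _, c := range counts {
		p := float64(c) / float64(total)
		g -= p * p
	}
	return g
}

func main() {
	fmt.Println(giniImpurity([]int{10, 0})) // pure node → 0
	fmt.Println(giniImpurity([]int{5, 5}))  // even two-class split → 0.5
}
```

A decision tree learner evaluates candidate splits by the weighted impurity of the child nodes and picks the split that reduces it most.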

One big issue with data warehouses has always been data quality and data enrichment. This is where data cleansing methods were introduced. External data sources have been used to enrich data or to obtain higher quality. But that was not the only intention behind adding external data sources: enrichment can also be used to gather more data related to a subject and correlate it with the existing data in order to gain new insights.

The Rise of Big Data

As technology evolved and new methods of handling and analyzing any kind of data appeared, the value of data as an economic asset increased. Big data, data science and artificial intelligence brought new opportunities to create knowledge from data. Relating different kinds of data to each other in ways they had not been related before came into the spotlight.

With those new capabilities, new types of data became more interesting for analytics: on the one hand, data about data (so-called metadata), which you can think of as data definitions or data access statistics; on the other, unstructured external data such as content from social media sources like Facebook, Twitter, etc.

Many topics one faces at the technical level in big data initiatives, such as data extraction, transformation and loading, data cleansing and enrichment, and data protection and privacy, were already present in data warehouse implementations. Others, like handling streaming architectures and automatic data type recognition, emerged as topics specific to big data.

The organizational challenges are much bigger in big data, since big data has a more open scope than the very specific scope of data warehouses.

Data Strategy, what’s that?

Data has become strategic, and hence there is a need for a systematic approach on both the business and the technical level to improve the value created by data.

On the business level, an enterprise needs to define

  • what should be achieved with the data (vision),
  • which data is needed for that goal,
  • which methods will be applied to create new insights from that data, and
  • what income that knowledge can lead to.

This is what a data-driven business model is about. It stands at the very beginning of the data strategy an enterprise has to develop and follow in order to monetize its data.

A data-driven business model can be thought of in the same way one creates a common business model. Imagine a business model canvas: there you have a box for the key resources a business needs to create its value proposition. Data is such a key resource. Hence, a data-driven business model must answer the same questions asked in a business model canvas.

Coming from a business canvas approach, it is also necessary to define a set of key activities to be undertaken in order to create the value proposition. What is the enterprise going to do with the data? Which kinds of analysis methods should be applied in order to create the information that leads to that knowledge? Once this task is finished, the most important part of a data strategy has already been delivered.

Causation vs. Effectuation

When it comes to choosing the right analysis methods for the data it is important to know the kind of problem that exists. In decision making there are two different classes of problems.

In the first one there is a given, predictable effect with a known probability. The target is to choose information gathering and analysis methods in order to select between the means to create that effect. This is called causation.

In the second, several unpredictable effects are in place. The target is to find out which effect is most likely to emerge from given means. In this case the analysis has to apply experimental techniques to discover the underlying distribution of the unpredictable effects for the given means.

In data analysis one might follow a hybrid approach where both analysis methods can be applied.

Different Data Strategy approaches from here

While getting deeper into this subject I have discovered different definitions of a data-driven business model. Some experts require that a data strategy also define a project plan describing how an actual subset of the data should be analyzed, with milestones, budget and all the project management artifacts we know.

Other experts on data strategy development stop at creating a data-driven business model.

One interesting approach I have come across starts with the definition of key actors and key data. It then switches to the customer and creates a customer profile. The value proposition is created by a so-called data-need fit. From the value proposition, other parts of the business model, such as key activities, are derived.

But ….

Experts always call for a systematic approach whenever a new idea or method appears. I have seen a lot of them in the past: well-prepared and well-sold systematic approaches to mastering new challenges.

At the end of the day, one can measure the actual value contributed to an enterprise’s success by the impact an initiative has on the enterprise’s revenue, regardless of how systematic and well prepared that initiative was.

Unfortunately, many of those approaches end up creating value mainly for their inventors.

If you want a piece of advice: choose the people who implement your data strategy wisely.

That’s the most important thing.

For further reading …

Data and Analytics – Data-Driven Business Models: A Blueprint for Innovation

The new hero of big data and analytics: The Chief Data Officer

Effectuation and Causation: The Effect of “Entrepreneurial Experience” and “Market Uncertainty”

Things not even Big Data can predict

An Open Mind can Move Mountains

She was about to start working for a large bank the next month. “Big Data Analyst,” she told me. I was very impressed. Big Data, wow! I had always been impressed by the way she did her job, going back to the days when we used to work together. She always stayed cool and friendly, even when work was very tough. She had already quit her job, but we still WhatsApped. Considering what she was doing before, Big Data was a huge step forward.

October was heading towards its end. We decided to meet for lunch once again and agreed on a restaurant close to the river where you could sit outside. It was a warm day when we met, one of those days at the beginning of autumn when the sun still keeps the hustle and bustle alive out there.

She was already waiting when I arrived. She sat at a table outside with a nice view over the river. There was a big fat camera on the table, close to that a book about big data and analytics.

“Hey!” I said, “What’s up? Sorry for being late!” I continued. “Hey!” she replied, “Don’t worry, all good!” She smiled. “What’s that for?” I pointed to the camera. “Oh, photography is one of my hobbies! The light is good today!” she explained, sounding a bit proud of her camera.

Big Data with No Idea

After all that small talk people bring up when they don’t know each other very well but feel they have to go through it, I asked her about Big Data. She started to tell me, in the very humble way she always talked about herself: “Oh, I have no clear idea, I’ve just started reading this book here.”

And then a remarkable overview of big data technology came through. I learned a lot about Cloudera distributions, Lambda architectures, Elasticsearch, NoSQL databases and the amazing number of different open source initiatives for different purposes. That was what she had always been: a very skilled and open-minded computer scientist with no fear of change. She was going to start that job with a good general skillset in computer science and mathematics and no actual idea of big data, but with an open mindset and the will to learn. To learn fast.

This happened two and a half years ago.

Today she is her manager’s right-hand woman, with a deep understanding of the tools, the technologies, what can practically be done, and which pains to expect in big data projects.

Delivery Thinking is made of These

Last Friday we met once again. This time she told me about automatic data loading, type and structure recognition via machine learning, and the use of database statistics for data analytics.

Interesting stuff. I had never thought about it, but it’s true: data access statistics harvested by an RDBMS like Oracle might indeed be a good starting point for data analytics. Frequently accessed data is more likely to be important than rarely accessed data.
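The idea can be sketched in a few lines. Assuming you have exported per-object access counts from the database’s statistics views into a simple mapping (all table names and numbers below are made up for illustration), ranking candidate tables for analysis is straightforward:

```python
# Hypothetical per-table access counts, e.g. exported from an RDBMS
# statistics view. All names and numbers here are illustrative.
access_counts = {
    "customer_orders": 1_250_000,
    "product_catalog": 480_000,
    "audit_log_archive": 3_200,
    "legacy_import_tmp": 120,
}

def rank_by_access(counts):
    """Rank tables by access frequency, most accessed first."""
    return sorted(counts, key=counts.get, reverse=True)

# Frequently accessed tables are promising candidates for analysis.
print(rank_by_access(access_counts))
```

A sketch like this is of course only the starting point she described; the hard part is getting hold of the statistics in the first place.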

But she also told me about the obstacles of getting access to data, together with its metadata and statistics, from an operational transaction processing database, due to organizational barriers between departments and a lack of understanding as well.

Apparently, you need to convince a lot of people and consider things like data privacy and data protection when you want access to data for analysis purposes. Even when it is clear that there will be no way to trace back customers or any other persons, you must go through a lot of bureaucracy to get the data. Obviously, there is a long way to go, and yet she hasn’t lost her faith, optimism and open mind. Delivery thinking is made of these.
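One common way to address the tracing-back concern is pseudonymization before the data ever reaches the analytics side. Here is a minimal sketch, assuming customer IDs are replaced by salted hashes, with the salt staying with the data owner rather than the analyst (the names and values are illustrative):

```python
import hashlib

# Illustrative only; in practice the salt is kept secret by the
# data owner and never hard-coded.
SALT = b"kept-secret-by-the-data-owner"

def pseudonymize(customer_id: str) -> str:
    """Replace a customer ID with a salted hash: records stay joinable
    for analysis, but cannot be traced back to a person without the salt."""
    return hashlib.sha256(SALT + customer_id.encode()).hexdigest()

record = {"customer_id": "C-12345", "balance": 100.00}
safe_record = {**record, "customer_id": pseudonymize(record["customer_id"])}

# Same input always yields the same pseudonym, so joins still work,
# while the original ID no longer appears in the analytics data set.
assert safe_record["customer_id"] != record["customer_id"]
```

Whether such a scheme satisfies the lawyers is, as she found out, a different question from whether it satisfies the engineers.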


Down to the Code in Rest Applications

Use HATEOAS to Separate Control Flow and Client Representation in Single Page Web Application

When it comes to “deliverythinking”, sometimes we have to leave our flight level of 10,000 feet and get closer to the actual code. At the end of the day, that is what it’s all about: code!

A few days ago I talked with a friend of mine with whom I used to work for a long time. Back in 2014, when we were working together, we developed web applications with Vaadin. We discussed how Vaadin has evolved since then and how easily you can create new web applications with it now. In such a discussion you inevitably end up talking about single page applications and the millions of different JavaScript frameworks around them. It is actually impressive how single page web applications managed to dominate this topic despite the huge number of different frameworks, tools and technologies.

But managing all this framework stuff in an enterprise world is another story.

In today’s article I want to point to another interesting aspect of single page applications which I discussed with my friend. When you develop a single page application, one big issue is to hide the business logic and the presentation flow from the client tier. They must not be part of the JavaScript code delivered to the client, and this must be considered when you design an API. Ideally, the client does not know anything about the business logic and control flow of the application.

In REST architecture there is a constraint called “Hypermedia As The Engine Of Application State (HATEOAS)”. HATEOAS allows exactly this separation of control flow and client. “With HATEOAS [..] a REST client needs no prior knowledge about how to interact with an application or server beyond a generic understanding of hypermedia,” as Wikipedia explains.

In other words, a REST resource comes with a list of links which tell the client which interactions are possible next, starting from the served resource.

As an example (source: Wikipedia), if you requested a bank account with

GET /accounts/12345 HTTP/1.1
Host: bank.example.com
Accept: application/xml
...

the response could be like

HTTP/1.1 200 OK
Content-Type: application/xml
Content-Length: ...

<?xml version="1.0"?>
<account>
    <account_number>12345</account_number>
    <balance currency="usd">100.00</balance>
    <link rel="deposit" href="https://bank.example.com/accounts/12345/deposit" />
    <link rel="withdraw" href="https://bank.example.com/accounts/12345/withdraw" />
    <link rel="transfer" href="https://bank.example.com/accounts/12345/transfer" />
    <link rel="close" href="https://bank.example.com/accounts/12345/close" />
</account>

As you can see, starting from account number 12345 you can perform deposit, withdraw, transfer or close operations. The linking piece between the client and the backend are the keys given in the rel attribute of the links.
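To make the benefit concrete, here is a minimal, hypothetical client sketch in Python. The client never constructs URLs or encodes business rules; it only knows rel keys and follows whichever links the server chooses to serve (the parsed response below mirrors the account example above):

```python
from typing import Optional

# A parsed HATEOAS response as a plain dict, for illustration; in the
# example above this would come from the served XML.
account = {
    "account_number": "12345",
    "balance": 100.00,
    "links": [
        {"rel": "deposit",  "href": "https://bank.example.com/accounts/12345/deposit"},
        {"rel": "withdraw", "href": "https://bank.example.com/accounts/12345/withdraw"},
    ],
}

def link_for(resource: dict, rel: str) -> Optional[str]:
    """Look up the URL for an action by its rel key. None means the
    server does not offer that action on this resource right now."""
    return next((l["href"] for l in resource["links"] if l["rel"] == rel), None)

# The control flow lives on the server: if "close" is not served
# (say, because the balance is non-zero), the client simply does not
# render that action.
print(link_for(account, "withdraw"))
print(link_for(account, "close"))
```

The nice consequence is that changing the application’s flow means changing which links the server emits, not redeploying client code.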

Of course there is XML as well as JSON support for HATEOAS. A JSON representation might look like this:

{
    "firstname" : "Dave",
    "lastname" : "Matthews",
    "links" : [
        {
            "rel" : "self",
            "href" : "http://myhost/people"
        }
    ]
}

For good old Java developers, Spring provides a project called Spring HATEOAS, which at the time of writing is in a prerelease phase.

Also, you can find a set of best practices for API design at Microsoft Azure.

Digital Transformation to Keep up Revenue

The Winner Still Takes it All ….

It is commonly known that in times of low interest rates and intense regulation, the financial sector in Europe is seeking new ways to keep up profitability. Beyond low interest rates and increasing regulatory requirements, all of us also know about the additional pressure digitalization is putting on the financial sector. Many articles, strategies and discussions have been published around this in the past few years.

Well, there is no other answer to these new challenges yet than the one all of us have learned somewhere in the past: a company may either reduce expenditures or increase earnings to affect profitability positively. That is what strategy is all about: finding ways to increase earnings or reduce expenditures. Which processes are to be redesigned? Which systems are to be replaced? Which services are to be introduced, and to whom: retail, SME or large enterprise customers?

Having a look at the strategic answers banks and insurers find to these questions, one quickly perceives that pretty much all of them start by modernizing their IT. They do this because they hope to achieve several goals at once. They all hope to increase efficiency and reduce costs. Either they have failed to replace their IT in the past twenty years and hence carry a lot of technical debt, or, even if they have done so once during that period, they need to do it again because of the speed of technological progress.

On the other hand, they hope to create new business areas that might produce new sources of revenue. New IT may allow banks and insurers to expose parts of their business to the public, so that other companies can mash them up with other services and create new value propositions, which in return will help banks and insurers benefit from network economics. Such a network effect might appear, for instance, by taking new FinTechs and InsurTechs into account. Those startups create new value propositions mostly for the SME and retail business, and they have a significant demand for foundational services they usually cannot provide themselves in the first place.

Working together with startups in almost all cases means bringing up new services for SME and retail customers, and this fits the strategy. ING, for instance, mainly targets SMEs for creating new business. ING is going to spend up to 800 million euros on continued digital transformation until 2021, implementing new lending platforms for SMEs and consumers.

Other banks and insurers are doing similar things.

Indeed, there are a bunch of initiatives like ING’s. Banks and insurers are introducing new core systems. They will be exposing their services semi-publicly soon. They will operate clouds where third-party applications can run. We will see a bunch of announcements of new application marketplaces operated by insurers and banks, where developers offer new cloud-based apps using the exposed services and cloud operating models. They might also use existing marketplaces such as SAP’s HANA or Salesforce’s AppExchange.

I am very curious which of those projects will succeed, and which of the successful initiatives will actually survive.

The principles of network economics teach us that the winner takes it all. Hence, not all of them can win.

And what I am actually asking myself is: what if a developer decides to use services exposed by different banks and insurers in one single application simultaneously? Where does she publish the application? Does she publish and offer it on the marketplace of bank A or insurer B? On whose cloud will that application run when each bank or insurer offers its own cloud? How can hybrid clouds or multi-clouds establish solutions that really work?