Bridges. They're how we travel over expanses of water or steep drops in terrain. Most bridges support two-way traffic, the cost of travel is paid in tolls, and the funds collected generally go to the destination state or principality.
In IT there are data bridges. They are usually constructed by service providers on the destination side of the expanse and, as such, are chiefly interested in things flowing one way: into the promised land from the old, unwanted repository and its archaic processing. So you can guess the relative support given to the old repositories and their associated applications (it ranges from compromised to abysmal). That, of course, is the account given by those in sales and technical support; it is rarely the view of the organization trying out the new technology. That is the bridge we're talking about.
Since the advent of the Cloud, with data and services increasingly offloaded (or is it uploaded?) to service providers, a new term has appeared in market-speak: hybrid. It describes the state of an organization having data and services both in its on-premises infrastructure and in the cloud. I don't have data to back it, but I would posit that almost 100% of Cloud-using organizations are hybrid. And so they will remain. It is the savvy Cloud provider that settles for a piece of the processing pie and lets customers make their own cost-based decisions.
With so many customers remaining in hybrid mode, the Cloud has created the need for a new class of data bridge, with different requirements from the vendor-supplied one-way transfer mechanisms of yore. I count this a good thing: it lets those getting their feet wet go at their own pace, and it bows to a data processing reality, namely that the Cloud will never house all processing and data. Here are some goodies that could result from the new bridges:
- An end to target-side bias. Competitive pressure will mount for suppliers to provide first-class replication, backup, and application-level data movement services to support the ongoing requirements of consumers.
- An attack on latency. This goes for both sides of the divide. Data quality degrades with age in most applications, and bandwidth can be inadequate to speed the arrival of time-sensitive updates. Smarter compression and more innovative techniques are required (a minimal sketch of the batching-and-compression idea follows this list).
- Data model melding and transformation. Strong to weak typing, relational to NoSQL, structured to unstructured: there are mapping and transformation problems to be solved between what has been derogatorily called "legacy" stores and newer repositories, and back. This work has too often been relegated to one-offs that solve the problem for particular disparate data sets. I would like usable (operative word alert) standards, or even de facto standards, to arise for the most common transformations (one such mapping is sketched after this list).
- New/better trigger and event capabilities. Communicating new data states, transitions, and processing phenomena across the divide gives customers choosing hybrid solutions (which, as I have said, means all of them) a new and valuable advantage (the last sketch below shows the shape such an event might take).
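On the latency point, here is a minimal sketch of the kind of batching and compression a bridge might apply to time-sensitive updates before they cross the divide. The function names and the shape of the update records are my own invention, not any vendor's API:

```python
# A minimal sketch of one latency-fighting idea: batch time-sensitive row
# updates, serialize them once, and compress the batch before it crosses
# the bridge. All names here are illustrative, not any vendor's API.
import json
import zlib

def pack_updates(updates: list[dict]) -> bytes:
    """Serialize and compress a batch of row updates for transfer."""
    raw = json.dumps(updates).encode("utf-8")
    return zlib.compress(raw, 9)  # maximum compression for scarce bandwidth

def unpack_updates(payload: bytes) -> list[dict]:
    """Reverse of pack_updates, run on the receiving side of the bridge."""
    return json.loads(zlib.decompress(payload).decode("utf-8"))

if __name__ == "__main__":
    batch = [{"id": i, "status": "shipped", "ts": "2024-01-01T00:00:00Z"}
             for i in range(1000)]
    wire = pack_updates(batch)
    print(f"raw: {len(json.dumps(batch))} bytes, wire: {len(wire)} bytes")
    assert unpack_updates(wire) == batch  # round trip must be lossless
```

Repetitive update batches like this compress dramatically, which is exactly the case a bridge sees most often.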
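On melding and transformation, here is a hedged sketch of one of the most common relational-to-NoSQL mappings: folding a parent row and its child rows into a single nested document. The table and field names (ORDERS, ORDER_LINES, and so on) are invented for illustration; a usable standard would pin down exactly this kind of mapping:

```python
# A hedged sketch of a common relational-to-document transformation:
# folding a parent row and its child rows into one nested document.
# Table and field names are invented for illustration only.
from typing import Any

def rows_to_document(order: dict[str, Any],
                     line_items: list[dict[str, Any]]) -> dict[str, Any]:
    """Map an ORDERS row plus its ORDER_LINES rows to one JSON-style doc."""
    return {
        "_id": order["order_id"],           # primary key becomes the doc id
        "customer": order["customer_name"],
        "placed": order["order_date"],
        "lines": [                          # child rows nest as an array
            {"sku": li["sku"], "qty": li["quantity"], "price": li["unit_price"]}
            for li in line_items
        ],
    }

if __name__ == "__main__":
    order = {"order_id": 42, "customer_name": "Acme", "order_date": "2024-06-01"}
    lines = [{"sku": "A-1", "quantity": 2, "unit_price": 9.99},
             {"sku": "B-7", "quantity": 1, "unit_price": 24.50}]
    print(rows_to_document(order, lines))
```

Going the other way (document back to rows) is where most one-offs fall down, and where a standard would earn its keep.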
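And on triggers and events, a sketch of the small envelope a bridge might wrap around each state transition before handing it to whatever transport spans the divide. The source name, field layout, and in-process subscriber list are all stand-ins for a real cross-bridge delivery mechanism:

```python
# A minimal sketch of cross-bridge change notification: the source side
# wraps each state transition in a small event envelope and delivers it
# to subscribers. Everything here is hypothetical stand-in naming.
import json
from datetime import datetime, timezone
from typing import Callable

Subscriber = Callable[[dict], None]
_subscribers: list[Subscriber] = []

def subscribe(handler: Subscriber) -> None:
    """Register a handler on the consuming side of the bridge."""
    _subscribers.append(handler)

def emit_change(table: str, key: str, old_state: str, new_state: str) -> None:
    """Publish a state-transition event to every subscriber."""
    event = {
        "source": "on_prem.inventory",      # invented source identifier
        "table": table,
        "key": key,
        "transition": {"from": old_state, "to": new_state},
        "at": datetime.now(timezone.utc).isoformat(),
    }
    for handler in _subscribers:
        handler(event)

if __name__ == "__main__":
    subscribe(lambda e: print("cloud side saw:", json.dumps(e)))
    emit_change("ORDERS", "42", "pending", "shipped")
```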
It might be accurate to classify full-fledged hybrid data bridges as middleware; I don't really care how they are categorized. I do know that they would go far to address the ongoing needs of organizations without forcing vendor-driven data or processing movement. Let them decide for themselves!
But I do think bridges like this would inspire some confidence that providers are listening.