mikeohara's Blog

Pillar #2 of Market Surveillance 2.0: Past, Present and Predictive Analysis

March 5, 2015 by mikeohara   Comments (0)

 
By Theo Hildyard
 
In the second of a blog series outlining the Seven Pillars of Market Surveillance, we investigate Pillar #2 which emphasizes support for combined historical, real-time & predictive monitoring.
 
 
Following my last blog outlining Pillar #1 of the Seven Pillars of Market Surveillance 2.0 – a convergent threat system, which integrates previously siloed systems such as risk and surveillance – we continue to look into the foundations of the next generation of market surveillance and risk systems.
 
Called Market Surveillance 2.0, the next generation will act as a kind of crystal ball, able to look into the future to see the early warning signs of unwanted behaviors, alerting managers or triggering business rules to ward off crises. By spotting the patterns that could lead to fraud, market abuse or technical error, we may be able to prevent a repeat of recent financial markets scandals, such as Libor fixing and the manipulation of Foreign Exchange benchmarks.
 
This is the goal of Market Surveillance 2.0 – to enable banks and regulators to identify anomalous behaviors before they impact the market. Pillar #2 involves using a combination of historical, real-time and predictive analysis tools to achieve this capability.
 
Historical analysis means you can find out about and analyze things after they’ve happened – maybe weeks or even months later. Real-time analysis means you find out about something as it happens – meaning you can act quickly to mitigate consequences. Continuous predictive analysis means you can extrapolate what has happened so far to predict that something might be about to happen – and prevent it! 
 
For example, consider a trading algorithm that has gone “wild.” Under normal circumstances you monitor the algorithm’s operating parameters, which might include which instruments are traded, the size and frequency of orders, the order-to-trade ratio and so on. The baseline for what counts as normal comes from historical analysis.
 
Then, if you detect that the algorithm has suddenly started trading outside of the “norm,” e.g. placing a high volume of orders far more frequently than usual without pause (à la Knight Capital), it might be time to block the orders from hitting the market. This is real-time analysis, and it means action can be taken before the problem goes too far and impacts the market. This can save your business and your reputation.
 
If the trading algorithm dips in and around the norm, behaving correctly most of the time but verging on abnormal more often than you deem safe, you can use predictive analytics to shut it down before it crosses the line. In other words, you can predict that your algo might be about to “go rogue” if it trades unusually high volumes for a microsecond, goes back to normal, then trades unusually high volumes again.
 
The trick is to monitor all three types of data to ascertain whether your algo was, is or might be spinning out of control. An out-of-control algo can bankrupt a trading firm – and has.
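To make the three modes concrete, here is a minimal Python sketch of the idea. The thresholds, the order-rate feed and the sample numbers are all illustrative assumptions rather than a description of any particular surveillance product: a baseline is derived from historical data, a hard breach triggers a real-time block, and repeated near-breaches trigger a predictive shut-down.

```python
from collections import deque
from statistics import mean, stdev

# Illustrative thresholds -- a real deployment would calibrate these carefully.
HARD_LIMIT_SIGMAS = 5      # real-time block: rate this far above the norm
WARN_LIMIT_SIGMAS = 3      # predictive: counts as "verging on abnormal"
MAX_NEAR_BREACHES = 4      # predictive: shut down after this many warnings

class AlgoMonitor:
    def __init__(self, historical_rates):
        # Historical analysis: derive the "norm" from past operating data.
        self.baseline = mean(historical_rates)
        self.sigma = stdev(historical_rates)
        self.near_breaches = deque(maxlen=MAX_NEAR_BREACHES)

    def on_rate_sample(self, orders_per_second):
        """Called for each live sample of the algo's order rate."""
        deviation = (orders_per_second - self.baseline) / self.sigma
        if deviation > HARD_LIMIT_SIGMAS:
            return "BLOCK"          # real-time: stop orders hitting the market
        if deviation > WARN_LIMIT_SIGMAS:
            self.near_breaches.append(orders_per_second)
            if len(self.near_breaches) == MAX_NEAR_BREACHES:
                return "SHUT_DOWN"  # predictive: repeated near-misses, act early
            return "WARN"
        return "OK"

# Example: baseline from history, then a stream of live samples.
monitor = AlgoMonitor(historical_rates=[100, 110, 95, 105, 102, 98])
for sample in [104, 120, 122, 125, 121]:
    print(sample, monitor.on_rate_sample(sample))
```

The point of the sketch is simply that the same stream of measurements feeds all three checks: the historical baseline, the real-time limit and the predictive count of near-misses.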
 
Adding Pillar #2 to Pillar #1 gives you complete visibility across all of your siloed systems, data and processes, while monitoring for events that form patterns – in real time or compared with history – in order to predict problems.


Risk and regulatory reporting: centralisation is not the answer

March 5, 2015 by mikeohara   Comments (0)

In this article, Mike O’Hara, publisher of The Trading Mesh, talks to Steve Willson of Violin Memory, KPMG’s Steven Hall and Rick Hawkins, Stream Financial’s Gavin Slater, and Ash Gawthorp of The Test People, about how banks should evaluate centralised versus federated approaches to intensified post-crisis risk and regulatory reporting requirements. 
 
 
Introduction
 
The post-crisis environment poses many challenges to c-suite executives at major banks. Business and operating models must adapt quickly to a more uncertain and unpredictable world. And as banks adjust to the new competitive landscape, they must also demonstrate their compliance with the evolving regulatory framework and their continued financial health and stability. In short, regulators want information that is fast, detailed, comprehensive and consistent. But meeting new risk and reporting requirements is an all-but-insurmountable challenge for the often-siloed data management infrastructures of banks with diverse business lines across multiple geographies. While efforts to centralise are often thwarted by budgetary, organisational and legislative barriers, technological advances are offering new means of delivering to regulators, while also providing senior executives with user-friendly insight into the performance of the bank’s business lines. 
 
 
New regulatory landscape
 
How have regulatory requirements changed? How haven’t they changed?! Some products, markets and organisational structures have been effectively outlawed. Others have been separated out and restructured for the sake of transparency and removal of conflicts of interest, while still others are subject to new capital charges or reporting requirements on account of their inherent risk. 
 
“Regulators, compliance and risk officers are demanding more granularity, based on regular high level reporting, and often at increased frequency.”
Steve Willson, Vice-President, Technology Services EMEA, Violin Memory
 
The pace and scale of regulatory change is both bewildering and burdensome, but there are common themes and requirements that banks should take account of when reappraising their internal data management capabilities. “Regulators, compliance and risk officers are demanding more granularity, based on regular high level reporting, and often at increased frequency,” says Steve Willson, VP Technology Services EMEA, Violin Memory.
 
In 2014, major European banks conducted different types of stress tests at the behest of the European Central Bank and their national regulators. Many took a tactical and labour intensive approach that is unlikely to be sustainable in the longer term. “Regulators have made it clear that approaches to Basel III and stress-testing reports should be seamless, run off the same data and systems as other reports. Many banks’ stress testing processes have been manual and exceptional. But it is beginning to hit home that both sets of reports will be subject to the same level of microscopic examination and must be based on the same data,” says Steven Hall, Partner, KPMG. 
 
“Many banks’ stress testing processes have been manual and exceptional.”
Steven Hall, Partner, KPMG
 
Whether watchdogs are concerned about the impact of a 30% swing in a major currency or a cyber-attack, banks must respond quickly to unexpected demands for risk assessments based on detailed and consistent data for all their different units and business lines. “There is an expectation that banks will have to deliver on much shorter timeframes, perhaps working to a 48-hour turnaround rather than 28 days,” adds Hall. 
 
For senior management, this means establishing not only a coherent, responsive data management infrastructure for the whole organisation, but also an enterprise-wide approach to data governance.
 
“The increased need for timeliness demands a greater focus on data governance, ensuring that the calculations and assumptions on which the reports are based are appropriate and consistently applied,” says Rick Hawkins, Director, KPMG. “Banks must show the regulators that they are using the data in a consistent manner – and demonstrate their understanding of that data.”
 
“The increased need for timeliness demands a greater focus on data governance.”
Rick Hawkins, Director, KPMG
 
 
Limits to centralisation
 
A quick assessment of banks’ existing approach to data management highlights many internal and external challenges to meeting today’s reporting requirements, let alone tomorrow’s. There are many reasons for an inconsistent approach to data management for a bank that competes in multiple geographies, legal jurisdictions and product lines. Whether organic or by acquisition, growth brings complexity and expediency, as well as myriad different regulatory and reporting requirements.
 
But efforts to centralise and harmonise data management processes and systems are often fraught with organisational and technical barriers. Even when power struggles – both with the central functions and between departments – can be resolved to achieve a common aim, such as compliance with a particular regulation, enterprise-wide data warehouses are often of limited value because they are designed to meet a distinct purpose. 
 
“As soon as you try to centralise something in a very large organisation it becomes monolithic, and unable to adapt to a very rapidly changing environment, because if something changes in a particular business, you’ve got to change the entire centralised system just to cater for one small change in one little sub-business,” explains Gavin Slater, Co-Founder, Stream Financial.
 
The autonomy granted to branches or divisions across the world can also lead to major gaps in databases if those divisions have historically only collected and reported data required by their local regulators. And when it comes to centralising data management, local jurisdictions can pose barriers too. The need for certain types of data to stay in country for regulatory reasons has historically been addressed by banks through the copying of data between local branches and centralised units, but that brings its own problems.
 
“In theory, data could be aggregated by regular copying of operational data to a single reporting system, much like a data warehouse – based on systems which are optimised for complex analytical processes, but updates to this centralised system are often conducted daily, with poor data integrity until all source systems have supplied periodic data, and no ability to deliver near real-time actionable data,” says Willson.
 
 
New approaches required
 
Advances in technology – driven by increases in processing speeds and bandwidth availability – are offering banks and other large, complex organisations an alternative to prevailing centralised or highly manual approaches to risk reporting compliance. By harnessing such advances across hardware, software and networking capabilities, it is becoming increasingly viable for firms to implement a standardised approach to data management and governance which spreads the workload and the responsibility across the organisation. Such approaches ensure that inputs are appropriately structured and configured at the front end, and outputs – for example, aggregated, detailed risk reports required by regulators – can be delivered quickly, accurately and with consistency on an enterprise-wide basis. From an operational infrastructure perspective, this allows data to stay within its original jurisdiction and avoids both ‘turf wars’ and the implementation challenges of imposing centralisation from above. The increased flexibility and agility offered by this ‘federated’ approach can also provide senior executives with easy-to-manipulate and up-to-the-minute information for commercial and strategic planning purposes.
 
“These systems provide a centralised query capability that federates queries to those operational systems in real time, and aggregates the data to provide real-time compliance and risk data. This means no copying of data, and no lengthy data integrity exposure windows. But it assumes that complex and ad hoc queries can be made on operational systems at any time,” explains Willson.
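The federated pattern Willson describes can be illustrated with a short Python sketch: a central function fans a query out to several local “operational” sources in parallel and aggregates the answers, so no data is copied into a central store. The source functions and the exposure figures are hypothetical stand-ins, not any vendor’s actual API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical local operational systems, each answering queries over its own data.
def query_london(query):
    return {"rates_desk": 120.0, "fx_desk": 45.5}       # stand-in exposures (millions)

def query_new_york(query):
    return {"equities_desk": 310.2, "fx_desk": 80.1}

def query_singapore(query):
    return {"fx_desk": 22.7}

LOCAL_SOURCES = [query_london, query_new_york, query_singapore]

def federated_query(query):
    """Fan the query out to every local source in parallel and aggregate centrally."""
    with ThreadPoolExecutor() as pool:
        partial_results = list(pool.map(lambda source: source(query), LOCAL_SOURCES))
    total = {}
    for result in partial_results:
        for desk, exposure in result.items():
            total[desk] = total.get(desk, 0.0) + exposure
    return total

print(federated_query("exposure_by_desk"))
# {'rates_desk': 120.0, 'fx_desk': 148.3, 'equities_desk': 310.2}
```

In a real deployment each source function would run a query against a local operational database rather than return hard-coded figures; the data itself never leaves its home system.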
 
In terms of underlying technology, consolidation of operational databases on flash memory, a non-volatile and reprogrammable storage medium, is the key to a federated, real-time reporting and query capability for compliance and regulation. “It’s a classic case of the entire stack working optimally, from analytics application, to dataset to storage medium. Until all operational systems are based on a memory-based storage solution, compliance and risk systems will either be out of date, or operate at the performance of the slowest operational system being queried. This delay is becoming simply unacceptable for most financial institutions,” Willson adds. This can be mitigated of course by using flash storage as a caching layer, or choosing to use flash as the persistent memory store for those core applications. The trend to moving active datasets to flash from disk is well underway, as it provides memory-like performance, with the persistent safety of enterprise storage. Using flash at the ‘edge’ is part of an end-to-end strategy to support real-time risk applications.
 
Among the most important pre-requisites for a federated solution is for data to be sliced and diced and standardised in a manner appropriate for the queries subsequently expected, to optimise speed of retrieval. As such, a critical step is to devolve responsibility for standardisation of database inputs to the appropriate level, leaving local front-office staff to decide how best to adapt centrally agreed principles to local realities. 
 
“Standardisation doesn’t have to mean centralisation,” says Slater at Stream Financial. “The data schema can and should be developed centrally, but the hard part is mapping the terms used in the many disparate local data sources to the central schema. It requires a mind-set shift at the centre and locally, but the original data source owners in the front office must be told: ‘you have a responsibility not only to provide data to your customers, but also to give data to the support functions who need to access your data in some standardised schema’.”
 
“The hard part is mapping the terms used in the many disparate local data sources to the central schema. It requires a mindset shift.”
Gavin Slater, Co-Founder, Stream Financial
 
It is also important that banks’ data input standardisation efforts are conducted with due awareness for industry-wide standardisation initiatives such as the use of legal entity identifiers (LEIs), and message standards frameworks such as ISO 20022. “All the efforts at standardisation at an industry level and the use of standards such as LEIs in incoming regulation are very important, but there is no getting around the hard work needed to map the global schema with the data that actually sits in individual systems and business units,” adds Slater. 
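As a rough illustration of the mapping Slater describes, the sketch below translates records from two hypothetical local sources, each with its own field names, into a single centrally agreed schema keyed on an LEI. The source names, field names, mapping tables and the placeholder LEI are invented for the example.

```python
# Centrally agreed schema: every source must be mapped onto these field names.
GLOBAL_SCHEMA = ("lei", "counterparty_name", "notional_usd")

# Per-source mapping tables, owned and maintained by the local data owners.
FIELD_MAPS = {
    "london_loans": {"LegalEntityId": "lei", "CptyName": "counterparty_name",
                     "NotionalUSD": "notional_usd"},
    "ny_swaps":     {"lei_code": "lei", "counterparty": "counterparty_name",
                     "usd_notional": "notional_usd"},
}

def to_global_schema(source, record):
    """Translate one local record into the central schema."""
    mapping = FIELD_MAPS[source]
    translated = {mapping[f]: v for f, v in record.items() if f in mapping}
    # Any field required centrally but absent locally is surfaced as a gap (None).
    return {field: translated.get(field) for field in GLOBAL_SCHEMA}

print(to_global_schema("london_loans",
                       {"LegalEntityId": "529900EXAMPLE0000001", "CptyName": "Acme Corp",
                        "NotionalUSD": 25_000_000}))
print(to_global_schema("ny_swaps",
                       {"lei_code": "529900EXAMPLE0000001", "counterparty": "Acme Corp",
                        "usd_notional": 10_500_000}))
```

The mapping tables sit with the front-office owners of each source, which is precisely the responsibility shift Slater argues for.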
 
 
Implementation and fine-tuning
 
Although a federated approach to data management has a number of advantages, it is no ‘plug-and-play’ quick fix and requires senior executives to secure significant levels of buy-in across the organisation. At the outset, two things must be in place: the active involvement of the technology teams that support the platforms on which the data management structure relies, and a clear statement of requirements from the senior executives who signed off on its implementation.
 
“Without a clear requirements spec and ongoing direction from senior executives, a lot of time can be wasted building unnecessary functionality,” says Ash Gawthorp, Technical Director at The Test People, a performance, engineering and testing consultancy.
 
“Even the fastest technology relies on data being organised in a fashion that allows it to be processed and interrogated efficiently.”
Ash Gawthorp, Technical Director, The Test People
 
A number of important decisions must be taken up front to ensure a federated data management infrastructure will perform its tasks to the standard and speed required. For example, the maximum eventual size of the database(s) should be estimated to gain a sense of the number of records that will need to be interrogated over the coming years. Although regulations may change on how long data must be kept available for live interrogation before being archived, planning ahead will give the best chance of optimal performance.
 
“Even the fastest technology relies on data being organised in a fashion that allows it to be processed and interrogated efficiently,” says Gawthorp. “Another question that should be resolved early on is acceptable recovery times from a failover. In-memory databases do not get back up and running immediately.”
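By way of illustration only, the kind of up-front estimate Gawthorp is describing can be as simple as the back-of-the-envelope sizing below; all of the volumes and record sizes are invented assumptions to be replaced with figures from your own business lines.

```python
# Hypothetical inputs -- replace with real figures from your own business lines.
TRADES_PER_DAY = 2_000_000        # across all business lines
TRADING_DAYS_PER_YEAR = 252
YEARS_LIVE = 7                    # retention before archiving (regulation-dependent)
BYTES_PER_RECORD = 1_500          # enriched trade/risk record, rough estimate

records = TRADES_PER_DAY * TRADING_DAYS_PER_YEAR * YEARS_LIVE
raw_bytes = records * BYTES_PER_RECORD

print(f"Records held for live interrogation: {records:,}")     # ~3.5 billion
print(f"Raw storage estimate: {raw_bytes / 1e12:.1f} TB")      # ~5.3 TB before indexes and replication
```

Even a crude estimate like this makes it obvious whether the chosen database and storage tier can interrogate the live data set at the speed regulators now expect.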
 
Stream Financial’s Slater points out that ‘caching’ strategies can overcome some concerns over access to and control of data. In traditional data warehouses, fears that a major query can drag down the performance of a system have reinforced the tendency to centralise. But it is possible to put a protective layer around the system so that a query hits the cache rather than the underlying database. This allows users to access the full history and eliminates the need to keep local copies, as many did previously rather than go through the hoops required to access the central database.
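A minimal sketch of the protective layer Slater mentions is shown below, assuming a hypothetical run_on_source function that executes the expensive query against the underlying operational database. Repeated queries are served from the cache so they never touch the source system; the TTL and the cache key are illustrative choices only.

```python
import time

CACHE_TTL_SECONDS = 60          # illustrative freshness window
_cache = {}                     # query text -> (timestamp, result)

def run_on_source(query):
    """Stand-in for the expensive call to the underlying operational database."""
    time.sleep(0.5)             # pretend this is a heavy analytical query
    return [("fx_desk", 148.3), ("rates_desk", 120.0)]

def cached_query(query):
    """Serve repeated queries from the cache so they never hit the source system."""
    now = time.time()
    hit = _cache.get(query)
    if hit and now - hit[0] < CACHE_TTL_SECONDS:
        return hit[1]
    result = run_on_source(query)
    _cache[query] = (now, result)
    return result

cached_query("exposure_by_desk")   # first call hits the underlying system
cached_query("exposure_by_desk")   # second call is answered from the cache
```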
 
Even with many factors agreed and understood in advance, a federated data management infrastructure will require a programme of detailed testing and tweaking to the specific needs of the individual organisation at every tier to reduce query response times from hours to minutes. 
 
“It’s important to start testing as soon as possible and be willing to accept the need to reconfigure. The sooner you begin performance testing, the sooner you will know whether the system can achieve the task set,” explains Gawthorp. 
 
With regulators demanding greater responsiveness from banks, Stream Financial’s Slater believes performance will become ever more important. Fast feedback loops, he suggests, lead to more accurate, reliable reporting and more reassured regulators. “If your system can provide the lowest level of granularity, and aggregated across many dimensions, within a day, you will have a faster feedback loop, better information and be better able to react to change. By using aggregated data to inform and change positions and the overall risk profile, banks will be able to respond to unexpected market events more effectively,” says Slater.
 
 
Conclusion
 
Today’s data management capabilities have largely failed to meet the expectations of banks’ senior executives or their regulators. The chances of them coping with tomorrow’s demands with the current approach are slim to none. Banks are already braced for the requirements of regulators to intensify and accelerate.
 
“The changes in the regulatory environment mean banks must be able to aggregate risks within business units and across the group, and aggregate exposures across risk types and be able to drill down within that,” says KPMG’s Hall. “Moreover, the fines being handed out by regulators have forced banks to realise that deadlines can no longer be stretched.”
 
As such, it is no surprise that many senior executives are exploiting technology to help them handle the onslaught of requests. The holistic interrogation of operational data to give a complete picture of risk and compliance positions in very short time frames and in granular detail requires a very responsive storage platform. “From a risk and regulatory viewpoint, banks must service ad hoc requests with immediate effect. Flash memory offers exactly this – a nearly limitless ability to service data requests at very low latency,” says Willson at Violin.
 
Importantly, banks are also using the capabilities developed to demonstrate compliance and stability to regulators as a way to better understand their own businesses. The combination of faster database interrogation and aggregation with user-friendly front ends is increasing the utility of federated data management structures beyond regulatory reporting.
 
KPMG’s Hawkins says use of in-memory databases and analytics in the enterprise market is also leading to advances in the presentation of reports, such as interactive, mobile-friendly dashboards on which risk reports can be manipulated and subjected to ‘what-if’ scenarios. “From a high-level report, you can delve into questions such as, ‘Where are my top 10 exposures? What are they? What risk categories are they in? Which business unit is that particular transaction in?’ This is only possible if you’re using in-memory, rather than simply relying on a traditional client server and a data warehouse structure.”
 
Systems to manage risk and compliance in real time demand a holistic approach to the data flow. This includes making sure that edge systems have the capability to deliver anonymised risk data to a central platform, which can then be interrogated at huge speed to detect anomalies or out-of-compliance situations. Stream Financial and Violin Memory have demonstrated that very fast in-memory processing combined with high-performance enterprise-class persistent storage strategies is a good fit for the challenges faced by the financial services industry.
 
 
Writing and additional research by Chris Hall, Associate Editor, The Realization Group
 


Risk technology: spend your budget on the carrots, don’t waste it on the stick

March 5, 2015 by mikeohara   Comments (0)

Anthony Pereira, CEO of Percentile, specialists in risk technology for financial services, examines the Basel III interim findings which see the industry falling short

read more...


GMEX – going to eleven?

February 27, 2015 by mikeohara   Comments (0)

This article was originally published at the Fidessa blog, and is reproduced here with permission.
 
By Steve Grob

read more...


Quants: (Big) Data Whisperers

February 10, 2015 by mikeohara   Comments (0)

Quants are interesting people. Their presence spans both new and old analytics systems, architectures and cultures. Once hailed “masters of the universe”, quants apply mathematics to drive economic growth. They help create liquidity through derivatives, trade efficiently, manage portfolios systematically and assist the financial sector in risk management. Two events have disrupted their world.

read more...


BCBS239: A bad case of deja vu

February 10, 2015 by mikeohara   Comments (0)

On 23rd January 2015 the BIS published its progress report on BCBS239, Principles for Effective Risk Data Aggregation and Risk Reporting. This progress report was striking in a number of ways. First, despite significant investment there was only marginal improvement in the banks’ assessment of their ability to meet the principles and, worryingly, in some cases banks reported a downgrade of their abilities. Second, banks failed to recognise the fundamental importance of governance and architecture in ensuring overall compliance with all the principles, leaving significant reliance on manual processes and workarounds.
 
Digging a little deeper into the report, the principles which received the lowest ratings were Principle 2 (data architecture/IT infrastructure), Principle 6 (adaptability) and Principle 3 (accuracy/integrity). The responses from the banks largely attributed their low ratings to delays in initiating or implementing complex, large-scale strategic IT infrastructure projects.
 
Another interesting nugget buried in the report – one that matters for reasons I explain later in this post – is that one of the challenges reported by the banks was that IT infrastructure, while adequate in normal times, is not adequate in stress or crisis situations.
 
The original BCBS239 draft consultation paper was published in June 2012 and banks have ploughed millions of pounds into remediation programmes and yet here we are two and a half years later reporting little progress. So what is the problem?
 
The problem?
The typical problems, according to the banks, are that their current IT infrastructure is too expensive to build and operate and too complex and difficult to change, and that its data is of poor quality, not timely, and opaque due to large numbers of adjustments and manual workarounds.
 
These problems are exacerbated by a business environment where revenues are declining rapidly requiring fundamental changes to the banks' business model and associated infrastructure. Simultaneously, the regulatory environment is changing dramatically, with many new requirements creating additional demands on this already expensive and inefficient infrastructure.
 
What is the current solution being implemented by banks?
The banks typically have two distinct approaches to this type of reporting:
  1. Regular reporting using centralised back office systems. These reporting processes are largely automated but inflexible, inaccurate and inconsistent, due to the number of adjustment processes that take place.
  2. In parallel, banks have developed ad hoc reporting solutions to cater for interim needs not supported on these existing reporting platforms, instead using manual queries on source systems, often combined with spreadsheet aggregation. These reporting processes are fragile, by which I mean manual with poorly defined operational processes supporting them. However, they use the correct data sources.
The reaction from the banks as part of their BCBS239 initiatives has been to initiate large-scale IT and business change programmes to address the issues inherent in the regular reporting process, in particular by building consolidated IT platforms across multiple risk silos and in some cases across both risk and finance. These IT programmes are accompanied by business change programmes to enhance the operational reporting processes within these functions, including:
  • Automating existing manual processes
  • Identifying data owners and stewards
  • Documenting processes, including terminologies into data dictionaries
  • Developing enhanced data quality processes
  • Enhancing governance mechanisms to ensure that data quality issues are addressed and remedied in a controlled manner
The ad hoc reporting processes are deemed to be interim measures, with longer term plans for these to be migrated later onto the remediated regular reporting process, which would eliminate the need for the current duplication.
 
What is wrong with the current solutions?
To the untrained eye the solution described above appears entirely logical, but as anyone who has spent time in the reporting function of a globally significant bank will know, this approach is the same one that has been tried for the past 20 or more years without success. A quote often attributed to Albert Einstein springs to mind: “The definition of insanity is doing the same thing over and over and expecting a different result.” There are those who believe that this time it will be different because BCBS239 has created more focus from senior management, which will ensure success. Once again, years of experience in this field tells me otherwise.
 
It is my belief that the current proposed solutions are flawed for two main reasons:
  • The world has changed significantly since banks implemented their existing reporting infrastructure and operating model
  • These programmes are treating the symptoms, not the cause of the problems
What is new?
The existing regular reporting infrastructure of the banks was developed during a time of rapid expansion in new businesses, geographies, legal entities and systems to support expansion within the front office. Firm-wide reporting was supported by the back office through centralised systems within each back office function, usually in the form of data feeds running through Extract-Transform-Load (ETL) processes into each back office system. This firm-wide reporting was used to support very specific functional views within risk, finance, compliance etc. rather than strategic business development.
 
Senior executives supported strategy decisions using judgement and macro-level views of the external business environment, which led to rapid expansion into new high-margin businesses (often in sophisticated structured products) and to new feeds simply being added to the reporting process without regard for how they could be handled downstream.
 
But things have changed. Post-financial crisis, the high-margin products have largely been eliminated and the volumes (and profits) in the remaining “vanilla” products are significantly reduced. More importantly, there is now a raft of new regulatory rules mandated by the collective regulatory agencies throughout the G20. Many banks began addressing each of these regulatory rules separately, but most have now brought these initiatives under some common structure. Despite this, very few have identified the single thread that persists through all of these regulations: the need for near real-time global views across the entire organisation that can be aggregated across many different dimensions. This need for near real-time is driven by the fact that some of the regulatory rules require business decisions to be made against these global views, for example in determining whether or not potential trades will fall under the remit of the Volcker Rule. There is also an increasing need for near real-time global views for business purposes, in particular those driven by crisis events.
 
This need for near real-time global views, with the ability to aggregate across different dimensions, is a significant change for organisations that have been accustomed to less frequent and more static reporting, and is the catalyst that will force banks to re-assess the current approach.
 
What is the root cause?
The problems of current IT infrastructure being too expensive to build and operate, too complex and difficult to change, with data that is of poor quality, opaque and not timely due to adjustments and manual workarounds, are symptoms and not the root cause.
 
The root cause is that data is being copied into large centralised systems. It is these copies that create the need for adjustments to be made to data in downstream systems, supported by large numbers of operational staff. Often these copies are pre-aggregated, which results in portfolio-level adjustments that are inconsistent across systems that aggregate data along different dimensions. These large centralised systems have slow release cycles and cannot be changed without co-ordinating data feeds across all upstream systems, which makes them highly inflexible.
 
Historically this feeds-based approach into centralised systems has been acceptable because there was:
  • less need for global aggregated views close to real-time – most reporting tended to stay within businesses and global views were focused on specific back office functions, e.g. risk
  • less need for crisis or stressed market condition reporting
The second but more important root cause is that the culture of the banks allowed an operating model where the front office treated the back office as “second class citizens”. The requirements of the back office were met by data feeds that were more akin to “data dumps” rather than correct and accurate reflections of the trades actually booked in front office systems.
 
Are there any alternatives?
Given this assessment, are there any alternatives that can meet the need for near real-time global views and aggregations without copying the data?
 
The answer is yes, and strangely enough, it is staring most banks in the face without them even noticing. The answer is to move the logic, not the data! This is exactly what is being done in the ad hoc reporting process used by the banks to address interim needs, including much of the stressed market reporting which enabled them to survive during the financial crisis. This approach addresses the root cause by querying the original source data. This process, however, is considered too fragile to be relied upon for regular reporting and hence is dismissed as not a feasible option. I would argue that there is a serious business case for revisiting this assessment and that, by “industrialising” the ad hoc reporting process, the challenges being raised by the banks in their BCBS239 assessment can be overcome.
 
Why has this not been done before?
I would argue that there are two reasons why this was not considered before: first, the environment was not suitable for the change in culture required; and second, the technology to enable such an approach was not sufficiently developed.
 
As already described, senior management in the banks allowed a culture to develop where back office units were treated as second class citizens. In the post-financial crisis era, however, the regulatory regime now provides the environment in which that cultural change can take effect. Senior management must now change the culture in a way that positions the back office as a customer of the front office rather than a second class citizen. This means that data must be made available within front office systems in a way that allows it to be queried directly, to facilitate the near real-time global views required for business as well as regulatory purposes. Not only will this allow for more accurate and timely reporting, but it also keeps accountability with the true data owners in the front office.
 
This culture change is not restricted to the front office. Banks are notoriously siloed organisations and there is deep mistrust not only between the front office and back office but also between different back office functions. This creates a culture whereby data lying outside the direct control of a particular function is not trusted, which is another driver behind the continued copying and proliferation of data throughout the organisation. This culture of mistrust needs to change to allow data within the organisation to be stored once but made available to many.
 
The technology has also historically not been available, in particular where there are extremely large volumes of data scattered across many disparate technologies, businesses, locations and legal entities, making it difficult to extract and use this data in a productive way. Data virtualisation technology can now handle data volumes large enough to allow a direct query approach across disparate data at an enterprise level, with performance similar to, or better than, local copies.
 
One of the biggest concerns around this direct query approach is that it may compromise the performance of operational systems. Here too, advances in technology allow for building local caches that protect the operational systems from direct queries as well as enhancing the performance, making a slow system faster.
 
Another major concern is that disparate data comes in different formats, nomenclatures and structures, making it difficult to present back in the uniform manner required for global views across the organisation. Being realistic, these “translations” must occur and there is no avoiding the need to analyse and implement them. Using a direct query approach, however, simply moves the responsibility for where this translation occurs from a central team and system to the location where the data is owned and understood best: at source. Using this approach, data can be queried according to a globally harmonised schema, but the translation from local data schemas, nomenclature and format is performed on-the-fly within the local data sources. This allows the same local data source to be accessed many times for different business contexts.
 
Another challenge raised in the BCBS239 update report relates to the legal restrictions in some regions/countries that hinder the ability to obtain granular risk data. In the traditional approach where data is copied, these data privacy issues often result in anonymised data being stored centrally, which adds to the problems around the opaque nature of the reporting process. Using a direct query approach, the original data can still be queried directly, with only the results being anonymised on-the-fly in the reporting back to a central location.
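As a rough sketch of that combination, the hypothetical local adapter below answers a globally phrased query by translating its own column names on the fly and anonymising counterparty identifiers before anything leaves the jurisdiction. The field names and the hashing choice are assumptions for illustration, not a prescribed design.

```python
import hashlib

# Hypothetical local trade store with its own column names (the data never leaves this system).
LOCAL_TRADES = [
    {"cpty": "Acme Corp",  "ccy": "USD", "mtm": 1_250_000},
    {"cpty": "Basel Bank", "ccy": "EUR", "mtm": -430_000},
]

def anonymise(counterparty):
    """Replace the counterparty name with a one-way token before reporting centrally."""
    return hashlib.sha256(counterparty.encode()).hexdigest()[:12]

def answer_global_query():
    """Translate local fields to the global schema and anonymise on the fly."""
    return [
        {"counterparty_token": anonymise(trade["cpty"]),
         "currency": trade["ccy"],
         "mark_to_market": trade["mtm"]}
        for trade in LOCAL_TRADES
    ]

for row in answer_global_query():
    print(row)
```

Only the anonymised, globally structured rows travel to the central reporting layer; the granular data stays in country.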
 
Finally, and most importantly given the nature of the banks’ responses to the BCBS239 update report, this direct query approach can be implemented incrementally, leveraging the existing legacy source systems. The data is accessed on demand rather than being copied into large centralised systems. The underlying issue is that a large centralised persistent data store simply cannot be agile enough given the amount of change in the original feeding systems. Every time a change occurs in one of the feeding systems, the logistical process of co-ordinating the testing and release across all feeder systems results in the complex large-scale IT projects which are at the heart of the challenge reported by banks. A direct query approach allows source systems to be changed independently, creating an agile infrastructure that can adapt to the level of change created by the current business and regulatory environment.
 
At a crossroads
This approach is a radical departure from the thinking that has prevailed for many years and it is understandable that banks will be hesitant to change. The slow progress witnessed since the original BIS consultation paper is evidence of how we often overestimate the speed at which the status quo will be challenged and changed; conversely, we often underestimate the impact when it finally does change.
 
I think the banks are at a crossroads. The question is: will they continue down the same path as before, in the hope that the large strategic back office programmes finally deliver, or will they invest in streamlining and industrialising the ad hoc reporting processes which allowed them to survive the financial crisis – an approach which:
  • Fixes the root causes rather than treating the symptoms
  • Leverages existing infrastructure
  • Avoids big strategic technology projects
  • Removes redundant data and associated operational processes
  • Removes a large percentage of the cost base tied up in operational support and expensive IT and business change programmes
  • Increases agility
  • Delivers incrementally
  • Increases controls by retaining accountability with original data/process owners
Given these advantages, is it not time for large banks to try a new approach?


Thinking of FPGAs for Trading? Think Again!

February 6, 2015 by mikeohara   Comments (0)

In the competitive world of automated trading, deterministic and ultra-low latency performance is critical to business success. The optimum software and hardware design to achieve such performance shifts over time as technologies advance, market structure changes, and business pressures intensify.

read more...
