Jisc case studies wiki Case studies / Transformations University of Bristol

 

Transformations University of Bristol

Project Name: Core Data Integration Project

Lead Institution: University of Bristol

Project Lead: Nikki Rogers

 

Improve the processes, culture and quality of information around staff performance management by procuring or developing an in-house IT solution that can achieve full integration of data across our various learning and teaching systems (VLE, SRS), research information systems and other external systems. 

 

Background

 

The original proposal for this project described how we planned to augment our institutional Enterprise Architecture approach with JISC resources in order to tackle the problem of achieving full integration of data across our various learning, teaching and research IT systems. This was primarily in order to provide better-quality data to support our staff Performance Management processes - a major strategic driver at the time, and an area that required the seamless combination of data from a number of separate IT systems. The aim was for the University to be able to view the activities of academic staff in a holistic way, and for trained managers to be able to support staff better in achieving optimal success in research, education and citizenship. Since the Core Data Integration project was conceived, the Performance Management project has scaled down its IT ambitions somewhat and focused much more on cultural change - a sensible path, prompted by the realisation that IT alone should not be used to drive cultural change in this sensitive area.

 

Nonetheless, improving data integration - whether to underpin the smooth running of the organisation's operational processes, to provide the data platform for our increasingly ambitious business intelligence goals, or to support the reporting and dissemination of the University's activities via the Web and to government - is a generic concern for any organisation. Indeed, over the lifetime of the project, while the Performance Management project became less of a driver for data integration (for entirely separate reasons), other projects and strategic initiatives at the University came to bear with even greater weight on the pressing need for Bristol to improve its data integration architecture. During the lifetime of this project the need to invest in the right IT architecture to support an improved data integration strategy became recognised at a very senior level within the University - an important step forward for our institution.

 

Aims and objectives

 

The broad aim of this project was to establish to what extent the University of Bristol could and should adopt a Service Oriented Architecture (SOA) approach to its existing problems of integrating master data automatically across master data systems (our student administration system, our research information administration system, our estates management system, our finance system and so on). The objectives were to:

 

  • document where the problems exist in our current, bespoke data integration architecture,
  • identify where costs could be reduced in terms of maintaining it,
  • describe how agility could be improved - in terms of the speed and ease with which the data integration architecture can be adapted to cope with IT system replacements, the changing requirements of government reporting and so on, 
  • convince senior management at the University of the value of investing in SOA technology (if indeed we could be sure of the benefit of this approach).

 

Context

 

The growth in the number of IT systems at the University in recent years has been significant. There are now systems to support the student and research lifecycles end to end. For example, to support the student lifecycle we now have a CRM system, an applicant portal, a student portal, a timetabling system, a student administration system, a VLE solution, an alumni database and the list goes on. The automatic passing of student (and other) data between these systems is essential if we are to avoid the likely errors and significant amount of time involved in re-keying large student datasets from one system into the next.

 

However, these systems have typically been joined over the years using a varying set of technologies, based on individual developers' choices of tools they were familiar with and had expertise in at the time. As the University's systems architecture has grown in complexity, the number of joins has proliferated and their management has become increasingly cumbersome and expensive. Due to the scale of information sharing required to support University processes across the student and research lifecycles, the number of joins (integrations) now runs into the thousands. This is a problem for many institutions and is sometimes referred to as a spaghetti-like systems architecture. Hundreds of joins are not documented at all; they have been implemented using many different IT techniques, often known only to the technical developer who created them, and can be hard for other developers to unravel and maintain.
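The scale problem described above can be made concrete with a small back-of-the-envelope calculation: with point-to-point integration, the number of possible joins grows with the number of *pairs* of systems, whereas with a shared data service each system needs only one connection. A minimal sketch (the system names are illustrative examples, not an inventory of Bristol's actual estate):

```python
from math import comb

# Illustrative master-data systems (example names only).
systems = ["SRS", "CRM", "VLE", "Timetabling", "Finance", "HR", "Alumni", "Estates"]

# Worst case for point-to-point integration: every pair of systems that
# shares data needs its own bespoke join, so the count grows quadratically.
point_to_point = comb(len(systems), 2)

# With a shared master-data service, each system needs only one connection,
# so the count grows linearly in the number of systems.
via_shared_service = len(systems)

print(f"{len(systems)} systems: up to {point_to_point} point-to-point joins, "
      f"or {via_shared_service} connections to a shared data service")
# → 8 systems: up to 28 point-to-point joins, or 8 connections to a shared data service
```

With thousands of joins already in place at Bristol, this quadratic-versus-linear difference is the core of the case for rationalisation.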

 

We are now in a position where seemingly small system changes have had major consequences for other joined systems, resulting in ‘broken’ point-to-point interfaces. For example, we have had cases where an organisational hierarchy change made in our HR system caused breakages in University systems elsewhere, because there was no process in place to manage the safe propagation of the change throughout the set of systems that rely on this master data. Moreover, staff users report that in many cases it is no longer clear how data is stored and used between different systems. Even expert users often know that data is transferred from one system to another ‘somehow’, but in the case of cross-functional processes they are often unable to easily ascertain to what extent this happens via an automated interface, and when or how manual intervention may be required.
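One common SOA pattern for the propagation problem described above is a publish/subscribe hub: the master system announces a change once, and every registered consumer receives it, so no dependent system is silently bypassed. A minimal sketch, assuming a hypothetical in-process bus and example consumers (not Bristol's actual design):

```python
from collections import defaultdict

class MasterDataBus:
    """Minimal publish/subscribe hub: consuming systems register interest in
    a master-data topic and receive every change event published to it."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every subscriber, in registration order.
        for handler in self._subscribers[topic]:
            handler(event)

# Hypothetical consumers of the HR system's organisational hierarchy.
bus = MasterDataBus()
received = []
bus.subscribe("org_hierarchy", lambda e: received.append(("finance", e)))
bus.subscribe("org_hierarchy", lambda e: received.append(("student_admin", e)))

# The HR system announces a department rename once; both consumers see it,
# rather than discovering the change later through broken interfaces.
bus.publish("org_hierarchy", {"dept": "D123", "change": "renamed"})
```

In practice this role is played by middleware (an enterprise service bus or messaging product) rather than in-process callbacks, but the contract is the same: change events flow through one managed channel instead of many undocumented joins.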

 

In the current economic climate we not only need to improve the reliability and cost-effectiveness of our data integration architecture to cope with operational processes that increasingly depend on IT systems, but we also need to respond effectively to changes in government-driven reporting requirements: for example the REF (Research Excellence Framework), KIS (Key Information Sets), HESA reporting and so on.  The external demand for data puts increasing pressure on our internal ability to manage our data well.

 

Furthermore, the growth of Cloud opportunities such as Software as a Service (SaaS) and the possibility of shared services within the HE sector mean that, in order to be ready to take advantage of these potentially cost-saving external solutions, we need to understand better than ever how we share data across processes and IT systems. How, for example, could we move our student administration system “into the Cloud” if we cannot easily import data into it from our CRM, or export data from it into our alumni database?

 

Finally, in an increasingly competitive sector, the University is paying greater attention than ever to business intelligence. Our current strategy for measuring the University’s year-on-year performance against its strategic performance indicators depends on the quality of the data supplying the data warehouse used for BI reporting. If we do not have a well-managed, standardised data integration architecture, together with centrally (and unambiguously) defined semantics for our master data structures, then we will most likely fail to deliver on our BI strategy in the longer term.  

 

The business case

 

University projects are the agents of change at Bristol. To initiate a project, the project proposal is taken through a two-stage business case process: Stage 0 initiates the process and involves a standard-format initial business case document being presented to a senior decision-making body (our Portfolio Executive) for approval. We have been through that process for this project at Bristol: in November 2012 our Stage 0 Business Case for Master Data Integration was approved, allowing us to proceed to the Stage 1 Business Case, a fully costed and very thorough document. At the time of writing we are still developing the Stage 1 business case, but it can be summarised as follows.

 

The master data integration project will deliver the following outcomes:

 

  • Completion of documentation for the data dictionary and interface catalogue,
  • Full analysis of the above, to identify where we can reduce many point-to-point system integrations to a minimum number of sustainable data services,
  • The establishment of a data governance structure, to include data stewards responsible for the control of changes to data structures in master systems over time,
  • A full technical options analysis and a recommended SOA solution for the University of Bristol,
  • Incremental implementation of SOA for the reuse of data in our master data systems, colloquially known at Bristol as the “blue ring”.
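The interface catalogue and data dictionary listed above are, at heart, structured records about joins and data structures. A minimal sketch of what one catalogue entry might capture, and of the analysis step that groups joins by the data they carry (the field names and records are assumptions for illustration, not Bristol's actual schema):

```python
from dataclasses import dataclass

@dataclass
class InterfaceRecord:
    """One entry in a hypothetical interface catalogue: enough detail that
    a developer other than the original author can maintain the join."""
    source_system: str
    target_system: str
    data_entity: str    # e.g. "student", "org unit", "staff"
    transport: str      # e.g. "nightly file drop", "web service"
    owner: str          # who to contact when it breaks
    documented: bool = False

# Illustrative catalogue entries (invented for this sketch).
catalogue = [
    InterfaceRecord("SRS", "VLE", "student", "nightly file drop", "IT Services"),
    InterfaceRecord("SRS", "Timetabling", "student", "database link", "IT Services"),
    InterfaceRecord("HR", "Finance", "org unit", "web service", "IT Services"),
]

# The "full analysis" step above: group joins by the entity they carry,
# to spot candidates for replacement by a single reusable data service.
by_entity = {}
for rec in catalogue:
    by_entity.setdefault(rec.data_entity, []).append(rec)

print({entity: len(recs) for entity, recs in by_entity.items()})
# → {'student': 2, 'org unit': 1}
```

Even this toy grouping shows the principle: wherever several joins pass the same entity, they are candidates for consolidation into one data service.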


The master data integration project will deliver the following benefits:

 

  • Improve the operational management of master data and how it is automatically passed between IT systems,
  • Improve our ability to carry out strategic-level reporting and to provide management or government/funder reporting information easily on demand,
  • Increase the speed and ease (and thereby reduce the cost) of implementing new IT systems on an ongoing basis (currently much time is spent analysing existing system integrations ahead of every new IT system to be integrated, owing to a lack of central documentation),
  • Reduce the cost and increase the reliability of maintaining our data integration architecture on an ongoing basis (Gartner research - http://www.gartner.com/it-glossary/total-cost-of-ownership-tco/ - demonstrated that 92% of the Total Cost of Ownership of an average IT application, over 15 years of use, is incurred after the implementation project has finished; a significant part of those costs concerns maintaining the application’s seamless integration within the organisation’s application architecture, so simplifying this aspect will save costs over time),
  • Ensure that Bristol is part of the sector-wide movement towards addressing data issues, and that we are ready to take advantage of Cloud and Shared Services opportunities.

 

The master data integration project will cost:

To be continued! We are finalising costings for a business case to be completed in the Autumn of 2013.

 

Key drivers

 

The key drivers have been described above but can be summarised as follows:

 

  • The University needs to save money on cumbersome manual data re-entry and data-cleansing processes that could be automated,

  • The University needs good data in order to do business intelligence reporting,

  • The University needs to reduce the cost and unreliability of the current data integration architecture (caused by the proliferation of point-to-point system integrations),

  • The University needs to be agile in response to on-going changes in government and funder reporting requirements,

  • The University needs to be confident that data security is managed well as student and staff personal data is passed from system to system.

 

JISC resources/technology used

 

JISC infoNet “10 Steps to Improving Organisational Efficiency” (see http://coredataintegration.isys.bris.ac.uk/category/jisc-infonet-10-steps-to-improving-organisational-efficiency/ )

Support from the JISC Transformations Programme in networking with colleagues and arranging follow up meetings regarding SOA and the application of an Enterprise Architecture approach to developing our master data integration strategy at Bristol.

JISC ICT Strategic Toolkit (http://www.nottingham.ac.uk/gradschool/sict/ )

JISC Archi Modelling tool (http://archi.cetis.ac.uk/ )

Data asset framework – audit tool for data (http://www.data-audit.eu/ )

The #luceroproject (http://lucero-project.info/lb/ ) 

 

Outcomes

 

Achievements

 

  • Senior-level buy-in to the master data integration strategy that evolved from this project, achieved through a one-off workshop session with the University's Portfolio Executive and a follow-up Stage 0 business case, which was approved.
  • Buy-in from within IT Services to starting and maintaining the interface catalogue (see http://coredataintegration.isys.bris.ac.uk/2013/06/16/important-documentation-for-soa-the-interface-catalogue-and-data-dictionary/ ) and, similarly, the enterprise data dictionary, which people beyond IT Services are helping to complete.
  • Collaboration with other universities - useful visits to the Universities of Cardiff and Oxford to discuss their experiences of deploying SOA solutions, with a follow-up workshop at Bristol planned for September 2013.
  • The project blog, which attracted feedback from useful contacts around the sector: http://coredataintegration.isys.bris.ac.uk/category/enterprise-architect/
  • Development of the SOA roadmap, tailored to Bristol's needs. 

 

Benefits

 

  • Starting the interface catalogue and data dictionary for particular projects has brought direct benefit to those projects (for example, a project that will introduce a new ERP system to Bristol over the next year is currently using the data dictionary approach to record the data models of our finance, payroll and HR databases ready for data migration). This in itself demonstrates the value of central documentation, and has increased buy-in for the approach and the understanding that these two information stores need to be centrally supported services, maintained in a devolved way across the organisation on an ongoing basis. It has become clear that this step alone will save costs when migrating to new systems in future. In addition, the interface catalogue has already revealed a large number of separate point-to-point system integrations that essentially perform tasks on the same data (many pass student data, many pass organisational hierarchy data, many pass staff data between systems, and so on). This gives us evidence of obvious opportunities to rationalise our integration architecture significantly.
  • Collecting examples of where data problems exist across the organisation helps strengthen the case for SOA,
  • Helping senior management to clearly understand why Bristol needs good master data management has been extremely beneficial to this project and a necessary precursor to future investment in SOA.
  • Collaboration with other Universities continues to be invaluable in acquiring lessons learned, comparing and contrasting approaches and gaining knowledge from others' experiences of deploying SOA solutions - we are lucky to be in a sector that is so willing to share information in this way. It also paves the way to our sector potentially having more of a collective impact on vendors in the future and ensuring they meet our technical and data requirements.  

 

Drawbacks

 

  • Getting senior-level buy-in for significant investment in something that is not clearly visible to the end user - i.e. middleware - and which may involve high up-front costs and a long lead time before benefits are realised is a challenge; it requires strong evidence and good communication with a non-technical audience.
  • At Bristol, using business cases as the sole mechanism through which we initiate a project is somewhat restrictive and time-consuming, although it can conversely be argued that it enforces the kind of rigour needed to scrutinise the costs and benefits of implementing each stage of our SOA roadmap.
  • Getting consensus across IT teams can be difficult when the prevailing culture has been based on the freedom to implement point-to-point interfaces between systems. Debate ensues over whether SOA should be rolled out experimentally - a toe-in-the-water approach using open-source solutions initially - or whether to begin by procuring an expensive commercial product and then roll it out incrementally over time. 
  • We anticipate that implementing data governance to support SOA will again challenge the prevailing organisational culture, in which certain parts of the institution have a localised sense of "data ownership", rather than a general appreciation that IT systems may hold master data which should be managed as a corporate asset rather than viewed as owned and controlled exclusively by a particular group of stakeholders.

 

Key lessons

 

  • Cultural change is necessary - both within IT services (moving away from traditional point-to-point system integration approaches) and more widely (in terms of perceived data "ownership") - and convincing people on the ground requires identifying real pain points that they can relate to.
  • It takes longer than you would expect to develop an institutional roadmap for SOA that all relevant parties (from non-technical senior management down to "techies" in IT services) can buy in to.
  • Hearing about real experiences from other HEIs is valuable.
  • Doing an institutional maturity assessment is essential in understanding where on the road to SOA you are.
  • Understanding the relationship of the design of a SOA plan with business processes and strategic goals is essential – hence an EA approach is highly recommended.
  • Describing a roadmap for the particular institution concerned is essential in getting buy-in and helping people to understand the business case for SOA.

 

Looking ahead

 

The next phases on the master data integration strategy roadmap are:

 

  • An Autumn Knowledge Exchange Workshop on SOA to be hosted at the University of Bristol,
  • Submission of the Stage 1 business case (described above) to the Portfolio Executive, recommending the procurement of a SOA solution and developer training to support its rollout, 
  • Beyond reusable master data services, we would then want to reach a stage where we have a SOA competency centre (the right set of SOA-trained business analysts and IT developers), giving us the ability to orchestrate and optimise end-to-end processes that share and manipulate data, creating efficiencies across the institution (across the research and student lifecycles).
  • We would want to continue to collaborate with other Universities in the HE sector who are also developing SOA competencies, to the point where we may adopt greater sector-level data model standardisation and begin to influence vendors with a stake in our sector to encourage them to standardise their IT products according to the set of data services we require. 

 

Sustainability


Our strategy for sustainable Master Data Management is intended to include:

 

  • Developing and maintaining a SOA skillset in-house (there is the opportunity to outsource this at a later stage, but certainly at this stage it makes sense to develop in-house our knowledge of the organisation's needs regarding data services and how to implement them effectively within the institution)
  • Implementing formal process change within the appropriate governance structure, such that new system integrations cannot be implemented purely according to individual developers' programming preferences: instead, the integration method must be proposed, reviewed against the master data management strategy and formally approved before a "go live" date is agreed.
  • Implementing data governance - stewards within the organisation who will take responsibility for the safe propagation of master data structure changes throughout the organisation, and to guarantee the quality and security of master data that will be reused by other IT systems.
  • Maintaining the institutional SOA roadmap so that the organisation clearly understands its direction of travel with respect to SOA and such that it can be clear when it needs to adapt to fresh external or internal drivers as and when they should arise.