Bolton case study at 231012 (2)

The University of Bolton

 

BOLT-CaP

 

1 Background

The BOLT-CaP project was initiated in recognition of the considerable effort that goes into capturing data for statutory return purposes, and of the fact that the data captured is sometimes lacking in robustness. Staffing and estates costs are often the largest areas of expenditure within HE organisations, yet for the purposes of statutory returns they are sometimes the areas where the data is least robust, due in part to the methods adopted for data collection.

If the data had greater validity it could be used for a variety of BI purposes, such as the costing and pricing of the component parts of the University's business.

At the University of Bolton, most data sources are stored within monolithic information systems and the means of interrogation and reporting often takes the form of:

  • standard static reports generated by central specialists,

  • ad hoc reports which are generated by central specialists and take time and effort to specify and then prepare, and

  • ad hoc reports which are generated within local units, often with limited understanding of the data sets being used and with the potential for significant errors and misleading conclusions.

1.1 Aims and Objectives

The primary aims and objectives of the project therefore have been:

  • to determine a means of defining, capturing and reusing data to better support operational priorities and strategic decision-making by:

  • improving data quality and richness

  • increasing data reuse possibilities

  • to determine a set of data ownership principles that enable scalability of the approach taken so that the work is sustainable beyond the project end by:

  • defining a set of principles which include the responsibility that goes with data ownership

  • determining standard data definitions and coding structures

  • defining a standard means of warehousing which facilitates reporting at different levels of granularity to aid decision making at different levels of the organisation and recognises the needs of organisational change

A further objective concerned how the project should achieve its aims in terms of in-house or bought-in expertise. Whilst there are many occasions when a system development is best delivered through an externally produced and procured product, as can be the case with BI dashboards, the project team held the conviction that, in order to develop a suite of tools that are developable, expandable and sustainable into the future, the approach of greatest value to this organisation was to develop the capability in house and to use external expertise only where needed to help grow internal knowledge and experience.

The aim was therefore to develop the tools using low-cost interventions, such as Reporting Services, rather than to produce specifications for external consultants to undertake the development work, which in the longer term would require the University to invest more financial resources as developments and changes are needed. The University has seen many structural changes in the past, and the team recognised that in a volatile external environment the likelihood of further structural change is high; solutions therefore need to be flexible enough to cope with the coding changes that inevitably result from structural change.

1.2 The Project Approach

The project was split into four distinct phases:

Phase 1 focused on developing the means by which statutory return data is captured. This encompassed data location and definition and, where necessary, the development of data capture tools or mechanisms, in particular the creation of a Workload Allocation Tool.

Phase 2 considered the issues surrounding the warehousing of data, in particular determining a way of warehousing data that allows expandability into the future.

Phase 3 focused on the development of a Costing and Pricing Tool which uses the data captured and/or created in Phases 1 and 2.

Phase 4 was only briefly explored within the project and is to be developed post-project. There is, however, a real opportunity for the data being captured to be used within 'review' tools that will be developed: for example, review of workload allocation to gauge how staffing resource is being managed across the various academic units, and review of actual against planned module and programme costs that can be interrogated to better understand variances and so make informed decisions on resource allocation where appropriate.

1.3 Scope

During the development of Phase 1 it became apparent that the scope of the project needed to be significantly wider than initially planned. Some issues around lack of standardisation across the University needed to be addressed if the creation of data capture tools was to be achievable. A Workload Allocation Framework was negotiated with the University and College Union (UCU) as part of the package of change initiatives during May-July 2012, which has resulted in the University implementing the Workload Allocation Tool from July/August 2012 in planning and managing the academic workload for the 2012/13 session.

1.4 Governance

The developments associated with the project highlighted the need for improved governance. From August 2011 the Marketing and Corporate Intelligence Committee (MCIC) was introduced into the University's committee structure. The MCIC reports directly to the University Corporate Management Group and has a number of working groups focusing on different aspects of corporate data, including the Management Information Working Group (see Appendix 2, University Committee Structure). Amongst other issues, the importance of data guardianship is being explored through this working group, and control mechanisms have been put in place to better regulate coding changes. Following on from the initial data cataloguing and data definition work, the BoltCaP team members, who also sit on the MI Working Group, are continuing to establish coding structures that meet the needs of users whilst fitting within the constraints of the various applications in terms of the coding levels available.

 

 

 


The project governance attempted to follow the principles established within the PRINCE2 project management methodology, in that a project board was established along with a steering group and a project management group. The project management group met far more frequently than had been planned, whilst the responsibility of the Project Board was moved to the University-wide IT Committee. The changes to the project management and governance structure, which worked more effectively, are shown below:

1.5 Technologies & Standards Used

1.5.1 Software Development Approach

The project and development team have followed an ‘agile’ software development approach to allow the team to cope with, and overcome, continuous data structure and organisational changes. This way of working has enabled a more flexible approach for delivering the required work packages. It is hoped that this way of working will be fully adopted in the future to enable more efficient and robust development of software applications within the University. The software has undergone full project team testing before delivery to nominated key users to undergo User Acceptance Testing (UAT).

1.5.2 Technologies Used:

i) Web Server / Application / Database Server – Microsoft Windows Server 2008

  • We are using VMware virtual technology to host all applications and databases.

  • There are currently two environments for the WorkLoad Allocation Tool (WLAT), namely a Test/Development environment and a LIVE environment. The Test/Development environment consists of one virtual server hosting both the database and web application.

  • With the LIVE environment the database and application are both hosted on separate production servers. This enables better security and server performance.

 

ii) Databases and Data Warehousing

  • Developed using Microsoft SQL Server 2008 R2

iii) Software Prototyping

  • Using Microsoft Access 2007 and Balsamiq mockups

iv) Software Development

  • WLAT solution system developed using Microsoft ASP.Net MVC and Entity Framework

  • Software developed using agile development and project management methodologies.

  • In partnership with Collabco we used Scrum, itself an agile approach to project management and software development, as the backbone of the development process:

 

The key aspects of Agile methodology we applied were:

  • Light up-front requirements specification documentation focussing on key requirements and business outcomes.

 

  • Iterative development model using rapid-prototyping and incremental refinement

 

  • Regular demonstration of prototypes to key stakeholders throughout the life cycle of the development (via Skype meetings, both daily and end-of-week progress demonstrations)

  • Empowered customer representatives continually input into the requirements and evolution of the product.

v) Reporting

  • Reports and Dashboards created with Microsoft Reporting Services (MSRS)

vi) Enterprise Architecture

  • Archi modelling tool for ‘as-is’ and ‘to-be’ diagrams

  • Microsoft Visio2007 for Business Process diagrams

1.6 Senior Management Support

An issue that was not foreseen but had a significant impact on the project was the lack of buy-in from a senior sponsor from the earliest stages. Due to a retirement, the sponsor changed and for some months the project was somewhat adrift. This had an adverse impact both on moving the project forward and on the morale of the project team. A number of positive things happened in October/November 2011 which helped to turn the project around:

  • discussions with the senior management of the university began to have an impact and the value of the project was better understood;

  • a new Project Director was agreed (Executive Dean - Market and Corporate Intelligence, member of the Corporate Management Group);

  • the remit of the Information Technology Committee was redefined with greater emphasis on the strategic and operational priorities in terms of systems and IS&T Development. The ITC membership includes the Project Director and the Project Manager which has raised the profile of the project.

The key lesson learned is that the sponsor at the highest level is not just a figurehead; whilst a key sponsor may not appear to be active, the lack of a sponsor with buy-in can significantly affect the success of a project.

2 Outcomes

2.1 Six Key elements of BI

2.1.1 Process Improvement

The different components of the project have resulted in a variety of Process Improvements:

  1. Workload Allocation Framework (WLAF) and Workload Allocation Tool (WLAT) – the framework has resulted in transparency of academic workload across the organisation, whilst the WLAT has standardised the means by which the academic staffing resource is allocated and the associated data is captured by Academic Managers;

  2. The data captured in (1) above produces a validated Time Allocation Survey for each member of academic staff with greater accuracy than was the case before the introduction of the WLAT;

  3. Items (1) and (2) enable greater accuracy in the generation of the HEFCE TRAC return, which in turn leads to greater accuracy in the generation of the FEC;

  4. The process for capturing and reporting space data, both for TRAC and EMS purposes, has been standardised using MS Reporting Services collecting information from an in-house developed Space Management Database. This has been informed by the methods used by other universities, which became apparent during dissemination events, e.g. the North West TRAC Group. Whilst previously there was tension between the TRAC guardian and the space manager because of the perceived complexity of the reporting requirements, the development has resulted in a simpler process that involves stakeholders across the university and has therefore created greater transparency;

  5. By developing the data warehouse we are able to reuse data much more efficiently for purposes such as timetabling, which means that the processes for producing published data are much more timely, efficient and reliable, with data not requiring multiple audit points and checks;

  6. The costing and pricing tool draws, via the data warehouse, on data generated from the WLAT, the Module Database and the Space Database. When producing proposed academic products, the automated calculations mean that the amount of data input and calculation required from the Academic Manager is reduced significantly.

2.1.2 System Implementation

Section 2.4 provides the detail of the project specification together with the means by which the various components have been or are to be implemented. The project has impacted on a range of both developers and users, and the implementation has therefore taken different forms. For example, the implementation of the warehouse component involved an external organisation (Solstone) providing advice and guidance on the development and implementation of the warehouse, together with an insight into visualisation techniques to assist end users. The implementation of the Workload Allocation Tool and the Costing and Pricing Tool has involved staff development for the relevant Academic Managers who are involved in resource management and/or programme development. The use of the Workload Allocation Tool, which in turn creates the Time Allocation Survey, has required training sessions with groups of staff. All of these implementation steps will continue with assistance from the University's IS&T team, including the training officers and the Staff Development Officers from the Human Resources and Organisational Development Unit.

2.1.3 Change Management

Initially the project appeared to require little in terms of change management. The project team put in place, from an early stage, a stakeholder group to inform developments, and the team and stakeholders alike felt that the aims of the project offered tangible benefits: better data, and tools to make life easier for those managing resources who may have had little resource and finance management experience. However, as the project unfolded it became very apparent that the cultural impact the project was spearheading required sensitive handling, and careful management techniques needed to be deployed to manage the changes that colleagues were experiencing.

The University went through a major structural reorganisation which began three months into the project and had an impact on colleagues who had been in the project’s major stakeholder groups. In some instances colleagues have changed roles including moving to posts abroad and some leaving the organisation. The University has also in parallel gone through an academic review over the last 12 months which has required all undergraduate programmes to be refreshed and revalidated and, in some case, redesigned. Set against this backdrop the approach of the project team has been to introduce a wider team of stakeholders to the benefits of the various tools and to obtain feedback to inform further development.

2.1.4 Data Usage

The premise of the project was that data can be used and reused for a number of different purposes by different levels of staff. The process of integrating data sources to create common data definitions has led to changes in the opportunities for data usage which, whilst spearheaded by the project, have had ramifications across the university. Bringing together coding structures means that statutory returns will require less manual intervention and recoding. Trend analysis over time has been hampered in the past: workload allocation, time allocation, and costing and pricing have been particularly difficult to analyse because the data structures have been patchy and inconsistent. There has also been a positive impact on data usage in other areas of the University as a result of the approaches being put in place by the project, for example timetabling and student progression/achievement/attrition.

2.1.5 Data definitions

Three groups of data have been used: Staff, University Structure and Module. Data is sourced from the Student Records System and from HR and Payroll, and also has to be consistent with our Finance System. The requirement for university reporting in parallel to the project has underlined the need for a central group to ensure consistency of data across all systems. University Structure is a prime example of this, especially as the University is reviewing its structure. Examples of where standardisation will benefit include:

  • the name of one particular faculty appears in a variety of different formats; the implementation of standard data definitions will ensure there is one common format, which in turn will help us to provide users with sensible and meaningful data reports.

  • a second example is how we refer to staff by name: we have developed a standardised <surname>, <preferred forename> format which is used across the board.

  • a final example is that of a module being taught in a particular year: <module code>-<year part>-<occurrence>, for example ABC1234-12-A is the first occurrence (A) of module ABC1234 in the year 2012/13.

  • Data definitions and the data usages are being recorded in a data dictionary to enable future impact analysis of changes in data value or definition.
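As an illustrative sketch, conventions like the module-instance identifier above can be validated mechanically once they are recorded in the data dictionary. The regular expression and field names below are assumptions derived from the single example ABC1234-12-A, not the University's actual coding rules.

```python
import re

# Illustrative sketch only: the pattern and field names are assumptions based
# on the example ABC1234-12-A, not the University's actual coding rules.
MODULE_INSTANCE = re.compile(
    r"^(?P<module_code>[A-Z]{3}\d{4})-(?P<year_part>\d{2})-(?P<occurrence>[A-Z])$"
)

def parse_module_instance(instance_id):
    """Split an identifier such as 'ABC1234-12-A' into its component parts."""
    match = MODULE_INSTANCE.match(instance_id)
    if match is None:
        raise ValueError(f"not a valid module instance id: {instance_id!r}")
    return match.groupdict()

def format_staff_name(surname, preferred_forename):
    """Apply the standard '<surname>, <preferred forename>' display format."""
    return f"{surname}, {preferred_forename}"

print(parse_module_instance("ABC1234-12-A"))
# {'module_code': 'ABC1234', 'year_part': '12', 'occurrence': 'A'}
print(format_staff_name("Smith", "Jane"))  # Smith, Jane
```

Keeping such definitions in an executable form alongside the data dictionary makes future impact analysis of coding changes easier to automate.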

2.1.6 BI Maturity

During the bidding process an evaluation of the University was undertaken using the JISC Business Intelligence Infokit, and there was somewhat of a dilemma regarding where on the maturity model the University of Bolton was positioned. On one hand, the data within the university is located in the main within a number of monolithic core applications, such as the Tribal SITS system for student data and Trent for human resources data, and these are recognised as the central "single sources of truth" for the elements they relate to. This means that duplicate data records are, normally, very rare. Many of the systems have been in place for a substantial number of years, and acceptance of central data collection and storage has been achieved through organic change management, involving changing the culture of the organisation, and through technological advances.

However, there remains a suspicion as to the robustness of the data, in particular because whilst the data collected meets the needs of the data owners, it is sometimes not fit for purpose when used in wider university applications. The problems are compounded by the fact that reporting mechanisms are devolved to Faculties, Institutes and Professional Service Units, often requiring the creation of Access and SQL queries constructed by people who may not understand the very complex data framework upon which the reports are structured. Whilst the data is in the main accurate, it may not always be complete. The means of interrogating the data is not robust, there are few standard approaches to data mining and report generation, and data coding within one application often does not match that within another.
So from the perspective of the BI Maturity Model, the University of Bolton was deemed to be at Stage 2 in terms of having centrally managed and locally operated systems, but there remained a mistrust of the data that is retrieved from the systems which would indicate that the organisation had not fully left Stage 1.

 

The university has considered vendor-supplied BI solutions in the past but found the development and/or licence costs to be prohibitive. The maturity model suggests that at Stage 3 a vendor should be selected, and indeed it was felt that the Infokit leads one to believe that the best means of developing BI that will benefit an organisation is by securing the services and products of an external vendor. As a small organisation we sometimes do not have the means to invest in the type of application that meets the needs of the complexity of the organisation; small does not necessarily mean simple. We have therefore developed the applications, sometimes with external developer contribution, using Microsoft technologies such as MS Reporting Services and MS InfoPath, which are more freely available across the University and are affordable under the existing Campus Agreement.

 

Having re-evaluated the BI Maturity Model at the end of the project, we feel that we are now at Level 4 moving towards Level 5. We have not completely solved some of the data definition issues, but the data cataloguing and the strengthening governance mean that data owners have a greater understanding of the responsibility that comes with data guardianship, and there is a drive to achieve compatibility between applications.

2.2 Skills / knowledge development 

The component parts of the project have had a positive impact on skills and knowledge development from a variety of perspectives:

  • Use of agile application development with an external development company (Collabco) has been a very positive experience for UoB's internal development team;

  • External expertise (Solstone) has been used to inform the development of the data warehouse and visualisation techniques to better support users;

  • Wider discussion of the data governance and usage has brought data owners and users together and broadened their knowledge of the use of data beyond their own area of responsibility.

  • Moving to a "service-oriented" approach for data users has changed the emphasis of the relationship between service providers and internal clients, with providers seeking to understand what internal clients need to achieve from the data rather than asking them what coding structures they need, which in the past has led to structures that are not fit for purpose across different systems or uses;

  • The WLAT has involved skills development for resource managers. The tool, together with the standardisation of allocations, has made this process simpler to administer, albeit that this has required significant negotiation across the University.

2.3 Evaluation of Incorporating Enterprise Architecture

The University of Bolton has been involved in developing expertise in Enterprise Architecture (EA) over the last two years, and an output of the project was to explore the applicability of EA to the project. The project team went through a number of workshops at which the "as is" and "to be" scenarios were mapped using EA techniques and the Archi modelling tool. An evaluation by the project team of the process and the maps generated concluded that this method of mapping during the development of a complex set of applications was frustrating and was not bringing benefits to the project. Archi enabled the linkage between systems and services to be mapped; however, the complexity of the data flow in this particular project proved very difficult to articulate in a way that added real value. Given the time constraints, the team determined that continuing to use EA to inform the project development would create unnecessary delays.

 

EA “As Is” diagram

EA “To Be” diagram

2.4 Project Specific Outputs

2.4.1 BOLT-CaP Project Development

The development of the BOLT-CaP BI System was split into a number of components or phases that are expandable post-project. A description of each phase can be found below.

 


Project Overview Diagram

2.4.1.1 Phase 1: develop the means by which statutory return data is captured to: i. increase its accuracy and validity; ii. ensure data is used for appropriate purposes and intended outputs; iii. increase reuse opportunities.

A) Transparent Approach to Costing (TRAC)

Analysis of the data captured to complete the HEFCE TRAC statutory data return identified that approximately 50% of the data relates to staff time and space, both of which were based on data collection techniques that were not robust, a concern across the HE sector. The TRAC data is used for benchmarking purposes but, more importantly, is used to create the full economic costing (FEC) overhead calculation, which is incorporated into costing models for a wide range of University activity, including Research Council awards, academic offerings and overseas delivery (albeit adjusted to reflect different service usage). Data that is not robust potentially leads to costing models that are skewed and inaccurate. The diagram below shows the input and manipulation of data collected for TRAC purposes at project initiation.

Data that is collected for TRAC purposes

BoltCaP has focused on staff and space data, which is only a small part of the whole but accounts for approximately 50% of the influence that the data has on TRAC and FEC, as shown in red above.

B) Development of the Workload Allocation Framework and Tool

A significant element of the TRAC data collection requires all universities to undertake a Time Allocation Survey (TAS), which quantifies the time and effort of full-time and fractional academic staff by activity category. This data is often not validated and is based on subjective interpretations. A large part of Phase 1 therefore focused on the understanding and development of a Workload Allocation Tool (WLAT), which provides Academic Managers with the means to manage staff workloads and allocations in an open and transparent way. Data is taken from various primary and secondary sources to populate the tool, and the Academic Managers, through a process of management planning and discussion, determine the workload of staff, which is then available for a number of purposes, including providing an accurate data return for TAS purposes. The creation of the TAS is automated and validated whilst meeting TRAC compliance requirements. The data captured within the WLAT is then reusable for statutory returns, the Professional Development Planning (PDP) and appraisal process, and academic planning and review. Further, the data captured is warehoused to be interrogated by the Bolton Costing and Pricing Tool (BOLT-CaP).
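The automated generation of the TAS from workload allocations can be thought of as a simple aggregation: each staff member's allocated hours are summed by activity category and expressed as percentages of their total. The sketch below is illustrative only; the category names and figures are invented, not the actual TRAC categories or the WLAT schema.

```python
from collections import defaultdict

def time_allocation_survey(allocations):
    """Aggregate (activity_category, hours) allocations into TAS percentages.

    Illustrative sketch only: categories and data shapes are assumptions,
    not the actual WLAT schema or TRAC category list.
    """
    totals = defaultdict(float)
    for category, hours in allocations:
        totals[category] += hours
    grand_total = sum(totals.values())
    if grand_total == 0:
        raise ValueError("no allocated hours to survey")
    # Express each category as a percentage of the staff member's total allocation.
    return {category: round(100 * hours / grand_total, 1)
            for category, hours in totals.items()}

# Hypothetical annual allocation for one member of staff (hours).
allocations = [("Teaching", 550), ("Research", 220), ("Support", 330)]
print(time_allocation_survey(allocations))
# {'Teaching': 50.0, 'Research': 20.0, 'Support': 30.0}
```

Because the percentages are derived from the same validated allocation data the Academic Managers already maintain, the survey needs no separate self-reporting step.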

WLA Tool Screen shots (Academic Manager Views for Work Units and Staff Allocations)

Work Units

Fig 1.0 Work Unit - Index Page

Fig 1.1 Work Unit - Details Page

Fig 1.2 Work Unit - Edit Page

Fig 1.3 Work Unit – Create New Page

Staff Allocations:

Fig 1.4 Staff Allocations - Index Page

Fig 1.5 Staff Allocations - Edit Page

Fig 1.6 Staff Allocations – Create New Page

 

 

Fig 1.7 Time Allocation Survey – automated collection of Workload Allocation data

C) Space and Estates Data

TRAC and Estates Management Statistics (EMS) statutory returns require the collection and collation of space and estates information. Analysis of the data collection methods for 2010/11 indicated a lack of robustness, and a lack of understanding at operational and management level of what space and estates data were required to ensure compliance with these statutory returns. A Space Management Database has been developed to ensure that data is captured in a standard way and that changes are reflected in a timely and auditable manner. As the number of people who need access to the application is relatively small, security/firewall/access issues have been controllable, and the use of MS InfoPath screens has enabled a set of input and updating screens to be developed very cost-effectively. Working with the Costing and Pricing North West Networking Group, initiated as part of the Jisc BoltCaP dissemination requirements, has led to further developments in establishing an annual workflow that provides greater transparency of space and estates costs whilst maintaining compliance for TRAC and EMS purposes.

D) Timetable Planner

The University moved to central timetabling some 10 years ago and uses Celcat for this purpose. The workflow around the data being made available for timetabling purposes has been cumbersome and has involved the generation of various spreadsheets that Academic Managers have designed themselves and use for planning purposes. Different versions of the data are used in communication between the Academic Managers, the Timetabler and the Student Data Management section which is responsible for course, module and student data.

As part of the Workload Allocation Tool, the means by which the modules were validated as being available for the timetabled period needed to be addressed to try to achieve a more standard and controlled method of data generation in terms of quality, accuracy and timeliness.

To this end, a set of Timetable Planning (forms/tools/applications) have been developed from which the Central Timetable is generated. The Workload Allocation Tool is able to draw upon information from the teaching components that the Academic Managers are allocating to staff members.

The Timetable Planning Tools will be further developed post project to establish a set of web forms for ease of data input and capture.

Figure: Example of the Timetable Planning Tool

E) Revalidation of UG Programmes and the Module Database

As a separate set of projects, the University has over the last 10 months embarked on a process of revalidating its entire suite of undergraduate programmes. As part of the revalidation process, modules have been reconfigured, often significantly, in line with the requirements of the University's Core Curriculum Agenda and the Key Information Sets (KIS). The modules will then be present in the newly developed Module Database, from which the KIS data will be made publicly available and whose delivery pattern will inform the WLAT by ultimately providing Academic Managers with a schedule of delivery into which they will need to allocate staff time.

The development of the Module Database also allows the BOLT-CaP Tool to pull costing data from the Module Database which is informed by the delivery pattern of live modules and provides the framework that surrounds the costing of new modules for planning purposes.
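The kind of automated calculation this enables can be sketched as follows. The function, rates and the FEC overhead multiplier below are invented placeholders for illustration, not University figures, and the real tool combines many more cost components drawn from the warehouse.

```python
def module_cost(staff_hours, hourly_staff_cost, space_hours, hourly_space_cost,
                fec_overhead_rate):
    """Combine staff time and space usage into a module cost with an FEC overhead.

    Illustrative sketch only: all parameters and rates are hypothetical
    placeholders, not the University's actual costing model.
    """
    direct_cost = staff_hours * hourly_staff_cost + space_hours * hourly_space_cost
    # Apply the full-economic-costing overhead as a simple multiplier.
    return direct_cost * (1 + fec_overhead_rate)

# Hypothetical module: 120 staff hours at 40/hr, 48 room hours at 15/hr, 50% overhead.
print(module_cost(120, 40.0, 48, 15.0, 0.5))  # 8280.0
```

Because the staff-hour and space figures are pulled automatically from the WLAT and Space Database via the warehouse, the Academic Manager only supplies what is new to the proposal.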

The development of the Module Database is of primary concern to the University because the data has operational significance, i.e. programme deliveries are dependent on the completion of the database, and the publishing of KIS is a statutory requirement. As such, the completion of this project has organisational priority.

In order to provide the data requirements for the WLAT, it has been necessary to create data tables within the data warehouse which provide the required data, rather than extract it from the Module Database; however, the WLAT web form will be updated to use the Module Database data when it becomes available and is robust.

2.4.2 Phase 2: develop the means by which the data is warehoused to: i. facilitate future growth and development of the data capture and BI tools; ii. standardise terminology and create an organisation-wide data catalogue/dictionary; iii. in terms of the BOLT-CaP project, enable the various data sources to be analysed to create and then review the costing model of a module/programme.

The warehousing of data is a new concept at Bolton: previously, data from different sources has been queried via SQL and Access queries. The project has become the launch pad for the development of a warehouse that will ultimately meet the MI and BI reporting needs of the University. This has meant that a wider range of considerations has had an impact on the development of the warehouse than those anticipated by the project’s original goals.

The warehousing of the data has two functions:

  • To provide a common data source independent of the originating application, allowing the data to be provided in a common interpretation and data type. Thus changes in the primary data source do not affect the data provided, and the primary data source can be managed for its primary role, and replaced if necessary.

  • To provide additional information over and above what is stored in a primary data source. For example, tracking the full history of status changes in students’ enrolment records is not feasible in the Student Records System, but suitable data extraction allows this history to be recorded. Additional tables supporting groupings of status codes will also enhance the reporting data.

Over and above these two functions, the construction of the warehouse will provide a flexible cross-system reporting provision for the University. This has brought a focus on the project and on reporting tools for information (for example, on applications) which will feed through to the project’s planning tools (for example, the timetable planner).
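The second warehouse function described above, capturing a fuller history than the source system retains, can be sketched as follows. This is a minimal illustration with hypothetical table and field names (SourceSnapshot, StatusHistory); the actual extraction runs in TSQL against SITS, so Python/sqlite3 stands in purely to show the append-on-change idea:

```python
import sqlite3

# Hypothetical tables: a snapshot of the source system's current statuses,
# and a warehouse table accumulating the full history of changes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE SourceSnapshot (studentID TEXT PRIMARY KEY, statusCode TEXT);
CREATE TABLE StatusHistory (
    studentID  TEXT,
    statusCode TEXT,
    recordedAt TEXT DEFAULT CURRENT_TIMESTAMP
);
""")

def record_status_changes(conn):
    """Append a history row for every student whose status differs from
    the most recently recorded one (or who has no history yet)."""
    rows = conn.execute("SELECT studentID, statusCode FROM SourceSnapshot")
    for student_id, status in rows.fetchall():
        last = conn.execute(
            "SELECT statusCode FROM StatusHistory WHERE studentID = ? "
            "ORDER BY rowid DESC LIMIT 1", (student_id,)).fetchone()
        if last is None or last[0] != status:
            conn.execute(
                "INSERT INTO StatusHistory (studentID, statusCode) VALUES (?, ?)",
                (student_id, status))
    conn.commit()

# First extraction run: a student is enrolled...
conn.execute("INSERT INTO SourceSnapshot VALUES ('9612345', 'ENROLLED')")
record_status_changes(conn)
# ...a later run: the status has changed in the source system.
conn.execute("UPDATE SourceSnapshot SET statusCode = 'WITHDRAWN'")
record_status_changes(conn)

history = [r[1] for r in conn.execute(
    "SELECT studentID, statusCode FROM StatusHistory").fetchall()]
print(history)  # ['ENROLLED', 'WITHDRAWN']
```

Each extraction run only appends rows, so the warehouse keeps the full status trail even though the source system holds just the current value.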

Data capture from the primary data sources (SITS and TRENT in particular) has progressed well, albeit with real complexities in creating a common understanding of data definitions and the responsibilities of data owners in an environment of integrated data usage. The really significant issues for the project, however, have been around the need for some (secondary) data to be available from other projects running in parallel, in particular the Module Database, from which the Key Information Sets will be taken; these articulate the delivery pattern which then drives the Workload Allocation Tool. Cross-membership within project teams, in particular by the Systems Team Leader, has ensured compatibility and the maintenance of the data dictionary standards.

2.4.3 Phase 3 Creation of a Costing Tool

Using data from all primary and secondary data sources, the Costing and Pricing Tool brings together the costs for existing modules and average costs for typical modules of different categories, eg lab-based, lecture-based or workshop-based, that are planned to be developed. The costing tool then incorporates overhead costs generated, where appropriate, from the TRAC data which informs the FEC overhead rate. The University operates both on and off campus with a range of delivery methods. The costing tool differentiates between the various delivery methods and generates costs, including the overhead rate, that reflect the delivery methods required, eg on campus, off-campus franchise, off-campus flying faculty etc.
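The costing arithmetic described above can be sketched roughly as follows. The overhead rates, delivery-method categories and figures are illustrative assumptions only, not the University’s actual TRAC/FEC values:

```python
# Illustrative overhead rates, expressed as a fraction of direct staff cost
# per delivery method. These numbers are invented for the sketch.
OVERHEAD_RATES = {
    "on_campus": 0.95,
    "off_campus_franchise": 0.40,
    "off_campus_flying_faculty": 0.60,
}

def module_cost(staff_hours, hourly_rate, non_staff_cost, delivery_method):
    """Direct delivery cost plus a method-specific FEC-style overhead."""
    direct = staff_hours * hourly_rate + non_staff_cost
    overhead = staff_hours * hourly_rate * OVERHEAD_RATES[delivery_method]
    return round(direct + overhead, 2)

print(module_cost(120, 45.0, 800, "on_campus"))             # 11330.0
print(module_cost(120, 45.0, 800, "off_campus_franchise"))  # 8360.0
```

The point of the sketch is simply that the same direct-cost inputs yield different full costs once the delivery-method-specific overhead rate is applied.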

INSERT PDF 1 = Off Campus Costing Form (NB for commercial reasons figures are test only)

INSERT PDF 2 - On Campus New Course Proposal

2.4.4 Phase 4 Creation of a Costing Review Tool

An opportunity that has become more evident, and is taking on greater importance, is that in creating the costing tool we are gathering data that can then be used for monitoring: to determine variance between actual and planned costs, and to compare programme costs, enabling drilling down to determine the reasons for variance and to inform programme redesign and resource planning. Phase 4 of the programme is discussed further in Section 3, Looking Ahead.

INSERT PDF 3 - Course Review Form

 

 

2.5 Intangible Benefits

i a clearer understanding of the structures of primary and secondary data sources;

ii the need to gain senior management buy-in, together with the change in senior sponsor, has moved the organisation forward in its appreciation of the long-term value of the project and of BI in its broader sense;

iii discussions have led to the acknowledgement and addressing of standards issues in the broader organisational operations, ie agreeing what a working day is to be in order to determine the workload allocation model parameters, which is leading to greater transparency;

iv a development methodology that works across projects and ensures cross-over membership, so that the benefits of each project work in harmony rather than in conflict with each other.

 

3 Looking Ahead

The Costing and Pricing Tool has generated a lot of interest and colleagues at various levels of the organisation are anxious to have the tool available for them to use. It will simplify the costing of products both on and off campus which has been a goal of the organisation for some time. The Workload Allocation Tool and the Time Allocation Survey report are now being implemented and will be reviewed over the next academic session to ensure that they meet user requirements.

Over the next twelve months the project team will continue to develop the tools to encompass the Timetable Planner and the Costing Reviewer to complete the suite of planning tools around this particular theme.

In the wider context of BI, the University has embraced BI. Whilst the JISC BOLT-CaP project has been running, a parallel internally sponsored BI project has been developing student retention and intervention reports, which has informed the development of the warehouse to meet the needs of the wider organisation. The issue ahead will be managing expectations as the opportunities that BI provides in gaining an understanding of the organisation are appreciated.

4 Summary and Reflection

4.1 Identify objectives early and articulate them clearly

In particular get a clear group understanding of what the tangible outputs are going to be; some of which may be incremental steps within the larger project. Small successes can be great motivators in a large and complex project. Creating a common understanding of the components that are necessary to achieve the end goal can be complicated. There can be a number of different solutions to the problem and, over time, the project aims may need to have greater flexibility than had been anticipated because the “as is” situation may change or may prove to be different to that which had been understood earlier in the project. Constantly checking and rechecking the common understanding with the team members is therefore necessary so that they are all clear about emerging or changing tensions that will impact on the project.

4.2 Don’t underestimate the time that project management and interfacing with the programme (ie production of documentation, wiki etc) takes

Set aside time to complete the documentation regularly because otherwise catching up can be a real problem.

4.3 Controlling Project Roll-Out

When colleagues become aware of the potential positive impact of the project outputs, managing their expectations of roll-out can be challenging. This is an issue that came to light midway through the project and has had to be addressed by careful change management and discussion with individuals and groups about managed roll-out and implementation.

The University of Bolton, towards the latter part of 2011, began to better understand the importance of BI in the changing sector landscape. The factors influencing the growth of understanding, interest and drive for BI have been many including the internal appointment to the newly created role of Executive Dean of Marketing and Corporate Intelligence who became the BOLT CaP Project Director from November 2011. The BOLT CaP project has also had significant influence on the increased interest in BI as the University has begun to amass a greater understanding of the value of BI to the University’s decision makers.

It had always been an aim of the project to develop an approach and build foundations that were flexible and expandable, in order that the application has a sustained life after the project has finished. The University’s demand for more and more BI to aid decision making is giving further weight to this aim; however, in terms of scope, the foundations being laid are likely to have a much bigger impact in the longer term than had been planned for at the project outset. This in itself has been a challenge for the project team. It has been recognised both internally and externally that the project is a complex one (indeed this was identified as a risk at the bidding stage); however, the explosion in longer-term BI expectations for the University has meant that the project team have had to consider a much broader set of criteria than was originally anticipated. So, for example, the creation of coding maps and data catalogues has had to be expanded beyond the initially identified data sets: all primary (Trent, SITS, QLx) and secondary (Celcat) data sources have been mapped.

4.4 Evaluation of Enterprise Architecture

The project team were committed to evaluating Enterprise Architecture, using the Archimate application, in relation to this project. The team found it very difficult to adapt the concept of EA to the project, mainly because of the complexity of the project, with data flowing into and out of the data warehouse.

4.5 Working arrangements with external companies:

4.5.1 the adoption of an “agile” approach to the application development with our chosen external developer (Colabco) has been very successful;

4.5.2 utilising external expertise (Solstone) to enhance our own capability rather than simply commissioning them to undertake the development has meant that greater commitment from internal staff and managers has been necessary but this approach has resulted in a level of sustainability for the project outputs and the further development of the BI framework that would not have been possible otherwise.

 

5 BI Impact Assessment

5.1 BI Maturity

As discussed in section 2.1.6 above, the initial evaluation of the University’s position on the BI Maturity Model was level 1, moving to level 2 at project initiation. The evaluation now is that we are within level 4, moving towards level 5, with the introduction of enhanced governance and the involvement of external expertise to enhance in-house capability to create the BI warehouse and application, both for the project and into the future, to ensure sustainability.

5.2 Organisational Impact

The University continues to go through a process of reorganisation and realignment. Whilst the original risk analysis recognised this as an area of risk and determined mitigating action, in reality the impact of the organisational change in terms of the soft, people issues was underestimated. The impact is now very positive in terms of potential take-up by stakeholders (Divisional Provosts, Deans and Academic Managers), who can see a benefit in the BI that the tools will provide them with. The operational use of the tools, in particular at this stage the Work-Load Allocation Tool, is seen by the stakeholders as being of particular importance. Whilst during the period between May and September 2011 the project was hindered by staff changes and uncertainty, it now seems that the changes have led to opportunities and take-up of the project that the organisation might have been more reticent to engage with had the changes not happened. For example, the use of the WLAT for resource management and the need to understand the cost base of modules and programmes are two particular areas where changes in the internal and external environment have emphasised the need for improved BI, and these two elements of the project are tools that will support managers in undertaking their remit.

The coding structure in particular had been flagged as a risk area, and it has proved frustrating; through the initial stages of the project, common coding was explored. Work in this area is continuing, with the MI/BI task group members working together to create standardisation across all systems. There are complexities in that the various systems have differing levels, and further structural changes are continuing to influence the organisation and therefore the coding structure.

Another area that was not fully appreciated at project initiation was the lack of university-wide standards, for example in terms of academic staff allowances. In part, the project helped the University to begin the process of determining how to move to standardisation. Although it has taken time to reach agreement with the UCU on the Workload Allocation Framework, from the project perspective this will now ensure that the tools created become embedded within the organisation’s resource management toolkit.

5.3 Impact on Users

The various tools are being rolled out to Deans, Academic Managers and Business Managers across the university and the take up will be University wide. Academic Managers are using the tools for resource management, annual resource planning and longer term new programme planning.

5.4 Institutional Embedding

The approach that has been taken by the project has resulted in a number of tangible benefits that are becoming embedded within both the strategic and operational processes.

The tools that have been developed or are in their prototype stage are being used by Academic Managers across the organisation for planning purposes in relation to Workload Allocation.

The Workload Allocation Framework and the Workload Allocation Tool have achieved standardisation of workload and allowances across all academic units of the University. This has provided a level playing field which enables transparency and equality of practice whilst the Framework as a whole provides flexibility for Academic Managers.

The automation of the Time Allocation Survey has replaced the previous time-consuming and unvalidated data collection exercise. Ultimately, the automated collection of the data will further reduce the time spent collating it and improve the accuracy of the data collected. This improvement will in turn have an impact on the University’s TRAC return and an expected positive impact on the University’s FEC.

The creation of the University’s data warehouse has resulted in an understanding of the data that is captured and the importance of data standardisation. There is still some way to go in achieving full data standardisation, but the creation of the Marketing and Corporate Intelligence Committee (reporting to the University Corporate Management Group), with its various working groups, is beginning to ensure that the responsibilities of data owners are understood and that practices ensuring whole-organisation consideration of data are becoming embedded.

The Timetable Planning Tools have the potential to drive forward this important part of the planning cycle so that it occurs in a more timely and therefore organised and managed manner and will make available the data that is used for human resource planning. Whilst these tools are not in place yet the approach to process management is becoming more embedded.

5.5 Sector Level Impact

The ability of the University to achieve a number of positive impacts in data collection has potential impact for the sector:

  • The agreement of standard workload allocations and the implementation of the Workload Allocation Tool across the organisation has enabled a model of resource management that could be replicated across other organisations or subsets of organisations. The University of Bolton is relatively small in comparison to other organisations in the sector; however, the model, framework and tool could be adopted by faculties of a similar size in larger organisations. There may be opportunities to use smaller sub-units as pilot studies with a view to wider implementation, although a substantial amount of enabling work would be needed to facilitate the data capture.

  • The principles being developed through the project, in terms of standardisation of data definitions and coding structures to facilitate data warehousing and thereafter accurate reporting for BI purposes, cannot be stressed enough; their importance extends beyond this initial BI project to much wider application. Through the discussions with data owners, understanding of the use of data is gaining momentum. A sea change that could impact across the sector is the change in how data users are approached to ascertain their data needs. Until recently, Deans of Faculties would be asked what coding structure they required, for example to reflect the staffing structures within Trent. This approach did not take into account that users often cannot understand complex data structures, nor the impact that their suggestions will have on data usage beyond their own requirements. However, in a similar way to the changes in the way in which IT services are being determined, the approach is moving towards a “service delivery” ethos whereby the users, eg Deans, are asked what they need to achieve with the data, and the warehouse and system developers determine how those needs can be met within the bigger data picture. Further, whilst data owners previously had free rein to change coding, a control mechanism has been put in place which requires such changes to be approved by the MI/BI Task Group, whose membership includes all the data owners, so that the impact of changes can be assessed prior to approval. From a sector perspective, this level of control and governance could help to improve data control in all organisations.

  • The method of data capture and control of data change for Space returns has introduced a more straightforward mechanism for calculating space, which resulted in part from discussions with other north-west universities. The sector impact here relates to sharing understanding and interpretation of the requirements for statutory returns, which can save a lot of time and avoid the creation of complex mechanisms that are sometimes not necessary. It may also provide feedback to HEFCE: the lack of clear guidance, and the scope for institutions to interpret data needs, can lead to organisations spending an unnecessarily significant amount of resource on data capture that is not always appropriate. Clearer specifications of the mechanisms to be adopted would help to avoid this.

  • The means by which the Costing and Pricing Tool calculates the costs of programmes is itself based on a set of principles that can be replicated across the sector, ie standardising data sets and using averages to cost offerings whilst ultimately using actual costs to review them. Whilst this might be replicable in some universities, in larger organisations the approach would probably be difficult to achieve organisation-wide, though it could be adapted at business unit level.

 

 

 

 

Appendix One

Workload Analysis Database Entity Relationships and Schema Documentation

University of Bolton

 

V 0.1 27/02/2012

V 0.2 30/03/2012

V 1.0 05/04/2012

 

Contents

  • 1 Introduction

  1. Conventions

  • Tables are named in Capital case. Fields are named in camel case. The prefix PK_ denotes a primary key; the prefix FK_ denotes a foreign key.

  • The field suffix ’ID’ in general denotes an Identity field. ‘Code’ generally represents an alphanumeric index with some meaning applied to the letters/numbers used.

  • ‘num’ is used to abbreviate number instead of ‘no’: for instance, noPeriods could be read as “No Periods” – a Boolean true/false meaning “there are no periods” – rather than the intended “Number of Periods”; hence numPeriods.

  • The University refers to The University of Bolton, which is promoted under the name “University of Bolton”

  • For the purposes of this document only, Teaching refers to any scheduled activity, including seminars and practicals.

  • 2 Data

  • Standard data types are used: nvarchar(15) represents alphanumeric indexes.

  • Preferred string types of nvarchar of size 50, 100, 128 and 255 are used for names of objects, depending on source constraints.

  • Other non-standard sizes may be used.

  • The Staff table contains two potential indexes. BoltonID is taken from the HR system’s Employee Reference Number; this has the potential to change. The network ID is unique to an individual and is now considered a better index in most systems.

  • BoltonID is a 7-character numeric string, with ranges covering staff, students and external people: 70xxxxx covers staff, YYxxxxx covers students, and 80xxxxx covers external users. It should be noted that a batch of students from 1994 have 8-character ID numbers.
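As an illustration only, the BoltonID ranges above imply a simple classification rule. The helper name and the handling of edge cases are assumptions for the sketch, not part of the schema:

```python
def classify_bolton_id(bolton_id: str) -> str:
    """Classify a BoltonID by its leading digits, following the ranges
    described above: 70xxxxx staff, 80xxxxx external, YYxxxxx students.
    The 1994 batch of students have 8-character IDs."""
    if not bolton_id.isdigit():
        raise ValueError("BoltonID must be numeric")
    if len(bolton_id) == 8:        # 1994 batch of students
        return "student"
    if len(bolton_id) != 7:
        raise ValueError("BoltonID is normally 7 characters")
    if bolton_id.startswith("70"):
        return "staff"
    if bolton_id.startswith("80"):
        return "external"
    return "student"               # YYxxxxx: year-of-entry prefix

print(classify_bolton_id("7012345"))  # staff
print(classify_bolton_id("8000001"))  # external
print(classify_bolton_id("9812345"))  # student
```

Because the meaning is packed into the leading digits, any consumer of the warehouse can derive the person type without a join back to the source system.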

  • 3 Time Allocation Survey

  • Whilst Time Allocation Survey (TAS) analysis is not part of Workload analysis, they are closely tied. The Workload Analysis schema, therefore, contains items specifically for TAS.

  • 4 Table Auditing

  • As a general principle, data is not deleted; rather, an in-use flag is maintained. Data created and maintained within the WLA has date/person added and date/person updated fields. These data should be updated by the system, although the time records will have a default of the current time and date.
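The auditing convention can be sketched as follows, reusing column names from the WorkUnit table in section 5.1.1. sqlite3 stands in for the actual SQL Server warehouse, and the user name is hypothetical:

```python
import sqlite3

# Sketch of the auditing convention: rows carry an inUse flag plus
# added/updated date and person columns; "deleting" a record just
# clears the flag, so the history is preserved.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE WorkUnit (
    workUnitID   INTEGER PRIMARY KEY,
    academicYear TEXT,
    dateAdded    TEXT DEFAULT CURRENT_TIMESTAMP,
    addedBy      TEXT,
    dateUpdated  TEXT DEFAULT CURRENT_TIMESTAMP,
    updatedBy    TEXT,
    inUse        INTEGER DEFAULT 1
)""")
conn.execute(
    "INSERT INTO WorkUnit (academicYear, addedBy) VALUES ('2012/13', 'jsmith')")

# Retire the record instead of deleting it, recording who did so and when.
conn.execute("""
UPDATE WorkUnit
SET inUse = 0, updatedBy = 'jsmith', dateUpdated = CURRENT_TIMESTAMP
WHERE workUnitID = 1""")

live = conn.execute("SELECT COUNT(*) FROM WorkUnit WHERE inUse = 1").fetchone()[0]
total = conn.execute("SELECT COUNT(*) FROM WorkUnit").fetchone()[0]
print(live, total)  # 0 1
```

Queries for current data filter on inUse = 1, while the retired row remains available for reporting and audit.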

  • 5 Entity Relationship Diagrams

  • Note: only key fields are shown.

    • 5.1 Work Items

  • Work items are managed by staff.

    • 5.1.1 Work Unit

  • This data will be modified in WLA, by the staff line manager

  • WorkUnit definition

| Key | Identity | Column          | Data Type | Size | Nulls | Default   |
|-----|----------|-----------------|-----------|------|-------|-----------|
| Y   |          | workUnitID      | int       | 4    |       |           |
|     |          | academicYear    | nvarchar  | 15   |       |           |
|     |          | subjectAreaCode | nvarchar  | 15   |       |           |
|     |          | workUnitTypeID  | int       | 4    |       |           |
|     |          | dateAdded       | datetime  | 8    |       | getDate() |
|     |          | addedBy         | nvarchar  | 15   |       |           |
|     |          | dateUpdated     | datetime  | 8    |       | getDate() |
|     |          | updatedBy       | nvarchar  | 15   |       |           |
|     |          | inUse           | int       | 4    |       | 1         |

  • Keys

  • PK_WorkUnit – Primary Key, Clustered (workUnitID)
  • FK_WorkUnit_AcademicYear – Foreign Key (academicYear from AcademicYear)
  • FK_WorkUnit_Staff_Add – Foreign Key (networkID from Staff)
  • FK_WorkUnit_Staff_Update – Foreign Key (networkID from Staff)

  • Indexes

  • PK_WorkUnit – Unique (workUnitID)

  • Notes

  • Simple auditing is managed by recording the person adding new or updating the record.

  • Each Work Unit contains groups of staff members and groups of modules.

    • 5.1.2 WorkUnitModule

  • These are the modules worked on in a single Work Unit. This data will be modified in WLA, by the staff line manager.

  • WorkUnitModule definition

| Key | Identity | Column      | Data Type | Size | Nulls | Default   |
|-----|----------|-------------|-----------|------|-------|-----------|
| Y   |          | workUnitID  | int       | 4    |       |           |
|     |          | moduleID    | nvarchar  | 25   |       |           |
|     |          | dateAdded   | datetime  | 8    |       | getDate() |
|     |          | addedBy     | nvarchar  | 15   |       |           |
|     |          | dateUpdated | datetime  | 8    |       | getDate() |
|     |          | updatedBy   | nvarchar  | 15   |       |           |
|     |          | inUse       | int       | 4    |       | 1         |

  • Keys

  • PK_WorkUnitModule – Primary Key, Clustered (workUnitID, moduleID)
  • FK_WorkUnitModule_ModuleAvailability – Foreign Key (moduleID from ModuleAvailability)
  • FK_WorkUnitModule_WorkUnit – Foreign Key (workUnitID from WorkUnit)

  • Indexes

  • PK_WorkUnitModule – Unique (workUnitID)

  • Notes

    • 5.1.3 WorkUnitStaff

  • This records the number of hours allocated to each member of staff in a Work Unit. This data will be modified in WLA, by the staff line manager.

  • WorkUnitStaff definition

| Key | Identity | Column      | Data Type | Size | Nulls | Default   |
|-----|----------|-------------|-----------|------|-------|-----------|
| Y   |          | workUnitID  | int       | 4    |       |           |
|     |          | staffID     | nvarchar  | 15   |       |           |
|     |          | hours       | float     | 8    |       |           |
|     |          | dateAdded   | datetime  | 8    |       | getDate() |
|     |          | addedBy     | nvarchar  | 15   |       |           |
|     |          | dateUpdated | datetime  | 8    |       | getDate() |
|     |          | updatedBy   | nvarchar  | 15   |       |           |
|     |          | inUse       | int       | 4    |       | 1         |

  • Keys

  • PK_WorkUnitStaff – Primary Key, Clustered (workUnitID)
  • FK_WorkUnitStaff_Staff – Foreign Key (networkID from Staff)
  • FK_WorkUnitStaff_WorkUnit – Foreign Key (workUnitID from WorkUnit)

  • Indexes

  • PK_WorkUnitStaff – Unique (workUnitID)

  • Notes

  • Records the number of hours allocated to each staff member in a work unit.

      • 5.1.4 WorkUnitType

  • This defines how the work unit is delivered. It is related to KIS ... look this up.

  • diagram

  • WorkUnitType definition

| Key | Identity | Column         | Data Type | Size | Nulls | Default |
|-----|----------|----------------|-----------|------|-------|---------|
| Y   |          | workUnitTypeID | int       | 4    |       |         |
|     |          | workUnitName   | nvarchar  | 100  |       |         |
|     |          | KISCategory    | nvarchar  | 100  |       |         |
|     |          | inUse          | int       | 4    |       | 1       |

  • Keys

  • PK_workUnitTypeID – Primary Key, Clustered (workUnitTypeID)

  • Indexes

  • PK_workUnitTypeID – Unique (workUnitTypeID)

  • Notes

  • This is a late addition to the WLA project. This may not sit in the work unit space.

    • 5.2 Allowance Items

  • Allowances are non-module based records of time.

  • Allowances are non-teaching roles and responsibilities allocated to a member of staff by a line manager as part of their PDP. There are two types of allowance: those where a number of hours is directly added by the manager, and those where a manager allocates a number of units which are recorded in hours. For unit-based allowances, the number of hours is recorded in the Allowance table and copied into the AllowanceStaff table on allocation. This value is then disconnected from the original Allowance value, so that if the allowance is varied at some later date, the original allocation is maintained. It is not envisaged that the hours allocated to a staff unit-based allowance will be retrospectively re-allocated.
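The copy-on-allocation behaviour described above might be sketched like this. The allowance code and the dict structures are hypothetical stand-ins for rows in the Allowance and AllowanceStaff tables:

```python
# Hypothetical Allowance row: a unit-based allowance worth 10 hours per unit.
allowance = {"code": "TUTOR_UNIT", "hours_per_unit": 10.0}

def allocate(allowance, staff_id, units):
    """Create an AllowanceStaff-style record, copying hours_per_unit into
    the coefficient at allocation time so the record is disconnected from
    any later change to the Allowance."""
    return {
        "staffID": staff_id,
        "coefficient": allowance["hours_per_unit"],  # copied, not referenced
        "value": units,
        "hours": allowance["hours_per_unit"] * units,
    }

allocation = allocate(allowance, "7012345", 3)   # hypothetical staff ID
allowance["hours_per_unit"] = 12.0               # allowance varied later
print(allocation["hours"])  # 30.0 (unaffected by the later variation)
```

Because the coefficient is copied rather than referenced, historical allocations keep the rate that applied when they were made.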

    • 5.2.1 Allowance

  • These are defined by senior management at the university. Once created they cannot be deleted, but can be taken out of use.

  • Allowance definition

| Key | Identity | Column        | Data Type | Size | Nulls | Default |
|-----|----------|---------------|-----------|------|-------|---------|
| Y   | Y        | allowanceID   | int       | 4    |       |         |
|     |          | allowanceCode | nvarchar  | 15   |       |         |
|     |          | allowanceName | nvarchar  | 100  |       |         |
|     |          | methodID      | int       | 4    |       |         |
|     |          | typeCode      | nvarchar  | 15   |       |         |
|     |          | hours         | float     | 8    | Y     |         |
|     |          | minValue      | float     | 8    | Y     |         |
|     |          | maxValue      | float     | 8    | Y     |         |
|     |          | notes         | ntext     |      |       |         |
|     |          | inUse         | int       | 4    |       | 1       |

  • Keys

  • PK_Allowance – Primary Key, Clustered (allowanceID)

  • Indexes

  • PK_Allowance – Unique (allowanceID)
  • FK_Allowance_AllowanceMethod (methodID from AllowanceMethod)
  • FK_Allowance_AllowanceType (typeCode from AllowanceType)

  • Notes

  • Usage of the hours field is dependent on the Allowance Type. If the type is unit based, then the hours value is copied into the coefficient field of AllowanceStaff.

  • minValue and maxValue define the range of values that can be entered for the AllowanceStaff. Both fields should be set, or neither.

    • 5.2.2 AllowanceMethod

  • Allowance Method defines how the allowance is calculated.

  • AllowanceMethod definition

| Key | Identity | Column               | Data Type | Size | Nulls | Default |
|-----|----------|----------------------|-----------|------|-------|---------|
| Y   | Y        | methodID             | int       | 4    |       |         |
|     |          | methodName           | nvarchar  | 50   |       |         |
|     |          | showCoefficientField | int       | 4    |       | 1       |
|     |          | showValueField       | int       | 4    |       | 1       |

  • Keys

  • PK_AllowanceMethod – Primary Key, Clustered (MethodID)

  • Indexes

  • PK_AllowanceMethod – Unique (MethodID)

  • Notes

  • The hours-based type has a coefficient of 1.0 and therefore uses the value directly as the number of hours. The units-based type has a coefficient defined by the Allowance, ie a number of hours for each unit; the value part is added by the manager, and the number of hours for the allowance is the product of the coefficient and the value. The fixed type takes the coefficient from the Allowance, with a default value of 1.0.
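The three calculation methods can be summarised in a small sketch. The method names and helper function are illustrative; the real values come from the AllowanceMethod, Allowance and AllowanceStaff tables:

```python
def allowance_hours(method, coefficient, value):
    """Hours awarded for an allowance under each AllowanceMethod-style rule
    (illustrative method names, per the notes above)."""
    if method == "hours":   # coefficient fixed at 1.0; value is the hours
        return 1.0 * value
    if method == "units":   # coefficient = hours per unit; value = units
        return coefficient * value
    if method == "fixed":   # coefficient from the Allowance; default value 1.0
        return coefficient * 1.0
    raise ValueError(f"unknown method: {method}")

print(allowance_hours("hours", 1.0, 37.5))   # 37.5
print(allowance_hours("units", 10.0, 3))     # 30.0
print(allowance_hours("fixed", 150.0, 1.0))  # 150.0
```

In every case the stored hours are the product of coefficient and value; the methods differ only in which of the two the manager supplies.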

    • 5.2.3 AllowanceStaff

  • This data will be modified in WLA, by the staff line manager or Cluster Manager

  • AllowanceStaff definition

| Key | Identity | Column           | Data Type | Size | Nulls | Default   |
|-----|----------|------------------|-----------|------|-------|-----------|
| Y   |          | allowanceStaffID | int       | 4    |       |           |
|     |          | allowanceID      | int       | 4    |       |           |
|     |          | staffID          | nvarchar  | 15   |       |           |
|     |          | academicYear     | nvarchar  | 15   |       |           |
|     |          | coefficient      | float     | 8    |       | 1.0       |
|     |          | value            | float     | 8    |       | 1.0       |
|     |          | description      | nvarchar  | 200  | Y     |           |
|     |          | dateAdded        | datetime  | 8    |       | getDate() |
|     |          | addedBy          | nvarchar  | 15   |       |           |
|     |          | dateUpdated      | datetime  | 8    |       | getDate() |
|     |          | updatedBy        | nvarchar  | 15   |       |           |
|     |          | inUse            | int       | 4    |       | 1         |

  • Keys

  • PK_AllowanceStaff – Primary Key, Clustered (allowanceStaffID)
  • FK_AllowanceStaff_AcademicYear (academicYear from AcademicYear)
  • FK_AllowanceStaff_Allowance (allowanceID from Allowance)
  • FK_AllowanceStaff_Staff (networkID from Staff)
  • FK_AllowanceStaff_Staff_Manager (networkID from Staff)

  • Indexes

  • PK_AllowanceStaff – Unique (allowanceStaffID)

  • Notes

  • Usage of the coefficient field is dependent on the Allowance Type. If the type is unit based, then the coefficient value is copied from the hours field of the Allowance table (ie the manager enters the number of units by which to multiply the coefficient). An ‘hours based’ allowance has a coefficient of 1 (ie the manager enters the number of hours in the value field).

  • updatedBy records the manager who performs the work unit allocation. Work allocation can only be carried out by a staff member’s reporting manager.

  • The value field is editable; the coefficient is view-only in the WLA entry form.

    • 5.2.4 AllowanceType

  • AllowanceType describes how allowances relate to Full Economic Costing/Time Allocation Survey

  • Allowance Type definition

| Key | Identity | Column   | Data Type | Size | Nulls | Default |
|-----|----------|----------|-----------|------|-------|---------|
| Y   |          | typeCode | nvarchar  | 15   |       |         |
|     |          | typeName | nvarchar  | 100  |       |         |

  • Keys

  • PK_AllowanceType – Primary Key, Clustered (typeCode)

  • Indexes

  • PK_AllowanceType – Unique (typeCode)

  • Notes

  • AllowanceType is used for Time Allocation Survey

    • 5.3 Module Items

  • Module items are sourced from the Student Records System and the Modules database.

    • 5.3.1 Module

  • The module is the main unit of teaching. This table requires regular updating from its data sources.

  • Module definition

| Key | Identity | Column             | Data Type | Size | Nulls | Default |
|-----|----------|--------------------|-----------|------|-------|---------|
| Y   |          | moduleCode         | nvarchar  | 15   |       |         |
|     |          | moduleName         | nvarchar  | 128  |       |         |
|     |          | subjectAreaID      | int       | 4    |       |         |
|     |          | subjectAreaCode    | nvarchar  | 15   |       |         |
|     |          | credits            | float     | 8    |       |         |
|     |          | qualificationLevel | nvarchar  | 15   |       |         |
|     |          | numPeriods         | int       | 4    |       |         |

  • Keys

  • PK_Module – Primary Key, Clustered (moduleCode)
  • FK_Module_QualificationLevel – Foreign Key (qualificationLevelCode from QualificationLevel)
  • FK_Module_SubjectArea – Foreign Key (subjectAreaID from SubjectArea)

  • Indexes

  • PK_Module – Unique (moduleCode)

  • Notes

  • Module data is managed externally to the system and imported using TSQL

    • 5.3.2 ModuleAvailability

  • Module Availability describes when and where a Module is to be delivered

  • ModuleAvailability definition

| Key | Identity | Column                 | Data Type | Size | Nulls | Default |
|-----|----------|------------------------|-----------|------|-------|---------|
| Y   |          | moduleID               | nvarchar  | 25   |       |         |
|     |          | moduleCode             | nvarchar  | 15   |       |         |
|     |          | moduleName             | nvarchar  | 128  |       |         |
|     |          | academicYear           | nvarchar  | 15   |       |         |
|     |          | occurrence             | nvarchar  | 15   |       |         |
|     |          | prefix                 | nvarchar  | 15   |       |         |
|     |          | subjectAreaCode        | nvarchar  | 15   |       |         |
|     |          | subjectAreaID          | int       |      |       |         |
|     |          | credits                | float     | 8    |       |         |
|     |          | numberOfStudents       | int       | 4    |       |         |
|     |          | periodCode             | nvarchar  | 15   |       |         |
|     |          | endPeriod              | nvarchar  | 15   |       |         |
|     |          | qualificationLevelCode | nvarchar  | 15   |       |         |
|     |          | locationCode           | nvarchar  | 15   |       |         |

  • Keys

  • PK_ModuleAvailability – Primary Key, Clustered (moduleID)
  • FK_ModuleAvailability_AcademicYear – Foreign Key (academicYear from AcademicYear)
  • FK_ModuleAvailability_Location – Foreign Key (locationID from Location)
  • FK_ModuleAvailability_Module – Foreign Key (moduleID from Module)
  • FK_ModuleAvailability_Period – Foreign Key (periodID from Period)
  • FK_ModuleAvailability_QualificationLevel – Foreign Key (qualificationLevelCode from QualificationLevel)
  • FK_ModuleAvailability_SubjectArea – Foreign Key (subjectAreaCode from SubjectArea)

  • Indexes

  • PK_ModuleAvailability - Unique (moduleID)

  • Notes

  • moduleID is a unique code describing an individual cohort of students at a location for a given year. It is a generated field used to manage available modules on University databases, such as Moodle and CELCAT, by condensing the complex SITS key into a simple key field.

  • Module Availability data is managed externally to the system and imported using TSQL
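The key-condensing rule described in the notes above can be sketched in Python. This is a hypothetical illustration only: the real generation logic lives in the import TSQL, and the field order and separator used here are assumptions.

```python
# Hypothetical sketch: build a simple moduleID from composite SITS-style
# key fields (module code, academic year, occurrence, location). The
# separator and field order are assumptions, not the Bolton implementation.

def make_module_id(module_code, academic_year, occurrence, location_code):
    """Condense a composite key into a single surrogate key string."""
    parts = [module_code, academic_year, occurrence, location_code]
    # Normalise each part so the generated key is stable across imports.
    return "-".join(p.strip().upper() for p in parts)

print(make_module_id("ACC1001", "2012/3", "A", "BOL"))  # ACC1001-2012/3-A-BOL
```

The point of such a surrogate key is that downstream systems (Moodle, CELCAT) can join on one short field rather than the full SITS composite key.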

    • 5.3.3 ModuleAllowanceLink

  • The ModuleAllowanceLink table allows teaching activity to be reported in terms of TAS items. It interprets business processes for reporting to Government.

  • ModuleAllowanceLink definition

  | Key | Identity | Column | Data Type | Size | Nulls | Default |
  | --- | --- | --- | --- | --- | --- | --- |
  | Y | | qualificationLevelCode | nvarchar | 15 | | |
  | Y | | locationCode | nvarchar | 15 | | |
  | | | allowanceID | int | 4 | | |

  • Keys

  • PK_ModuleAllowanceLink – Primary Key, Clustered (qualificationLevelCode, locationCode)
  • FK_ModuleAllowanceLink_Allowance – Foreign Key (allowanceID from Allowance)
  • FK_ModuleAllowanceLink_Location – Foreign Key (locationCode from Location)
  • FK_ModuleAllowanceLink_QualificationLevel – Foreign Key (qualificationLevelCode from QualificationLevel)

  • Indexes

  • IX_ModuleAllowanceLink – Non-Unique, Non-clustered (qualificationLevelCode)
  • PK_ModuleAllowanceLink – Unique (qualificationLevelCode, locationCode)

  • Notes

  • Module Allowance Link enables Teaching activity to be grouped by location and qualification level, primarily for TAS reporting.

  • Whilst the data is managed externally to the system, no origin for the data is currently defined.
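The roll-up that ModuleAllowanceLink enables can be sketched as follows. The record layout and the sample figures are invented for illustration; the real grouping would run over imported work-unit data.

```python
from collections import defaultdict

# Hypothetical sketch of the grouping ModuleAllowanceLink supports: teaching
# hours summed by (qualificationLevelCode, locationCode), so each group can
# then be mapped to a TAS allowance. Sample data is invented.

def group_teaching_hours(work_units):
    """Sum hours per (qualification level, location) pair."""
    totals = defaultdict(float)
    for unit in work_units:
        key = (unit["qualificationLevelCode"], unit["locationCode"])
        totals[key] += unit["hours"]
    return dict(totals)

work_units = [
    {"qualificationLevelCode": "UG", "locationCode": "BOL", "hours": 24.0},
    {"qualificationLevelCode": "UG", "locationCode": "BOL", "hours": 12.0},
    {"qualificationLevelCode": "PG", "locationCode": "BOL", "hours": 6.0},
]
print(group_teaching_hours(work_units))
```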

    • 5.4 University Data

  • This section deals with data which is defined and managed outside the Work Load Analysis system. Data imports and interfaces are outside of the scope of this document. The structure of the data conforms to external definitions.

    • 5.4.1 Staff

  • The Staff table includes information recorded for employees of the University and for lecturers on the Externals Registry.

  • Staff definition

  | Key | Identity | Column | Data Type | Size | Nulls | Default |
  | --- | --- | --- | --- | --- | --- | --- |
  | Y | | networkID | nvarchar | 15 | | |
  | | | boltonID | nvarchar | 15 | | |
  | | | displayName | nvarchar | 100 | | |
  | | | managerNetworkID | nvarchar | 15 | | |
  | | | subjectAreaID | int | 4 | | |
  | | | subjectAreaCode | nvarchar | 15 | | |
  | | | hours | float | 8 | | |
  | | | FTE | float | 8 | | |
  | | | gradeCode | nvarchar | 15 | | |
  | | | dateAdded | datetime | 8 | | |
  | | | dateUpdated | datetime | 8 | | |
  | | | inUse | int | 4 | | 1 |

  • Keys

  • PK_Staff – Primary Key, Clustered (networkID)
  • FK_Staff_Staff_Manager – Foreign Key (networkID from Staff)
  • FK_Staff_SubjectArea – Foreign Key (subjectAreaID from SubjectArea)

  • Indexes

  • PK_Staff – Unique (networkID)

  • Notes

  • networkID is the key field; because a staff member’s Personal Reference Number may change from time to time, the network ID is used to maintain a unique identity over time.

  • displayName is the standard [surname], [preferredName] format.

  • The Trent HR system records staff working hours and full contract hours; some staff have multiple contracts. FTE records sum (working hours) / sum (full contract hours). Hourly-Paid Lecturers will not have an FTE recorded.

  • Grade is considered public domain data, as jobs are advertised with a grade level.

  • It is not planned to record WLA for Hourly-Paid Lecturers.
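The FTE rule stated in the notes above can be sketched in Python. The contract record structure is an assumption for the example; the real figures come from the Trent HR system.

```python
# Hedged sketch of the FTE rule: FTE = sum(working hours) / sum(full
# contract hours) across a staff member's contracts. Contract structure is
# assumed; staff with no contracts (e.g. Hourly-Paid Lecturers) get no FTE.

def compute_fte(contracts):
    """Return FTE across multiple contracts, or None when none exist."""
    if not contracts:
        return None  # Hourly-Paid Lecturers have no FTE recorded
    working = sum(c["working_hours"] for c in contracts)
    full = sum(c["full_contract_hours"] for c in contracts)
    return working / full

# A member of staff on two part-time contracts:
contracts = [
    {"working_hours": 18.5, "full_contract_hours": 37.0},
    {"working_hours": 9.25, "full_contract_hours": 37.0},
]
print(compute_fte(contracts))  # 27.75 / 74.0 = 0.375
```

Summing before dividing (rather than averaging per-contract FTEs) is what makes the figure correct when contracts have different full-time baselines.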

    • 5.4.2 SubjectArea

  • Subject Area is the lowest structural unit of the University. Most items (Staff, Modules) are linked to a Subject Area.

  • SubjectArea definition

  | Key | Identity | Column | Data Type | Size | Nulls | Default |
  | --- | --- | --- | --- | --- | --- | --- |
  | Y | | subjectAreaCode | nvarchar | 15 | | |
  | | | subjectAreaName | nvarchar | 100 | | |
  | | | clusterCode | nvarchar | 15 | | |
  | | | subjectAreaManager | nvarchar | 15 | | |
  | | | subjectAreaDeputyManager | nvarchar | 15 | | |
  | | | inUse | int | 4 | | 1 |

  • Keys

  • PK_SubjectArea – Primary Key, Clustered (subjectAreaID)
  • FK_SubjectArea_Cluster
  • FK_SubjectArea_Staff

  • Indexes

  • PK_SubjectArea – Unique (subjectAreaID)

  • Notes

  • Subject Area data is managed externally to the system and imported using TSQL

  • subjectAreaID is used as a temporary index until the University structure is fixed

    • 5.4.3 Cluster

  • A cluster is a reporting construct that sits between Department and Subject Area. Generally speaking, it will not be found in the primary data systems (SITS and CELCAT), but may be created there using user-defined fields.

  • Cluster definition

  | Key | Identity | Column | Data Type | Size | Nulls | Default |
  | --- | --- | --- | --- | --- | --- | --- |
  | Y | | clusterCode | nvarchar | 15 | | |
  | | | clusterName | nvarchar | 100 | | |
  | | | departmentCode | nvarchar | 15 | | |
  | | | clusterManager | nvarchar | 15 | | |
  | | | clusterDeputyManager | nvarchar | 15 | | |
  | | | inUse | int | 4 | | 1 |

  • Keys

  • PK_Cluster – Primary Key, Clustered (clusterCode)
  • FK_Cluster_Department
  • FK_Cluster_Staff

  • Indexes

  • PK_Cluster – Unique (clusterCode)

  • Notes

  • Clusters are a reporting and management level. It is envisaged that Cluster Managers will be responsible for managing WLA; however, a degree of flexibility is required to allow this to change to Subject Area Managers or, less likely, Department Managers.

    • 5.4.4 Department

  • Department is a standard University component and the highest structural level within the University; it represents, for example, a “Faculty” or a “Research Centre”.

  • Department definition

  | Key | Identity | Column | Data Type | Size | Nulls | Default |
  | --- | --- | --- | --- | --- | --- | --- |
  | Y | | departmentCode | nvarchar | 15 | | |
  | | | departmentName | nvarchar | 100 | | |
  | | | departmentManager | nvarchar | 15 | | |
  | | | departmentDeputyManager | nvarchar | 15 | | |
  | | | inUse | int | 4 | | 1 |

  • Keys

  • PK_Department – Primary Key, Clustered (departmentID)

  • Indexes

  • PK_Department – Unique (departmentID)

  • Notes

  • Department data is managed externally to the system and imported using TSQL

  • The Department table is structured to allow flexibility during the University restructure. departmentID is a measure that allows department names and codes to change during the period of system design.

    • 5.4.5 Location

  • Location is a standard University component. This represents institutions and other organisations which have links to the University, rather than individual campuses or sites.

  • Location definition

  | Key | Identity | Column | Data Type | Size | Nulls | Default |
  | --- | --- | --- | --- | --- | --- | --- |
  | Y | | locationCode | nvarchar | 15 | | |
  | | | locationName | nvarchar | 100 | | |
  | | | inUse | int | 4 | | 1 |

  • Keys

  • PK_Location - Primary Key, Clustered (locationCode)

  • Indexes

  • PK_Location - Unique (locationCode)

  • Notes

  • Location data is managed externally to the system and imported using TSQL

    • 5.4.6 Period

  • Period is a standard University component. It is a part of the teaching year: Semester, Trimester, or Term.

  • Period definition

  | Key | Identity | Column | Data Type | Size | Nulls | Default |
  | --- | --- | --- | --- | --- | --- | --- |
  | Y | | periodCode | nvarchar | 15 | | |
  | | | periodName | nvarchar | 100 | | |
  | | | inUse | int | 4 | | 1 |

  • Keys

  • PK_Period - Primary Key, Clustered (periodCode)

  • Indexes

  • PK_Period - Unique (periodCode)

  • Notes

  • Period data is managed externally to the system and imported using TSQL

  • 2012/13 is expected to run in Trimesters; however, the codes have not yet been defined

    • 5.4.7 Academic Year

  • AcademicYear is a standard University component, with the addition of a “Support for Teaching [SFT] Multiplier” field

  • AcademicYear definition

  | Key | Identity | Column | Data Type | Size | Nulls | Default |
  | --- | --- | --- | --- | --- | --- | --- |
  | Y | | academicYear | nvarchar | 15 | | |
  | | | SFTMultiplier | float | | Y | |
  | | | startDate | datetime | | Y | |
  | | | endDate | datetime | | Y | |
  | | | numWeeks | int | | | 52 |

  • Keys

  • PK_AcademicYear – Primary Key, Clustered (academicYear)

  • Indexes

  • PK_AcademicYear – Unique (academicYear)

  • Notes

  • AcademicYear data is managed externally to the system and imported using TSQL

  • SFTMultiplier – Support for Teaching Multiplier. This is used to calculate the support for teaching component of a workload by multiplying total teaching hours in an academic year for a member of staff by the SFTMultiplier.

  • numWeeks may contain 52 or 53, reflecting the number of accounting weeks in the year
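The SFTMultiplier rule in the notes above is a single multiplication; a minimal sketch, with an illustrative multiplier value (not a figure from the system):

```python
# Minimal sketch of the Support for Teaching calculation: total teaching
# hours in an academic year multiplied by that year's SFTMultiplier. The
# multiplier value used here is illustrative only.

def support_for_teaching(total_teaching_hours, sft_multiplier):
    """Support-for-teaching hours = teaching hours x SFT multiplier."""
    return total_teaching_hours * sft_multiplier

print(support_for_teaching(400.0, 0.5))  # 200.0
```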

    • 5.4.8 QualificationLevel

  • QualificationLevel definition

  | Key | Identity | Column | Data Type | Size | Nulls | Default |
  | --- | --- | --- | --- | --- | --- | --- |
  | Y | | qualificationLevelCode | nvarchar | 15 | | |
  | | | qualificationLevelName | nvarchar | 50 | | |

  • Keys

  • PK_QualificationLevel - Primary Key, Clustered (qualificationLevelCode)

  • Indexes

  • PK_QualificationLevel - Unique (qualificationLevelCode)

  • Notes

  • Qualification Level data is managed externally to the system and imported using TSQL

  • 6 Glossary

    • Teaching Load

  • The sum of hours for work units for a particular member of staff.

    • Non-teaching Load

  • The sum of non-teaching activities plus the sum of allowances allocated in accordance with the PDP.
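The two load definitions above are simple sums and can be sketched as follows; the field names are assumptions for the example, not the system’s actual schema.

```python
# Illustrative sketch of the Teaching Load and Non-teaching Load
# definitions. Record layouts are assumptions for the example only.

def teaching_load(work_units):
    """Sum of hours across the work units allocated to a member of staff."""
    return sum(u["hours"] for u in work_units)

def non_teaching_load(activities, allowances):
    """Sum of non-teaching activity hours plus PDP-allocated allowances."""
    return sum(a["hours"] for a in activities) + sum(a["hours"] for a in allowances)

units = [{"hours": 24.0}, {"hours": 36.0}]
print(teaching_load(units))  # 60.0
```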

  • PDP – Personal Development Plan.

  • Module – One of the six teaching entities making up a Course in a year. It is the basic unit of teaching.

  • Module Occurrence – The availability of a module for a cohort of students in a given year. The occurrence code can indicate the teaching start period.

  • Period – The trimesters or terms which make up an academic year.

  • KIS – Key Information Set. This relates to how a course is delivered. Data within the project refers to KIS, but it is too early to act on; it is here only as a reference item.