Course Data - Bournemouth and Poole College

Funded by: Jisc e-Learning programme.

Lead Institution: The Bournemouth and Poole College.

Learning Provider Type: Further Education

Project Duration: January 2012 - March 2013

Key Words: Course Data

Case study tags: course data, process improvement, course information, change management

Note: This is an abridged version of this project's final report.  The full version is available here.

 

XCRI-CAP Implementation

 

Project Summary

This project has been about testing the principles of XCRI-CAP in three distinct ways: at the input stage, at the output stage and at the outcome stage. At the input stage, the project tested the theory that an XCRI-CAP standardisation model can be implemented at a key intervention point in the decision-making process for course advertising information. We felt this would be of immense importance to HE establishments that may be reluctant to change existing working structures but are happy to integrate XCRI-CAP into their current functional environment. The key point here was to show that XCRI-CAP does not have to revolutionise the way decision making is done in an institution.

 

At the same time, the project looked at methods of outputting XCRI-CAP feeds for aggregation purposes and tested the presentation of XCRI-CAP feeds in terms of ease of discovery, paying particular attention to the needs of industry and the ways online publishers categorise industry sectors. Trainagain was used as a pilot online course publisher for this purpose, and there will be continuing opportunities to try out export feeds in different presentations in order to maximise effective discovery and retrieval by key users. The main point here is that the output data needs to be well presented and easily discoverable; otherwise it would be a waste of time creating the feed in the first place.

 

The third element of the project looked at specific outcomes in terms of employer feedback on the actual presentation of data. Without employer engagement the project would run the risk of designing solutions which do not meet key user requirements, so industry feedback became a key part of the process and a means of providing a feedback loop into the first ‘input’ stage of the project.

 

 

What could have been improved?  What lessons have been learned?

The project has shown that there is an important distinction to be made between small and large organisations, or, to put it another way, between complex and less complex environments.

 

  • More complex organisations may have multiple decision making sources
  • Less complex organisations may have single decision making sources

 

In terms of data transformation, the project has learnt a great deal. Starting with one institution's data, XCRI-CAP was mapped to the Trainagain structure using Altova MapForce. This tool generates SQL statements that can be used to import data into the database, showing error messages on failure. The error messages were used to formulate successive new mappings and sub-processes until success was achieved with the first institution's data. Having recorded the mapping and process, these were then used to import the other institutions' data, revealing a small number of additional difficulties owing to differences in the XCRI-CAP content. For the last two institutions, relatively few difficulties were encountered…
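
The project used Altova MapForce for this step, but the general shape of the transform is easy to illustrate. The sketch below is a minimal, hypothetical Python equivalent: it reads a small XCRI-CAP-style fragment and emits SQL INSERT statements for an assumed Trainagain "events" table. The namespaces follow XCRI-CAP 1.2 conventions, but the table and column names are illustrative assumptions rather than the real Trainagain schema, and a production import would use parameterised queries rather than string-built SQL.

    # Minimal sketch (not the project's MapForce mapping): read an XCRI-CAP-style
    # fragment and emit SQL INSERT statements for a hypothetical Trainagain table.
    import xml.etree.ElementTree as ET

    NS = {
        "cat": "http://xcri.org/profiles/1.2/catalog",  # assumed XCRI-CAP 1.2 namespace
        "dc": "http://purl.org/dc/elements/1.1/",
    }

    SAMPLE = """<catalog xmlns="http://xcri.org/profiles/1.2/catalog"
                         xmlns:dc="http://purl.org/dc/elements/1.1/">
      <provider>
        <dc:title>Example College</dc:title>
        <course>
          <dc:title>Introduction to Welding</dc:title>
          <presentation>
            <dc:identifier>WELD-101-2013A</dc:identifier>
            <start>2013-01-14</start>
          </presentation>
        </course>
      </provider>
    </catalog>"""

    def emit_sql(xml_text):
        """Yield one INSERT per XCRI-CAP presentation (table/columns are assumptions)."""
        root = ET.fromstring(xml_text)
        for course in root.iterfind(".//cat:course", NS):
            title = course.findtext("dc:title", default="", namespaces=NS)
            for pres in course.iterfind("cat:presentation", NS):
                ident = pres.findtext("dc:identifier", default="", namespaces=NS)
                start = pres.findtext("cat:start", default="", namespaces=NS)
                yield ("INSERT INTO events (external_id, title, start_date) "
                       f"VALUES ('{ident}', '{title}', '{start}');")

    for statement in emit_sql(SAMPLE):
        print(statement)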

 

Validation

This was mostly straightforward and, for some data sets, unnecessary, as it had already been carried out prior to publication of the XCRI-CAP data feed. Where it was needed, the activity typically revealed problems in the initial XCRI-CAP data, something not of particular relevance to this work. These problems were corrected locally prior to proceeding further.
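
For feeds that did need checking, the validation step is straightforward to script. The snippet below is a minimal sketch, assuming the relevant XCRI-CAP XSD has been downloaded locally and that the lxml library is available; the file names are placeholders.

    # Minimal validation sketch; schema and feed file names are placeholders.
    from lxml import etree

    def validate_feed(feed_path, schema_path):
        """Return a list of validation error messages (empty if the feed is valid)."""
        schema = etree.XMLSchema(etree.parse(schema_path))
        feed = etree.parse(feed_path)
        if schema.validate(feed):
            return []
        return [str(error) for error in schema.error_log]

    errors = validate_feed("provider_feed.xml", "xcri_cap_1_2.xsd")
    print("valid" if not errors else "\n".join(errors))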

 

Mapping was carried out in MapForce (a graphical data mapping, conversion and integration tool: http://www.altova.com/mapforce.html). Several mappings were used:

  • Simple ones were created to import reference data into the database.
  • A more complex set of mappings and transforms was created to generate the SQL statements for import into the database. These mappings could be output in various programming languages including Java, C# and C++, as well as XSLT, for use elsewhere.
  • A one-to-one mapping was implemented between each XCRI-CAP presentation and a Trainagain Event (an illustrative field map is sketched after this list). XCRI-CAP qualifications and credit data were not included, because there are no locations for them in Trainagain, as it does not cater for this type of learning opportunity. However, a sizeable chunk of course and presentation data was included.
  • This work did not represent production level aggregation, but was a series of trials to investigate the problems of aggregating data into an existing system.
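
To make the one-to-one presentation-to-Event mapping concrete, the sketch below shows the sort of field map involved. The Trainagain field names and the exact XCRI-CAP paths are illustrative assumptions; the point is simply that each presentation becomes one Event, and that qualification and credit elements have nowhere to go and are therefore dropped.

    # Illustrative field map for an XCRI-CAP presentation -> Trainagain Event mapping.
    # All target names are assumptions, not the real Trainagain schema.
    from dataclasses import dataclass

    @dataclass
    class TrainagainEvent:
        external_id: str
        title: str
        start_date: str
        venue: str

    FIELD_MAP = {
        "presentation/dc:identifier": "external_id",
        "course/dc:title": "title",
        "presentation/start": "start_date",
        "presentation/venue": "venue",
        # course/qualification and course/credit have no Trainagain home,
        # so they are deliberately left unmapped.
    }

    def map_presentation(values):
        """Build one Event from a dict keyed by the XCRI-CAP paths above."""
        return TrainagainEvent(**{target: values.get(source, "")
                                  for source, target in FIELD_MAP.items()})

    print(map_presentation({
        "presentation/dc:identifier": "WELD-101-2013A",
        "course/dc:title": "Introduction to Welding",
        "presentation/start": "2013-01-14",
    }))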

 

Likely generic difficulties, compared to a web-based data entry system, that could be extrapolated from the work were:

  • A requirement to include appropriate mandatory reference data in the data to be imported. Whether this should be within the XCRI-CAP feed is moot; for Trainagain it must be in the final data for loading into the database, so some pre-processing is needed.
  • Reference data must use a supported vocabulary or classification system. For subjects in Trainagain, this trial used LDCS, requiring extra work with the data. If data is already classified, it might be possible to use mappings between vocabularies, followed by manual checking. Otherwise manual classification from scratch, or transforms from other fields, would be needed.
  • Any manual alterations should be avoided, as these changes will be overwritten when a new whole data set is imported. Alternatively, an internal delta update[1] process could be implemented to ensure that only genuine changes are made (a minimal sketch of such a process follows this list).
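
The delta-update idea in the last point can be sketched very simply. Assuming each record carries a stable identifier (the XCRI-CAP presentation identifier is the obvious candidate), the incoming whole data set is split into genuine additions, changes and removals before anything touches the live system:

    # Minimal delta-update sketch; record shapes and identifiers are assumptions.
    def compute_delta(current, incoming):
        """Split an incoming whole data set into adds, changes and deletes."""
        adds = {k: v for k, v in incoming.items() if k not in current}
        changes = {k: v for k, v in incoming.items()
                   if k in current and current[k] != v}
        deletes = [k for k in current if k not in incoming]
        return adds, changes, deletes

    current = {"WELD-101": {"title": "Welding", "start": "2013-01-14"}}
    incoming = {"WELD-101": {"title": "Welding", "start": "2013-02-04"},   # changed
                "PLUM-201": {"title": "Plumbing", "start": "2013-03-01"}}  # new

    adds, changes, deletes = compute_delta(current, incoming)
    print(adds, changes, deletes, sep="\n")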

 

Consuming XCRI-CAP data requires extra work from the aggregating organisation, over and above a web-based data entry system. However, the amount of work done overall between the aggregating organisation and the producers is reduced significantly once the new data exchange methods are in place. One of the pieces of new work is a reasonable quality mapping from XCRI-CAP to the database structure, including any necessary transformations. Another is a well-designed set of data definitions set out by the consuming organisation for use by the producers. Fortunately, once these data definitions are in place, the producers can create good quality feeds, and then the mapping from XCRI-CAP to the database structure only needs to be done once for the consuming system to cover all the producers.

 

The experience from this work stream has shown that importing data using XCRI-CAP feeds is a practical proposition with an existing system not optimised for XCRI-CAP. Completely automatic loading of data into a live system with no intervention is not the aim; what is needed is a robust process to gather the XCRI-CAP data, carry out any pre-loading processes, including validation, on it, and then load it, with a final check that all is correctly implemented.
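
As a rough illustration of that process, the sketch below strings the steps together: gather the feed (a pull architecture is assumed here), validate it, run the pre-loading transforms, load, and finish with a simple count check. The step functions are placeholders for institution-specific implementations, not part of the project's actual tooling.

    # Sketch of a robust, stepwise import; the validate/transform/load/count
    # callables are placeholders for institution-specific implementations.
    import urllib.request

    def gather(feed_url):
        """Pull the published XCRI-CAP feed over HTTP."""
        with urllib.request.urlopen(feed_url) as response:
            return response.read()

    def run_import(feed_url, validate, transform, load, count_loaded):
        raw = gather(feed_url)
        problems = validate(raw)              # e.g. schema validation, as earlier
        if problems:
            raise ValueError("feed failed validation: %s" % problems[:3])
        records = transform(raw)              # mapping plus reference-data lookups
        load(records)                         # write to the aggregator database
        if count_loaded() < len(records):     # final check that all arrived
            raise RuntimeError("post-load check failed: record counts do not match")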

 

Decisions about the processes required will depend on specific issues (a sample configuration capturing these choices is sketched after the list):

  • Is the architecture push or pull?
  • Does the existing system use reference data? If so, how much, and how can that be included on loading the new data?
  • Will the import be 'whole data sets' or updates?
  • How frequently will the data be refreshed?
  • How much use will be made of the identifiers in XCRI-CAP and how much of other internal identifiers?
  • How will differences between XCRI-CAP data definitions and local data definitions be handled, particularly with regard to size of fields and expectations of blank or null fields?
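
One way to pin these decisions down is to record them per provider in a small configuration that the import process reads. Everything in the example below, keys and values alike, is an assumption for illustration rather than anything the project produced.

    # Illustrative per-provider import configuration covering the decision points above.
    IMPORT_CONFIG = {
        "provider": "example-college",
        "architecture": "pull",                  # pull the feed, or accept a push
        "feed_url": "https://example.ac.uk/courses/xcri-cap.xml",
        "reference_data": {"subjects": "LDCS"},  # classification vocabulary to use
        "import_mode": "whole_data_set",         # or "delta_update"
        "refresh": "monthly",                    # course marketing data changes slowly
        "identifier_source": "xcri_cap",         # trust feed identifiers, or mint our own
        "field_rules": {
            "max_title_length": 255,             # truncate or reject oversized fields
            "allow_blank": ["venue"],            # fields that may legitimately be empty
        },
    }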

 

With robust data definitions, good quality feeds and well-designed processes, it should be straightforward to consume XCRI-CAP data. What is needed is attention to the details of the data requirements and to how the incoming data is mapped and transformed to meet them. It is also worth bearing in mind that course marketing data is not particularly volatile, so minute-by-minute real-time data exchange is not a requirement; in many cases monthly or quarterly regular updates are sufficient.

 

The final part of our work involved finding out how employers and their workforce might use an aggregator. In particular, we wanted to know what decision-making processes businesses went through to find and book courses.

 

Analysis of questionnaires for XCRI project Bournemouth & Poole College

First it is worth reiterating that businesses go about choosing training in a very different way to students.

We have been carrying out interviews with businesses to find out how they search for courses. Our research confirms what we had suspected: businesses search for training in a completely different way to students. Essentially, they look for:

  • Relationship
  • Flexibility
  • Price

The relationship with training providers appears to be the overriding concern, and several businesses commented that they would not change on the basis of price; the relationship with their existing provider (for a particular subject specialism) was more important. Flexibility is seen by all the businesses we spoke to as absolutely key. If training providers are not able to respond appropriately to expressed training needs, little can be achieved. This means that businesses are looking for flexibility on content and timing, on place and duration, and also of course on price, although this seems to be less important than the other factors… it is clear that a simple course discovery service is insufficient to satisfy the more complex needs of employers and their workforces. Whilst course advertising data may be of some use to companies’ initial enquiries, much more important will be the follow-up services in place and the local relationships and networks established.

 

Immediate Impact

The immediate impact of this project is that it has shown that there is a way to involve many more FE and HE organisations in moving towards the XCRI-CAP standard – by implementing a parallel integration model (XPIM), and/or externally transforming course data so that it meets database aggregator requirements. 

 

Of equal importance is the finding that no matter how streamlined and comprehensive course searches become, they will not satisfy all of the needs of businesses, and additional activities (relationship building, networks) need to be in place in order to maximise course booking success.

 

Future Impact

There are several longer-term considerations to be gleaned from this work. First, there is the question of proportionality – some institutions may feel that the culture changes required to implement XCRI-CAP successfully are too great and that carrying on with the status quo is a better option. What the second part of our work has shown is that data transformation processes can go a long way to obviating the need for organisational change. Of course, this won’t help organisations improve their course advertising decision making, nor will it help streamline diverse departmental approaches to capturing and managing course data. But it should be very helpful in terms of aggregation.

 

On the question of employer engagement, this project has highlighted the importance of relationship building and networking, for which course data advertising in any form will never serve as a replacement. This suggests that there is still much to do in exploiting the internet for these purposes – how can universities and colleges harness social media to engage better with employers? The Khan Academy (www.khanacademy.org) has shown how simple online communication can convert millions to learning – it remains to be seen how HEIs and FE colleges can find similar ways to inspire and engage with non-core audiences.

 

Conclusions

Implementing a Parallel Integration Model

This report concludes that it is both sensible and possible in certain circumstances to bypass the organisational changes implicit in moving towards XCRI-CAP compliance.  This could be particularly important for the FE sector.

 

Transforming Data to fit aggregator requirements

This project has shown that it can be quite a simple task to take existing data and transform it using tools such as MapForce. One can see how this could be extremely useful in providing aggregators with standardised information in the right format.

 

Employer Engagement

This project has shown that businesses search for training in a completely different way to students. This has implications for the way in which HEIs and FE colleges engage with employers and their workforce.

 

Recommendations

  • To assess which organisations are most suitable for undertaking a complete organisational review for XCRI-CAP implementation and which types of organisation should be encouraged to adopt a parallel integration model.
  • To encourage colleges and universities (where appropriate) to make use of MapForce services so that their existing data can be transformed to meet aggregator requirements.
  • To explore ways of engaging with employers and their workforces, especially through social media.

Further details:

Project Director: Quinton O’kane

Project Manager: Antony Carr

Contact email: carra@bpc.ac.uk

Partner Institutions: Trainagain; APS

Please refer to www.trainagain.co.uk/bpc


[1] http://en.wikipedia.org/wiki/Delta_update