
TTN – WP3 TTN meeting, June 10-11, 2010


Presentation Transcript


  1. TTN – WP3 TTN meeting, June 10-11, 2010

  2. WP3 members

  3. WP3 – objectives and methodology
     • To investigate, define and classify a set of criteria for measuring TT activities in PP
     • How? By building a set of indicators and metrics, giving:
       • an overview of the situation in our institutions
       • elements of comparison among ourselves, and between us and overseas institutions
       • guidance for newcomers
       • measurement of performance improvement
     • How to select those indicators?
       • bibliography
       • adjustment to our research profile
       • testing using a questionnaire
       • definition of the final ones: the TT KPIs for HEP institutions
     Presented at the last TTN meeting (December 2009)

  4. Questionnaire – sent in April 2009. Presented at the last TTN meeting (December 2009)

  5. Schedule at the end of 2009. Presented at the last TTN meeting (December 2009)

  6. Questionnaire analysis: issues from the last TTN meeting
     At the end of 2009, 19 '2008 questionnaires' had been received, 14 of which were considered valid (not too many empty fields), split into subsets: EG, EH (using FTE-HEP), EU, NG; ALL vs ASTP, HEP vs a BENCHMARK = (BNL + EPFL)
     • the questionnaire was designed to get answers on the HEP activity, but responses covered a mix of HEP and non-HEP
     • empty cells are significant – for example, we can show that facilities agreements are reported only by HEP institutions – but they disturb the calculations
     • possible misunderstandings in the answers
     • very large variance
     • results sensitive to the quality of the data and to the choice of questionnaires selected in each set
     • not the same indicators as ASTP
     → better identification of homogeneous subsets
     → add other questionnaires
     • other ways of calculation (means on significant variables, reducing the impact of empty cells…)
     • consolidation of the KPI choice

  7. Major evolutions since the previous TTN meeting
     • WP3 meeting (22/01/10 in Paris), with the main decisions:
       • selection of KPIs for the analysis and for future questionnaires
       • distribution into two subsets: HEP institutions and 'BENCHMARK institutions' (having high performance in TT)
       • preparation of the report and booklet structure
       • new schedule, with the objective of adding more questionnaires so as to be more confident in the results: that is where the shoe pinches, because of delays in receiving new completed questionnaires!
     • Only two new completed questionnaires were received (more were expected), classified as 'Universities':
       • University College London (GB)
       • Politecnico di Milano (IT)
     • Reorganisation into two groups:
       • HEP institutions (all facts considered only through HEP activities)
       • BENCHMARK: generic institutions having high performance in TT [BNL (US), EPFL (S), UCL (UK)]

  8. Work done in 2010
     • 21 '2008 questionnaires' were received, 16 considered valid (not too many empty fields)
     • Anonymisation of questionnaires, at the request of some institutions
     • Split into a first group of two subsets:
       • ALL 2006 (16 Q), mixing HEP, multipurpose institutes and Universities
       • ASTP (Association of European Science and Technology Transfer Professionals) survey added for comparison
     • Split into a second group of two subsets:
       • 10 HEP institutions with a pure-HEP profile (all facts considered only through HEP activities)
       • BENCHMARK: 3 generic multipurpose institutions (existing HEP activity is not the measure) with high performance in TT [BNL (US), EPFL (S), UCL (UK)] – also active in TT but n…
     • Analysis:
       • descriptive statistics
       • selection of KPIs
       • comparison of KPI means
       • search for explanatory factors in each subset
       • comparison ALL vs ASTP
       • comparison HEP vs BENCHMARK
       • radar graphs

  9. Questionnaire analysis – methodology (1)
     • 1. Input raw data from the questionnaires into Excel
     • 2. Preparation of the synthesis (see the sketch below):
       • Total # FTEs = total FTE for general institutions and HEP FTEs for HEP institutions
       • quantification of qualitative data (particularly data relating to maturity)
       • one worksheet split into two sets: ALL inputs (16 institutions) vs ASTP 2006, to get a global vision
       • one worksheet split into two sets: the 10 European HEP institutions that provided HEP-specific data vs a BENCHMARK of 3 institutions
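The splitting and preparation above were done in an Excel workbook. As an illustration only, here is a minimal sketch of the same preparation step in Python/pandas; the file name, column names and the benchmark list are assumptions made for this sketch, not the actual WP3 workbook layout.

```python
# Minimal sketch of the "methodology (1)" preparation step, using pandas instead
# of the original Excel workbook. The file name, column names and the benchmark
# list below are assumptions for illustration only.
import pandas as pd

# Hypothetical export of the raw questionnaire answers, one row per institution.
raw = pd.read_excel("questionnaires_2008.xlsx")

# Total # FTEs: overall FTE for general institutions, HEP FTEs for HEP institutions.
raw["fte_total"] = raw["fte_hep"].where(raw["profile"] == "HEP", raw["fte_all"])

# Quantification of qualitative data (e.g. maturity): written rules exist -> 1, else 0.
raw["maturity_score"] = (raw["written_tt_rules"] == "yes").astype(int)

# First split: the 16 valid 'ALL' questionnaires (the ASTP 2006 survey figures are
# kept separately as a reference table for comparison).
all_set = raw[raw["valid"]]

# Second split: 10 pure-HEP institutions vs the 3-institution BENCHMARK.
benchmark_names = ["BNL", "EPFL", "UCL"]              # assumed identifiers
hep_set = all_set[all_set["profile"] == "HEP"]
benchmark_set = all_set[all_set["institution"].isin(benchmark_names)]

print(len(all_set), "ALL;", len(hep_set), "HEP;", len(benchmark_set), "BENCHMARK")
```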

  10. Selection of KPIs
      • Two references (not KPIs): # FTE and # TTO
      • Eleven KPIs (* see comments on the next page):
        • 2.1.1 # invention disclosures / year
        • 2.1.2 # priority patent applications / year
        • 2.1.5 portfolio of patent families
        • 2.1.6 portfolio of commercially licensed patents
        • (missing in Q2008) total portfolio of licenses (including software and know-how)
        • (missing in Q2008) license revenue / year
        • 2.2.1 # IP transfer or exploitation agreements / year
        • 3.1.1 # R&D cooperation agreements / year
        • 3.1.1.4 R&D cooperation agreement revenues / year
        • (incomplete in Q2008) licenses + services + facilities revenue / year *
        • 2.3.4 # startups still alive since 2000 (not really significant, but for information)
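Purely as a reference for the sketches used on the later slides, the selected indicators can be kept in a small lookup table; the item numbers are the ones quoted above, and the grouping into references and KPIs simply mirrors the slide.

```python
# Reference quantities and the eleven selected KPIs, keyed by the questionnaire
# item numbers quoted on the slide ("missing"/"incomplete" items have no number).
REFERENCES = ["# FTE", "# TTO"]

KPIS = {
    "2.1.1":   "# invention disclosures / year",
    "2.1.2":   "# priority patent applications / year",
    "2.1.5":   "portfolio of patent families",
    "2.1.6":   "portfolio of commercially licensed patents",
    "n/a (1)": "total portfolio of licenses, incl. software and know-how (missing in Q2008)",
    "n/a (2)": "license revenue / year (missing in Q2008)",
    "2.2.1":   "# IP transfer or exploitation agreements / year",
    "3.1.1":   "# R&D cooperation agreements / year",
    "3.1.1.4": "R&D cooperation agreement revenues / year",
    "n/a (3)": "licenses + services + facilities revenue / year (incomplete in Q2008)",
    "2.3.4":   "# startups still alive since 2000 (for information only)",
}
```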

  11. Comments on these KPIs
      [Slide diagram: revenues from KTT activities – IP commercialisation (licensing; services, consultancy, access to facilities) and R&D cooperation (collaborative and contract research); related items: products & GDP, new IP, research disciplines]
      • The maturity of HEP institutions is an interesting KPI; it was evaluated through an aggregate built from various answers with more or less weighting (a hypothetical sketch follows below); unfortunately, as it stands today this indicator only measures whether written rules exist
      • Revenues related to knowledge and technology transfer activities have two sources:
        • the commercialisation of IP, comprising licensing, services, consultancy and access to facilities;
        • and R&D cooperation, comprising collaborative and contract research
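The slide does not say which answers enter the maturity aggregate or how they are weighted; the sketch below only illustrates the general idea of a weighted yes/no aggregate, with entirely hypothetical questions and weights.

```python
# Hypothetical sketch of a weighted maturity aggregate built from yes/no answers.
# The questions and weights are invented for illustration; the actual WP3
# aggregate is not specified beyond "various answers with more or less weighting".
MATURITY_WEIGHTS = {
    "written_tt_rules": 3.0,          # assumed heaviest, since written rules dominate today
    "dedicated_tto": 2.0,
    "inventor_incentive_scheme": 1.0,
}

def maturity_score(answers: dict) -> float:
    """Weighted share of 'yes' answers, in the 0-1 range."""
    total = sum(MATURITY_WEIGHTS.values())
    got = sum(w for q, w in MATURITY_WEIGHTS.items() if answers.get(q) == "yes")
    return got / total

print(maturity_score({"written_tt_rules": "yes", "dedicated_tto": "no"}))  # -> 0.5
```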

  12. Questionnaire analysis – methodology (2)
      • 4. Multiple correlation on aggregates: search for explanatory factors
        • NB: empty cells have been set to zero, because the Excel tool cannot work on non-numeric values
      • 5. Normalised aggregates: aggregates are normalised to a 1000-FTE equivalent per site, then all values are rescaled between 0 and 1 for the radar graphs and histograms
      • 6. Comparison of means between each set of selected institutions (normalised to 1000 FTE): to see where the main differences are and whether HEP institutions are specific (see the sketch of steps 5-6 below)
      • 7. Graphs 'Criteria': a radar graph comparing all institutions on a selected KPI
      • 8. Graphs 'Institutes': a radar graph of the strengths and weaknesses of each institute
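Steps 5 and 6 can be expressed compactly; the sketch below assumes the `fte_total` column and the KPI column names from the earlier preparation sketch, and mimics the Excel convention of treating empty cells as zero.

```python
# Sketch of steps 5-6: normalise each KPI to a 1000-FTE equivalent per site,
# rescale to the 0-1 range for radar graphs/histograms, and compare subset means.
# Column names are assumptions carried over from the preparation sketch.
import pandas as pd

KPI_COLS = ["invention_disclosures", "priority_patents", "licensed_patents",
            "ip_agreements", "rd_agreements"]          # assumed column names

def per_1000_fte(df: pd.DataFrame) -> pd.DataFrame:
    # Empty cells were set to zero in the original Excel tool; fillna(0) mimics that.
    return df[KPI_COLS].fillna(0).div(df["fte_total"], axis=0) * 1000

def to_unit_range(scaled: pd.DataFrame) -> pd.DataFrame:
    # Rescale every KPI column between 0 and 1 for the radar graphs and histograms.
    return (scaled - scaled.min()) / (scaled.max() - scaled.min())

def compare_means(hep_set: pd.DataFrame, benchmark_set: pd.DataFrame) -> pd.DataFrame:
    # Step 6: mean of each KPI per subset, to see where the main differences are.
    return pd.DataFrame({
        "HEP mean": per_1000_fte(hep_set).mean(),
        "BENCHMARK mean": per_1000_fte(benchmark_set).mean(),
    })
```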

  13. Results – June 2010

  14. Descriptive statistics
      Our 16 relevant institutions represent:
      • 68,530 FTE (73,339 if we include all questionnaires), of which 8,043 FTE are devoted to HEP
      • 142 TT officers
      In 2008, they produced:
      • 540 invention disclosures
      • 30 new startups – with 125 still alive
      • 88 IP agreements
      • 720 R&D contracts
      • 159 M€ in revenues from R&D contracts

  15. ALL selected institutions vs ASTP, per 1000 FTE
      Comparing the figures resulting from the ALL questionnaires with the ASTP survey gives:
      • fewer TTOs (75%)
      • roughly the same number of invention disclosures per year
      • more licensed patents (maybe due to the calculation per 1000 FTE, and to the sample being the top ten HEP institutes + the 3 'BENCHMARK' institutes)

  16. HEP institutions vs BENCHMARK
      In this comparison, HEP institutions are compared with a benchmark set (2 EU, 1 US), normalised to 1000 FTE for each institute:
      • fewer TT officers per 1000 FTE in the benchmark (probably due to their large size)
      • a not-too-bad score in terms of licenses, but only about a quarter of the benchmark in terms of IP agreements
      • services & facilities are specific to some HEP institutions (vs no answer from the others)

  17. KPI means analysis
      • The objective is to compare the KPI means of our sets of institutions vs ASTP
      • As a first step, we look at the KPIs of each set against the others to see whether there are interesting variations, and focus on those
      • The means are listed below:

  18. KPI means analysis per 1000 FTE

  19. Comparison of means
      • Preliminary remarks:
        • the normalisation of each institute to 1000 FTE improves the results for the 'HEP institutes', particularly those well below 1000 FTE
        • the results are not for all HEP institutes but for the top ten in TT
      • While the mean number of TT officers can be compared between subsets, it is very variable from one institute to another
      • HEP invention disclosures and priority patents are satisfactory, with a good result in patent portfolio and patent licensing… but mainly for CERN, GSI and STFC
      • Contracts: the number of R&D contracts is difficult to appreciate independently of their amounts, but we have very good results in terms of revenue, thanks to GSI
      • Service and facilities revenues of some HEP institutes are an interesting result, and will be grouped with license revenues in the next questionnaires

  20. Explaining factors
      • Multiple correlation analysis has been used to measure the impact of each KPI on the others (a sketch follows below)
      • The threshold above which a correlation is considered high has been set to 0.707 (see the next figures), for 6 degrees of freedom and a confidence level of p = 5%
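A plain pairwise correlation pass over the per-1000-FTE KPI table reproduces the idea, flagging every pair whose |r| exceeds the 0.707 threshold quoted above; the column names are again the assumed ones from the earlier sketches.

```python
# Sketch of the explanatory-factor search: pairwise correlations between KPIs,
# flagging pairs above the 0.707 threshold quoted on the slide (df = 6, p = 5%).
import pandas as pd

THRESHOLD = 0.707

def high_correlations(kpi_table: pd.DataFrame) -> list[tuple[str, str, float]]:
    corr = kpi_table.fillna(0).corr()          # empty cells set to zero, as in Excel
    flagged = []
    cols = list(corr.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            r = corr.loc[a, b]
            if abs(r) >= THRESHOLD:
                flagged.append((a, b, round(float(r), 3)))
    return flagged
```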

  21. ALL QTTN
      • In terms of explanatory factors, we have to discard trivial correlations (# patents vs # invention disclosures is an example).
      • Possible links:
        • # invention disclosures and # of TTOs
        • # startups still alive and # patents

  22. Explaining factors – HEP vs BENCHMARK (empty cells)
      • In HEP, there is a relation between # R&D agreements and # priority patents; could we say that patents are related to R&D agreements?
      • More interesting factors in the BENCHMARK (high TT results), notably # of licenses vs # of TTOs, with a high correlation between R&D contracts and patents (→ an objective for HEP institutions)

  23. Radar graphs
      Radar graphs give an easy way to compare more than three axes of values at a glance, and to see how the results on each axis evolve relative to the others. We have defined two categories of radar graphs (a minimal plotting sketch follows below):
      • Graphs 'Criteria': a radar graph comparing all institutions on a selected KPI; in this way, each institution can compare its results with the others
        • NB: values are normalised to 1000 FTE per institution, then rescaled between 0 and 1 to facilitate comparisons
      • Graphs 'Institutes': a radar graph of the strengths and weaknesses of each institute, to know where to put the effort
        • NB: values are normalised to 1000 FTE per institution, then rescaled between 0 and 1
      The following figures are shown as examples. Each institution that answered the questionnaire will receive its full set.
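A 'Criteria' or 'Institute' radar graph of this kind can be drawn with matplotlib as in the sketch below; the institution names and normalised values are invented for illustration.

```python
# Minimal radar-graph sketch: one closed polygon per institution over the
# normalised (0-1) KPI values. Names and values are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

kpis = ["invention disclosures", "priority patents", "licensed patents",
        "IP agreements", "R&D agreements"]
institutions = {
    "Institute A": [0.8, 0.6, 0.4, 0.7, 0.9],   # hypothetical normalised values
    "Institute B": [0.3, 0.5, 0.9, 0.2, 0.6],
}

angles = np.linspace(0, 2 * np.pi, len(kpis), endpoint=False).tolist()
angles += angles[:1]                             # repeat the first angle to close the polygon

ax = plt.subplot(polar=True)
for name, values in institutions.items():
    data = values + values[:1]
    ax.plot(angles, data, label=name)
    ax.fill(angles, data, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(kpis)
ax.set_ylim(0, 1)
ax.legend(loc="upper right")
plt.savefig("radar_example.png")
```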

  24. Graph 'Criteria' for HEP institutions
      Example of a 'Criteria' graph (performance of each institution per KPI):
      • radar graphs show that each institution is specific and has its own strengths & weaknesses
      • the high performance obtained by some institutions should be regarded as an objective by the others, and improved each year

  25. Graph 'Institute' compared with other institutions
      Example of 'Institutes' graphs (strengths & weaknesses per institution) for two institutions: these graphs show where the weaknesses of your institution lie, and where you have to work with the institution's management... for better results next year.

  26. Report and booklet structure
      Booklet structure (in italics, chapters pasted from the report):
      • 1 Purpose
      • 2 Scope and methodology of this survey
      • 3 Indicators selected (and their meaning)
      • 4 Analysis and results
      • 5 Recommendations for improvement
      • 6 Future plans
      • 7 Summary of conclusions
      Distribution:
      • CERN Council
      • PP institution Directors
      • Policy makers
      • TTN members
      • Other comparable networks
      • European Commission?
      • Specific distribution to questionnaire senders, with added figures
      Presented at the last TTN meeting (December 2009)

  27. Work still to do
      • Report to the council
      • Booklet
      • New questionnaire

  28. Next steps
