

  1. Framework Contract 30-CE-0121765/00-57 (Supply of services for the further development, reinforcement and promotion of benchlearning). Benchlearning Final Conference: Measuring eGovernment Impact. Giancarlo Senatore, RSO Senior Partner. Ghent, Nov 30th, 2009

  2. INDEX
  • THE PROJECT
  • BENCHLEARNING
  • THE PILOTS
  • BENCHLEARNING COMMUNITY

  3. The project
  • Rooted in the findings of eGEP (the eGovernment Economics Project), benchlearning is a means to:
  • test the comparability of impact indicators;
  • build measurement awareness and capacities;
  • share good practices.
  • On a voluntary and flexible basis, 12 public agencies covering 10 European countries have freely committed to join 3 pilot eGovernment benchlearning exercises over a two-year time span.
  • Through systematic data gathering, the agencies will test whether eGovernment services and applications are finally delivering the expected outcomes.

  4. Back to 2005: the eGovernment Economics Project (eGEP)
  • Expenditure Study: eGOV cost-monitoring methodology; expenditure estimate for EU25: total ICT € 36.5 billion (2004), eGOV only € 11.9 billion (2004).
  • Measurement Framework: about 90 indicators; implementation methodology.
  • Economic Model: eGovernment productivity and GDP growth; scenarios show that future eGovernment research and programmes (2005-2010) could boost EU25 GDP by up to 1.54 percent.

  5. The eGEP measurement framework
  • Net costs: set-up, provision, maintenance.
  • Efficiency, generating financial and organisational value: cashable financial gains; better empowered employees; better organisational and IT architectures; inter-institutional cooperation.
  • Democracy, generating political value: openness and participation; transparency and accountability.
  • Effectiveness, generating constituency value: reduced administrative burden; increased user value and satisfaction; more inclusive public services.

  6. How could it be used?
  • EU25 benchmarking (very simple, fully comparable indicators): the EU must agree with Member States on indicators and on a methodology for new benchmarking.
  • National-level monitoring of eGOV (less simple indicators, some comparability problems): a national-level unit can impose indicators top-down or build consensus on the most comparable ones.
  • Micro-level business case and measurement (sophisticated indicators, no comparability problems): a public agency can select any of the eGEP indicators, use them first for ex-ante business cases and then for steady, continuous measurement, and use the eGEP implementation tools.

  7. Lessons from eGEP: impact measurement difficulties
  Difficulty scores, listed as Cooperation / Comparability / Feasibility (4 = high, 3 = medium-high, 2 = medium, 1 = low, 0 = null); an aggregation sketch follows this list:
  • International (EU25), policy system benchmark: 4 / 4 / 4, overall difficulty HIGH.
  • Member State (holistic), public policy benchmark: 3 / 3 / 3, overall difficulty MEDIUM-HIGH.
  • Member State (within vertical and/or region), organisational benchmark: 2 / 1 / 3, overall difficulty MEDIUM.
  • Individual public agency (voluntary), measurement/internal benchmark: 1 / 0 / 2, overall difficulty LOW.
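The slide does not say how the overall rating is derived from the three sub-scores. A minimal sketch, assuming the overall level is simply the rounded mean of the sub-scores; this rule is an assumption, but it reproduces all four rows of the table above:

    # Hypothetical aggregation of eGEP difficulty sub-scores into an overall
    # rating. The mean-based rule is an assumption, not stated on the slide;
    # it happens to match all four rows of the table.
    LABELS = {4: "HIGH", 3: "MEDIUM-HIGH", 2: "MEDIUM", 1: "LOW", 0: "NULL"}

    def overall_difficulty(cooperation: int, comparability: int, feasibility: int) -> str:
        """Map three 0-4 difficulty sub-scores to an overall label."""
        mean = round((cooperation + comparability + feasibility) / 3)
        return LABELS[mean]

    assert overall_difficulty(4, 4, 4) == "HIGH"          # International (EU25)
    assert overall_difficulty(3, 3, 3) == "MEDIUM-HIGH"   # Member State (holistic)
    assert overall_difficulty(2, 1, 3) == "MEDIUM"        # Vertical and/or region
    assert overall_difficulty(1, 0, 2) == "LOW"           # Individual agency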

  8. 2008-2009: the Benchlearning challenge
  A bottom-up collaborative benchmarking based on a peer-to-peer experimental exchange among fairly comparable public agencies from at least two different EU Member States, designed as a symmetric learning process, that (…) will implement and calculate more sophisticated indicators in a chosen impact area of the ICT-enabled services the selected agencies provide, and in the process will build transformative capacities.

  9. Why benchlearn?
  • To benchmark only some eGEP indicators: the simplest and most comparable.
  • To boost the public sector's impact evaluation capabilities: focus on the most sophisticated impact indicators; measurement capacities are built bottom-up.
  • To provide the involved agencies with tangible benefits: an opportunity to look at process complexity; identify enabling and hindering factors.

  10. Benchlearning = bottom-up benchmarking
  • Benchlearning is: voluntary, bottom-up and learning-oriented; flexible, with no need for uniform, rigid indicators.
  • Gradually scalable from micro to meso and macro: groups of similar organisations; groups of similar verticals/regions; groups of similar countries.
  • Provides insights and learning on the eGOV value chain: key drivers and success factors; main barriers; organisational processes and input.

  11. What benchlearning aims to achieve (ACTIVITY → AIM)
  • Analysis of eService set-up and delivery processes → to understand the success factors and barriers behind the processes.
  • Elaboration of new impact indicators → to collaboratively test their feasibility and comparability.
  • Peer-to-peer exchange of experiences → to build awareness of good practices.
  • Transfer of knowledge on measurement tools → to enable PAs to measure their own performance.
  Overall: to extrapolate the promising areas where the EU can become a global leader.

  12. Benchlearning vs. benchmarking
  [Diagram: networks of agencies Ag.1 to Ag.13] TEST AND LEARNING vs. BEST IN CLASS

  13. Benchlearning vs. benchmarking: outcomes
  [Diagram: agencies Ag.1, Ag.8, Ag.13] CAPACITIES AND LESSONS vs. RANKING (1. Ag.8, 2. Ag.1, 3. Ag.13)

  14. How benchlearning works (1/2): first year measurement
  • Set-up: letter of intent from the participating agencies; kick-off meeting with all participating agencies.
  • As-is and mainstreaming: review of existing measurement systems and data; analysis of organisational strategy and context; draft report on indicators and preliminary measurement.
  • First full measurement (or zero measurement): data gathering instructions to agencies; remote support to agencies to gather the data; validation of data and calculation of indicators (a sketch of this step follows below).
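The slides do not specify how the validation and calculation step is implemented. A minimal sketch, assuming each agency submits raw counts that the facilitator checks for completeness and consistency before computing a take-up indicator; all field names and the indicator itself are hypothetical:

    # Hypothetical validation-and-calculation step for the "first full
    # measurement". Field names and the take-up indicator are illustrative
    # assumptions, not taken from the project deliverables.
    REQUIRED_FIELDS = ("transactions_online", "transactions_total", "period")

    def validate(submission: dict) -> list[str]:
        """Return a list of problems found in one agency's data submission."""
        problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in submission]
        if not problems and submission["transactions_online"] > submission["transactions_total"]:
            problems.append("online transactions exceed total transactions")
        return problems

    def online_takeup(submission: dict) -> float:
        """Share of transactions handled through the eService channel."""
        return submission["transactions_online"] / submission["transactions_total"]

    data = {"transactions_online": 40_000, "transactions_total": 100_000, "period": "2008"}
    assert validate(data) == []
    print(f"online take-up: {online_takeup(data):.0%}")  # online take-up: 40%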

  15. How benchlearning works (2/2)
  • Second year measurement: set-up (same as Y1); as-is and mainstreaming (same as Y1); second full measurement (same as Y1).
  • Exchange activities: continuous exchange of information (www.epractice.eu/community/benchlearning); inter-agency workshops.
  • Sustainability actions (by end of Dec 2009): provision to the agencies of a measurement organisational model (processes and roles); final recommendations and final report.

  16. Expected project outcomes: eGEP 2.0
  • Pilot 1, EFFICIENCY: public sector information indicator; efficiency indicator.
  • Pilot 2, ADMINISTRATIVE BURDEN REDUCTION: Standard Cost Model based indicator; work in progress on a number-of-data-fields indicator.
  • Pilot 3, CITIZEN CENTRICITY: plurality of subjective and objective metrics; work in progress on a combined index.
  • Together: a simplified version of the eGEP Measurement Framework, eGEP 2.0.

  17. Pilot 1: Efficiency gains
  About the pilot: pilot measurement of the "efficiency gains" of cadastral eServices:
  • Development of a set of indicators to measure the efficiency gains and savings due to the delivery of cadastral eServices (a sketch of one such calculation follows below);
  • Definition of the indicators in line with the agencies' requests to assess the added value of online cadastral information supply in terms of internal and social impact;
  • Data gathering and analysis (volumes, eService costs, organisation and customer satisfaction programmes);
  • Shared experiences among agencies.
  Agencies involved:
  • Agenzia del Territorio (Italian National Cadastre)
  • Oficina Virtual del Catastro (Cadastre Virtual Office of Spain)
  • Lantmäteriet (National Land Survey of Sweden)
  Observers: regional agencies (Emilia-Romagna, Catalonia)
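The deck does not define the efficiency-gain indicator itself. A minimal sketch, assuming one common formulation: the saving is the unit-cost difference between the counter channel and the online channel, multiplied by the online volume; the formula and all figures below are invented for illustration:

    # Hypothetical efficiency-gain calculation for a cadastral eService.
    # The formula (unit-cost difference x online volume) and the figures are
    # illustrative assumptions, not the pilot's actual indicator.
    def efficiency_gain(cost_counter: float, cost_online: float, online_volume: int) -> float:
        """Estimated saving from shifting transactions to the online channel."""
        return (cost_counter - cost_online) * online_volume

    saving = efficiency_gain(cost_counter=12.50, cost_online=1.80, online_volume=250_000)
    print(f"estimated annual saving: EUR {saving:,.0f}")  # EUR 2,675,000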

  18. Pilot 2: Administrative burden
  About the pilot: piloting of indicators in the administrative burden reduction field.
  • Proof-of-concept service: business registration.
  • Data sources: Standard Cost Model (a worked example follows below); focus groups and interviews.
  • Methodology: specific focus on users (both businesses and civil servants); qualitative and quantitative approach to burden measurement.
  Agencies involved (pilot agencies, G2B):
  • Belgium: FPS Economy
  • Slovenia: Ministry of Public Administration
  • Greece: Ministry of Public Administration
  Project observers: Fedict, Dutch Ministry of Interior, Greek Information Society Observatory
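For readers unfamiliar with the Standard Cost Model, its core formula is: burden = price × quantity, where price is the hourly tariff times the time an information obligation takes, and quantity is the affected population times the yearly frequency. A minimal sketch with invented figures for a business registration obligation:

    # Standard Cost Model: administrative burden = price x quantity,
    # with price = tariff x time and quantity = population x frequency.
    # All numeric inputs below are invented for illustration.
    def scm_burden(tariff_per_hour: float, hours_per_action: float,
                   population: int, frequency_per_year: float) -> float:
        """Yearly administrative burden of one information obligation (EUR)."""
        price = tariff_per_hour * hours_per_action
        quantity = population * frequency_per_year
        return price * quantity

    # e.g. 30,000 new businesses a year, each spending 3 hours at EUR 40/hour
    burden = scm_burden(tariff_per_hour=40.0, hours_per_action=3.0,
                        population=30_000, frequency_per_year=1.0)
    print(f"yearly burden: EUR {burden:,.0f}")  # EUR 3,600,000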

  19. Pilot 3: User-centric impact
  About the pilot: pilot measurement of the "user-centric impact" of national government portals:
  • Understand what the EC has already done on measuring eGovernment, the specific indicators developed for user-centric impact, and the benchmarking of national portals;
  • Development of a measurement framework that can be used to benchmark the user-centricity of national government portals; the framework measures 5 key aspects: content richness, service sophistication, user choice and control, quality control, and design for usability (a sketch of a combined index follows below);
  • Shared experiences among agencies.
  Agencies involved:
  • DirectGov (British eGovernment portal)
  • Service-public.fr (French eGovernment portal)
  • Mojauprava (Croatian eGovernment portal)
  • Additional eGovernment portal data collected from Italy and Hungary
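The deck mentions a combined index only as work in progress, without giving the aggregation. A minimal sketch, assuming each of the five named aspects is scored 0-100 and the index is their equally weighted mean; the scale and equal weights are assumptions:

    # Hypothetical combined user-centricity index over the five aspects named
    # on the slide. The 0-100 scale and equal weights are assumptions; the
    # pilot's actual aggregation was still work in progress.
    ASPECTS = ("content_richness", "service_sophistication",
               "user_choice_and_control", "quality_control", "design_for_usability")

    def user_centricity_index(scores: dict[str, float]) -> float:
        """Equally weighted mean of the five aspect scores (each 0-100)."""
        missing = [a for a in ASPECTS if a not in scores]
        if missing:
            raise ValueError(f"missing aspect scores: {missing}")
        return sum(scores[a] for a in ASPECTS) / len(ASPECTS)

    portal = {"content_richness": 80, "service_sophistication": 70,
              "user_choice_and_control": 60, "quality_control": 75,
              "design_for_usability": 85}
    print(f"combined index: {user_centricity_index(portal):.1f}")  # 74.0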

  20. Benchlearning groups: how to manage them
  Generate ownership:
  • Voluntary participation: participants self-interested in capacity building and learning.
  • Clear mandate and leadership buy-in: groups are not to be assembled by the facilitator.
  • Multi-stakeholder but firm governance: exchange and consensus, but with clear lines of accountability.
  Start simple:
  • Micro level only, single public organisations: 3-4 learning organisations managing homogeneous services.
  • Groups assembled from similar countries: leverage existing collaboration networks.
  • Third-party facilitators (EU contractors, governments…): intense and in-depth work.

  21. Benchlearning community within ePractice.eu
  • Having a say in the blog session;
  • Suggesting and attending new events;
  • Sharing experience and cases;
  • Exchanging ideas on eService impact measurement and evaluation;
  • Recommending documents for the Community library.

  22. What results will be presented tomorrow
  • What kind of indicators we selected for the measurement of efficiency gains, administrative burden reduction and user centricity;
  • How we managed the comparability issues due to the structural differences of public services in the different countries;
  • How we took into account the organisational processes and the eService costs for service provision in each country;
  • What the results have been in terms of learning;
  • How we quantified the benefits from eGovernment;
  • ICT-enabling of processes leads to significant savings both for businesses and internally within governments;
  • The results obtained in the pilots illustrate the business cases for better and more convenient eGovernment solutions;
  • …
  • …
