This note sets out the full methodology underlying the PIPE University Impact Index: how each scoring dimension is constructed, what data sources are used, how institutions are normalised against one another, the specific model used to calculate friction, and the limitations that apply to the current dataset. It is intended to allow independent scrutiny of the index and to support any institution that wishes to query or contextualise its position.
The index covers 346 universities across 13 countries, scored on six equal-weight dimensions. It is updated as new data becomes available. Questions about individual institution data should be directed to [email protected].
Overview of the scoring model
Each institution receives a composite impact score on a 0–100 scale. The score is derived from six dimensions, each carrying an equal weight of 16.67 points. Scores are normalised within currency peer groups rather than globally, so that a Belgian university is compared against other EUR-group institutions and not against MIT. This removes the distortion that would otherwise arise from comparing absolute income figures across currencies and institutional sizes.
The seven currency peer groups are: GBP (140 UK institutions), EUR (101 institutions across Belgium, France, Germany, Italy, the Netherlands, Portugal and Spain), USD (45 US institutions), AUD (27 Australian institutions), CAD (15 Canadian institutions), SEK (10 Swedish institutions), and NZD (8 New Zealand institutions).
Within each peer group, the maximum value observed across all institutions sets the normalisation ceiling for that dimension. An institution scoring at the peer-group maximum receives the full 16.67 points for that dimension; one scoring at zero receives none. Because the normalisation is within peer group, a score of 65 at a UK institution means something different from a score of 65 at a US institution; the two are not directly comparable across currencies.
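In formula terms, using illustrative names (value is an institution's raw figure on a dimension, peerGroupMax the highest figure for that dimension in its currency peer group):
dimensionScore = (value / peerGroupMax) × 16.67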
The six scoring dimensions
1. Patent productivity
Measured as patents filed per 100 research FTE per year. Research FTE is calculated as senior academic staff multiplied by their research effort fraction, plus postdoctoral researchers. This normalisation removes the scale advantage that large institutions would otherwise have over smaller ones.
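In the notation used for the formulas later in this note (researchFraction, the research effort fraction, and patentsFiled are illustrative names):
researchFTE = (seniorStaff × researchFraction) + postdocs
patentProductivity = (patentsFiled / researchFTE) × 100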
Data sources: UK: HESA HE-BCI (Higher Education Business and Community Interaction) survey. United States: AUTM (Association of University Technology Managers) annual licensing survey, supplemented by IPEDS institutional data. Australia: HERDC (Higher Education Research Data Collection). Other countries: national equivalents and institutional publications where available.
Limitation: Patent counts reflect filings, not grants. A filed patent that is subsequently abandoned or refused counts the same as one that proceeds to grant. Institutions with a policy of conservative filing (only filing on strong commercial cases) may therefore score below their true commercial IP strength on this dimension.
2. Spinout formation rate
Measured as new spinout companies formed per 100 senior academic staff per year. Senior academic staff is used as the denominator rather than total research FTE because spinouts typically originate with principal investigators and senior researchers rather than with the full research workforce.
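Expressed in the same notation (spinoutsFormed is an illustrative name):
spinoutRate = (spinoutsFormed / seniorStaff) × 100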
Data sources: As for patents above. For UK institutions, HESA HE-BCI provides spinout formation data directly.
Limitation: Spinout definitions vary between institutions and reporting systems. Some institutions count only majority-owned spinouts; others include minority-stake ventures and licence-based companies. The index uses the figures as reported in the primary data sources without adjustment for definitional variation. Institutions that apply a more inclusive definition will tend to report higher spinout rates.
3. Spinout three-year survival rate
Measured as the percentage of spinout companies still operating three years after formation. This dimension is designed to separate institutions that form durable commercial ventures from those that form spinouts for reporting or reputational purposes. A spinout that ceases operations within three years has consumed resources without generating sustained commercial value.
Data sources: HESA HE-BCI for UK. AUTM for US. Equivalent national sources for other countries. Where three-year survival data is not directly reported, it is estimated from two-year and five-year cohort data using a linear interpolation.
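Where interpolation is used, the three-year estimate sits one third of the way along the line between the two observed cohort points (survival2yr and survival5yr are illustrative names):
survival3yr = survival2yr + (survival5yr − survival2yr) × (3 − 2) / (5 − 2)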
Limitation: The index does not differentiate between the commercial performance or quality of surviving spinout companies, only their survival. A spinout generating £500m in revenue and one generating £50k in its third year both count as surviving. The survival rate is therefore a floor indicator of spinout quality, not a ceiling. Institutions with exceptionally high-performing spinout portfolios may be underrepresented by this dimension relative to their true commercialisation impact.
4. IP revenue
Measured as total income from IP activities in local currency, combining surviving spinout revenue and licence income. The formula is:
ipRevAdj = (spinIncome × spinoutStock) + (licIncome × annualLicences)
Where spinoutStock is the number of spinouts adjusted for three-year survival, and annualLicences is the annual licence count derived from the patent portfolio and licence rate. This approach prevents institutions with large spinout portfolios of mostly failed companies from scoring artificially high on this dimension.
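One construction consistent with those definitions, in the same notation (patentPortfolio and licenceRate, a per-patent licensing rate, are illustrative names; survival3yr is the three-year survival rate from dimension 3):
spinoutStock = spinoutsFormed × survival3yr
annualLicences = patentPortfolio × licenceRate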
Data sources: HESA HE-BCI for UK. AUTM for US. HERDC for Australia. Institutional annual reports for other countries.
Limitation: IP revenue figures are not cross-currency comparable. A Canadian institution reporting C$50m is not directly comparable to a UK institution reporting £50m. This is why the index normalises IP revenue within currency peer groups only. Any cross-country comparison on this dimension should be treated with caution.
Total revenue figures used in this model exclude endowment income and consultancy receipts. For institutions with substantial endowments, notably Oxford (endowment approximately £6bn) and Cambridge (approximately £3.5bn), actual institutional income is considerably higher than the model reflects. This does not affect the IP revenue dimension directly but contextualises the cost recovery calculation below.
5. TTO return on investment
Measured as IP revenue (as defined above) divided by TTO operating cost. A ratio above 1× means the TTO generates more in IP income than it costs to operate. A ratio below 1× means the TTO is a net cost centre relative to the IP income it generates.
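In the notation of the IP revenue formula above (ttoOperatingCost is an illustrative name for the TTO operating cost figure):
ttoROI = ipRevAdj / ttoOperatingCost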
This is the most diagnostic dimension in the index for assessing the structural efficiency of the commercialisation function, independent of research scale. Across the 140 UK institutions in the index, only 9 achieve a TTO ROI above 1×. Cambridge leads the UK at 11.6×; MIT leads the global dataset at 39.1×.
Data sources: TTO operating costs from HESA HE-BCI (UK), AUTM (US), and equivalent national sources. Where operating costs are not separately reported, they are estimated from total knowledge exchange expenditure using a sector-standard proportion derived from institutions where both figures are available.
Limitation: Some institutions route contract research and consultancy income through their TTO or its commercial subsidiary. In these cases, the TTO ROI figure may overstate or understate the commercialisation-specific return depending on how income is attributed. The index uses the figures as reported and does not adjust for institutional accounting practice.
6. Research cost recovery
Measured as IP revenue expressed as a percentage of total research staff cost. Total research staff cost is calculated as:
researchCost = (seniorStaff × seniorCost) + (juniorStaff × juniorCost) + (postdocs × postdocCost)
Where staff costs are in thousands of local currency per year. This is the closest publicly available equivalent to ROCE (Return on Capital Employed) for university research commercialisation. It measures what proportion of the institution's investment in its research workforce is being recovered through IP activity.
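The dimension itself, assuming ipRevAdj and researchCost are expressed in the same currency units, is:
costRecovery = (ipRevAdj / researchCost) × 100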
Values above 100% are valid and economically significant: MIT recovers approximately 197% of its research staff cost through IP income, meaning its IP activities generate nearly twice what the institution spends on the research staff who produce the underlying science. The UK median is approximately 0.1%.
EUR-group limitation: For the 101 institutions in the EUR currency group (Belgium, France, Germany, Italy, the Netherlands, Portugal and Spain), salary data was not available for five of the seven countries. For institutions in Belgium, France, Italy, Portugal and Spain, and likewise for Sweden in the SEK group, the research staff cost is estimated using a sector-median multiplier derived from the ratio of research staff cost to TTO operating cost observed across all institutions where both figures are available (median ratio: 47.2×). The cost recovery figures for these 73 institutions should therefore be read as directional estimates rather than precise calculations. German, Dutch and Canadian institutions have full salary data and their cost recovery figures are calculated exactly.
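In formula terms, the estimate applies the median ratio directly to the reported TTO operating cost:
researchCost ≈ ttoOperatingCost × 47.2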
The friction model
Friction is presented as a separate diagnostic metric rather than a scored dimension. It measures the volume of commercially viable ideas generated each month that do not reach a commercial output: a patent filing, a spinout formation, or a licence agreement.
The friction model has two components: idea generation and idea realisation.
Idea generation
The model estimates the monthly volume of commercially viable ideas generated by three staff groups at each institution: senior research-active academics, postdoctoral researchers, and doctoral students. For each group, two rates are applied:
- Raw idea rate — the proportion of the staff group that generates a commercially relevant observation per month. Calibrated from PraxisAuril Knowledge Exchange Benchmarking data and AUTM sector surveys.
- Good idea rate — the proportion of those raw ideas that are assessed as commercially viable following informal peer filtering. Calibrated from the same sources.
The resulting figure — totalGoodIdeasPerMonth — represents the estimated monthly flow of ideas that could, in principle, reach a commercial output if the institution had unlimited commercialisation capacity.
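In the notation of this note, with a (rawIdeaRate, goodIdeaRate) pair per staff group (the group-level names are illustrative):
goodIdeas(group) = headcount × rawIdeaRate × goodIdeaRate
totalGoodIdeasPerMonth = goodIdeas(seniorAcademics) + goodIdeas(postdocs) + goodIdeas(doctoralStudents)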
Idea realisation
Ideas that actually reach a commercial output in a given year are counted as: annual patent filings, annual spinout formations, and annual licence agreements. Dividing by 12 gives a monthly realisation rate. Friction is then:
friction = totalGoodIdeasPerMonth − (patents + spinouts + licences) / 12
The friction rate, used in scoring, is friction expressed as a proportion of total good ideas per month. Scoring is inverted: a lower friction rate produces a higher score.
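In the same notation (the exact inversion applied in scoring is not specified in this note; a simple linear inversion is shown for illustration):
frictionRate = friction / totalGoodIdeasPerMonth
frictionScoreInput = 1 − frictionRate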
Interpretation and limitations
Friction scores are model-derived estimates, not independently observed figures. The idea generation rates are calibrated from sector benchmarks and applied uniformly across all institutions; the model does not adjust rates for institutional type, research intensity, or disciplinary mix. An institution with a strong applied science focus may generate commercially viable ideas at a higher rate than the model assumes; one with a predominantly humanities focus may generate fewer. The relative ranking between institutions of similar type is more reliable than comparisons across very different institution types.
The friction metric captures the quantity of unrealised potential, not its quality. An institution that generates 500 good ideas per month and realises 100 of them carries more friction than one that generates 50 and realises 25: the former realises only 20% of its potential, the latter 50%, yet the latter is still leaving half of its ideas unrealised. Both have friction problems, but of different characters. Users of this metric should read it alongside the TTO ROI and cost recovery dimensions to form a complete picture of commercialisation efficiency.
The monthly idea figures across all 140 UK institutions in the index sum to approximately 2,737 commercially viable ideas per month. Of these, approximately 589 reach a commercial output. The remaining 2,148 (approximately 78%) represent the sector-wide friction estimate. This figure should be treated as a model-derived estimate calibrated from sector benchmarks, not as a directly observed measure of unrealised commercial potential.
Data sources summary
The following primary sources are used across the index. Where primary data is unavailable, estimated figures are derived from the methods described above and flagged in the methodology.
- UK: HESA Higher Education Business and Community Interaction (HE-BCI) survey (patents, spinouts, licences, TTO costs, IP income, research staff numbers).
- United States: AUTM annual licensing survey (IP income, spinouts, licences, TTO costs); IPEDS (staff numbers, institutional data).
- Australia: HERDC (research staff, funding); AusBiotech and institutional reports (commercialisation data).
- Canada: AUTM Canadian section (IP income, TTO data); Statistics Canada (staff and institutional data).
- Europe (Germany, Netherlands): national HEI statistics agencies and institutional annual reports. Full salary data available.
- Europe (Belgium, France, Italy, Portugal, Spain): national statistics and institutional sources. Salary data estimated; see the cost recovery limitation above.
- Sweden: Swedish Higher Education Authority (UKÄ) and institutional sources. Salary data estimated.
- New Zealand: Tertiary Education Commission data and institutional reports.
Peer group normalisation
Scores are normalised within seven currency peer groups (GBP, EUR, USD, AUD, CAD, SEK, NZD) rather than globally. This decision reflects the reality that IP revenue, TTO costs, and salary costs are not meaningfully comparable across currencies without adjustment for purchasing power parity, institutional funding models, and national research policy environments. Normalising within currency peer groups is an imperfect but practical approach that allows meaningful comparison among institutions operating in the same economic context.
EUR-group limitation: The EUR peer group contains 101 institutions across seven countries with substantially different research funding models, salary scales, and commercialisation cultures. A Belgian institution, one of only eight from its country in the group, is ranked against German research universities operating at a considerably different scale. This introduces within-group distortion that is not present in the GBP or USD groups, which are more internally homogeneous. Users comparing EUR-group institutions across countries should treat those comparisons as indicative rather than definitive. A future version of the index will normalise by country within the EUR group where data permits.
What the index does not measure
The index is explicitly limited to the efficiency and output of the research commercialisation function. It does not measure:
- The quality or societal impact of the underlying research.
- The commercial performance of individual spinout companies beyond three-year survival.
- Contract research income, consultancy revenue, or continuing professional development income.
- Endowment returns or philanthropic income.
- The depth or quality of industry partnerships that do not result in IP transactions.
- Regional economic impact or jobs created by spinout activity.
- The quality of support provided to researchers during the commercialisation process.
These omissions are deliberate. The index is designed to measure what is measurable with reasonable consistency across national data systems. Adding dimensions that require subjective assessment or are not reported consistently would compromise the comparability of the rankings.
Querying your institution's data
Institutions that wish to query their data, flag a reporting discrepancy, or discuss the application of this methodology to their specific context are encouraged to contact the PIPE research team directly. We will respond to all substantive queries within ten working days.
The index is updated annually as new HESA HE-BCI, AUTM, and equivalent data is published. Institutions that believe their primary source data has been misread or incorrectly applied are invited to provide the corrected source reference, and we will review and update accordingly.
Contact: [email protected] · Index page: University Impact Index · Version: 1.0, March 2026