Data and Metrics for the DOD SBIR and STTR Programs: Proceedings of a Workshop (2024)

Suggested Citation: "3 Perspectives on Conventional Innovation and Commercialization Metrics." National Academies of Sciences, Engineering, and Medicine. 2024. Data and Metrics for the DOD SBIR and STTR Programs: Proceedings of a Workshop. Washington, DC: The National Academies Press. doi: 10.17226/27984.

3

Perspectives on Conventional Innovation and Commercialization Metrics

Possible Metrics Discussed by the Presenters

  • Transition rates for technologies to Phase III awards (Bresler)
  • The total SBIR funding a company has received over its lifetime (Bresler)
  • Expenditures on SBIR programs compared with direct economic returns, including job creation, outside investments, value of acquisitions, and taxes generated by the technologies developed (Friesenhahn)
  • Success stories for individual technologies developed through SBIR funding (Friesenhahn)
  • For the life sciences, venture capital funding, publications, trademarks, patents, clinical trials, 510(k)s, device premarket approvals, and drug approvals (Sampat)

Building on the previous discussion, the second panel of the workshop focused on ways of measuring outputs to evaluate SBIR/STTR programs, including approaches that have been used in the past and are now being extended in new directions.

USES OF EXISTING DATABASES

As part of her interest in how small innovative companies attempt to break into the defense market and navigate it successfully, Amanda Bresler, chief strategy officer of PW Communications, began connecting with companies that had participated not just in the SBIR program but in various other defense-sponsored innovation programs. “Many of them had the same experience,” she recounted. “We performed flawlessly in our project; we had every expectation that we would continue to serve the needs of the government thereafter; we were in conversations with various defense and government end users; and then, for reasons we cannot understand, those conversations just died, and we pivoted back to the commercial market”—despite continued demand by the government for the capabilities that those companies could provide.

Using a dataset of about 7,700 innovation program awards over a 5-year period and a million-plus subsequent defense contract awards, Bresler assessed the performance of participant companies in the defense market at large scale.1 She found that the 7,700 awards corresponded to only about 1,100 unique entities, making her study the first to document what some have termed “SBIR mills.” Furthermore, the majority of the companies that participated in these programs did not win subsequent contracts.

Bresler drew on these findings to urge that people think about data and metrics for assessing the SBIR program in two distinct ways. The first is to use several available databases to look at the composition of the SBIR program in terms of who wins Phase I and Phase II awards irrespective of transition rate. Those data could be used to answer a number of relevant questions. Who is winning these contracts? What is the nature of these companies? Do the awards reflect the DOD’s priorities? Are the awards going to the right companies given the DOD’s challenges? What are companies’ previous experiences with government agencies? Do they have previous experience with the SBIR program, either as awardees or subcontractors? For example, Bresler observed, about 90 percent of all Phase I awards go to entities that have prior government business, which suggests that new entrants face challenges in accessing the program. “Understanding the composition of the portfolio is critical to a meaningful assessment of the program.”

The second way to look at metrics involves transition rates or performance. PW Communications has looked specifically at the extent to which companies funded through the SBIR program go on to serve the needs of government end users, as reflected in three primary metrics: Phase III awards; contract awards that are not coded as Phase IIIs but occur after their SBIR work; and subcontract awards that occur after their SBIR work. The links between SBIR awards and subsequent work are not always clear, which introduces some uncertainty in the data, but ways of reducing this uncertainty exist, such as figuring out whether a particular award corresponds to a weapon system.
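As a concrete illustration, the sketch below sets up the three transition metrics by joining a hypothetical table of SBIR awards to a hypothetical table of subsequent contracts. All file names and columns are invented for illustration; as the text notes, real award and contract data would first require careful entity resolution and, where possible, matching to weapon systems.

```python
# Minimal sketch of the three transition metrics described above.
# File names and columns are hypothetical; real award and contract data
# would need careful entity resolution before this join is meaningful.
import pandas as pd

awards = pd.read_csv("sbir_awards.csv", parse_dates=["award_date"])
contracts = pd.read_csv("defense_contracts.csv", parse_dates=["contract_date"])

# Attach each company's first SBIR award date, then keep only work that
# began afterward.
first_award = awards.groupby("company_id")["award_date"].min().rename("first_sbir")
merged = contracts.join(first_award, on="company_id")
after = merged[merged["contract_date"] > merged["first_sbir"]]

# Metric 1: Phase III awards; Metric 2: later prime contracts not coded
# as Phase III; Metric 3: later subcontract awards.
phase3 = after[after["phase"] == "III"]
later_primes = after[(after["phase"] != "III") & (after["kind"] == "prime")]
later_subs = after[after["kind"] == "subcontract"]

rate = after["company_id"].nunique() / awards["company_id"].nunique()
print(f"Awardees with any subsequent defense work: {rate:.1%}")
```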

Bresler drew several conclusions from the results of this study. Although she does not like to use the term “returns on investment” in thinking about the SBIR program, since the environment in which the program works differs from a commercial market, one way of calculating a comparable measure is looking at the total SBIR funding a company has received over its lifetime. “That’s a summary statistic, and it tells you something about the nature of the portfolio, the composition of the portfolio, and the transition rate.” However, her research has found that there is no direct correlation between giving a company more SBIR funding and the likelihood of that company transitioning a technology for government end users. “There’s a lot of talk about programs that focus on injecting capital into the system in providing bridge financing, and these are good programs, but you cannot be blinded to the limitations of such funding in light of other upstream challenges.”

___________________

1 Amanda Bresler. 2018. Bridging the gap: Improving DoD-backed innovation programs to enhance the adoption of innovative technology throughout the armed services. Proceedings of the 15th Annual Acquisition Research Symposium. https://dair.nps.edu/bitstream/123456789/1568/1/SYM-AM-18-056.pdf

She also mentioned a 2021 analysis of the underlying challenges that face small businesses, including those in the SBIR program. For instance, of more than a million archived DOD solicitations, 70 percent required responses within 21 days of posting, and 30 percent required responses within 10 days or less. “You can’t, as a small business that doesn’t have entrenched relationships inside the department or the government, reasonably be competitive in that kind of landscape.” Similarly, an analysis of the same million-plus archived DOD solicitations for readability showed that only about 4 percent were written in plain English. “I’d encourage you, as we’re thinking about data and metrics, to take a rigorous lens at the metrics used to assess programs. Think about ways and opportunities to leverage the tools that are out there . . . to box in a quantitative measure around some of the challenges that are not unique to the SBIR program but have thwarted transitions over time.”
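The two solicitation screens Bresler described lend themselves to a simple computation. The sketch below assumes a hypothetical table of archived solicitations with posting dates, response deadlines, and full text; the plain-language threshold (a Flesch Reading Ease score of 60 or above) is an illustrative assumption, not the cutoff used in the analysis she cited.

```python
# Sketch of the response-window and readability screens on a hypothetical
# solicitations table; thresholds and column names are assumptions.
import pandas as pd
import textstat  # third-party readability package (pip install textstat)

sols = pd.read_csv("dod_solicitations.csv",
                   parse_dates=["posted_date", "response_due"])

# Share of solicitations with short response windows.
window_days = (sols["response_due"] - sols["posted_date"]).dt.days
print(f"Due within 21 days: {(window_days <= 21).mean():.0%}")
print(f"Due within 10 days: {(window_days <= 10).mean():.0%}")

# Share written in "plain English" under the assumed readability cutoff.
scores = sols["text"].map(textstat.flesch_reading_ease)
print(f"Flesch score >= 60: {(scores >= 60).mean():.0%}")
```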

NATIONAL ECONOMIC IMPACTS

Ray Friesenhahn, SBIR and technology transition manager at TechLink, described several of his organization’s approximately 20 economic analyses of SBIR/STTR and federal agency technology transfer programs.2 TechLink aims for response rates of approximately 95 percent in its economic impact analyses, which it achieves using trained, motivated teams of professionals who work under letters of authorization from the agencies being studied. To minimize intrusion and demands on companies, financial data are reported only in aggregate, with no individual results provided to government agencies except for success stories that companies volunteer. In-depth background research includes data validation and contact with current and prior company representatives. The companies have generally been supportive, in part because the data generated by the studies are used for program reauthorization.

A 2018 study, following up on two previous studies for the Air Force and Navy, looked at nearly 17,000 Phase II awards from all DOD components, made to over 4,500 companies and initiated from fiscal years 1995 through 2012.3 Analyzed by the Leeds School of Business at the University of Colorado Boulder using the well-known IMPLAN method, the study uncovered sales of $121 billion in new products and services, $28 billion of which were sales either directly to the U.S. military or to prime contractors for military applications. The total nationwide economic impact, including direct, indirect, and induced impacts, was $347 billion, which represents a 22-to-1 return on the DOD’s investment of $14.4 billion provided to companies nationwide via about 17,000 separate SBIR/STTR Phase II contracts. The analysis also documented the creation of 1.5 million jobs with an average compensation of about $73,000. These figures “were very conservative,” said Friesenhahn, because the study wanted to verify the numbers and did not want to exaggerate.

___________________

2 Information on the studies is available at https://techlinkcenter.org/economic-impact-reports

3 Details of the study are available at https://www.sbir.gov/impact/impact-reports

Other economic impacts included outside investment of at least $9.5 billion, with 666 companies receiving venture capital or angel funding. Almost 500 companies were acquired based primarily on their DOD SBIR innovations, and the value of the acquisitions was at least $35.5 billion, though this number, too, is understated because of company underreporting. Significantly, the taxes generated by the contracts’ total impact amounted to $39.5 billion, which was almost three times the initial expenditure of SBIR funding. Furthermore, that number, and other impacts, will continue to grow, Friesenhahn observed, as the programs continue to exert their effects.
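As a quick back-of-envelope check on the “almost three times” figure, using the $14.4 billion program expenditure cited earlier:

\[
\frac{\text{taxes generated}}{\text{initial SBIR/STTR expenditure}}
  = \frac{\$39.5\ \text{billion}}{\$14.4\ \text{billion}} \approx 2.7
\]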

A more detailed analysis of these numbers, presented by Friesenhahn, shows that the average total return on investment was much larger for pre-2000 awards, the period that also produced the largest outliers in terms of returns (Figure 3-1). The number and total value of SBIR/STTR awards grew rapidly after 2002, following a dip in the return on investment after the September 11, 2001, terrorist attacks. Military sales initially grew faster than commercial sales, although long-term military sales growth was eventually surpassed by commercial sales growth. These results varied across the Army, Air Force, and Navy, with Air Force revenues and returns on investment generally larger than the Army’s. The Navy, which has traditionally focused on military transition, had a higher proportion of military sales, with such sales exceeding commercial sales in 15 of the 18 years studied. In general, STTR commercialization results were comparable to those for SBIR, although the smaller number of STTR awards lowers the probability of large individual successes. Overall military sales for STTR contracts were generally comparable to those for SBIR, except for relatively low STTR military sales after 2003, a topic that bears future study.

In a 2014 study of Air Force SBIR/STTR programs, companies were grouped into five tiers based on the total number of Phase II awards they had ever received, from any agency (not just the DOD), up until 2013.4 Tier 5 companies had received more than 100 Phase II awards, while Tier 1 companies had received fewer than 4. Relative to their total Phase II awards, Tier 1 firms averaged five times the sales of Tier 5 firms; a sketch of this tier calculation follows the footnote below.

___________________

4 TechLink, in collaboration with the Business Research Division, Leeds School of Business of the University of Colorado Boulder. 2014. The Air Force impact to the economy via SBIR/STTR. Bozeman, MT: TechLink. https://www.sba.gov/sites/default/files/2020-11/Air_Force_Impact_to_the_Economy_via_SBIR-STTR_-_2014_Economic_Impact_Study_2015_03.pdf
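A minimal sketch of the tier grouping and the sales-per-award comparison, assuming a hypothetical firm-level table. Only the Tier 1 (fewer than 4 awards) and Tier 5 (more than 100 awards) boundaries come from the text above; the middle cutoffs are invented for illustration, and the study’s report defines the actual set.

```python
# Sketch of the 2014 study's tier analysis. The Tier 2-4 boundaries and
# the input table are illustrative assumptions.
import pandas as pd

firms = pd.read_csv("phase2_firms.csv")  # assumed: company_id, n_phase2, total_sales

def tier(n: int) -> int:
    if n < 4:
        return 1
    if n < 15:       # assumed boundary
        return 2
    if n < 50:       # assumed boundary
        return 3
    if n <= 100:     # assumed boundary
        return 4
    return 5         # more than 100 Phase II awards

firms["tier"] = firms["n_phase2"].map(tier)

# Average sales relative to total Phase II awards, by tier.
agg = firms.groupby("tier")[["total_sales", "n_phase2"]].sum()
print(agg["total_sales"] / agg["n_phase2"])
```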

FIGURE 3-1 The total economic value of SBIR awards yields a high return on investment (green and yellow lines).
NOTE: DoD = U.S. Department of Defense, R&D = research and development, ROI = return on investment, SBIR = Small Business Innovation Research, STTR = Small Business Technology Transfer.
SOURCE: Excerpted from presentation by Ray Friesenhahn.

The slopes of the lines generated by these returns give a sense of how long it takes to get significant returns on SBIR funding, said Friesenhahn. Some technologies, such as software, can generate returns quickly, while others take longer. High returns on investment may arrive within a few years but more typically take 10–12 years, especially for more complex technologies.

Overall, SBIR/STTR programs have produced many prominent successes for military, civilian, and dual-use applications, Friesenhahn concluded, with nearly 60 percent of Phase II awards resulting in sales.

METRICS USED IN THE EVALUATION OF SBIR/STTR PROGRAMS AT THE NIH

Bhaven Sampat, professor in Arizona State University’s School for the Future of Innovation in Society and School of Public Affairs and research associate at the National Bureau of Economic Research, provided an overview of some of the conventional and nonconventional metrics used in an evaluation of SBIR/STTR programs at the National Institutes of Health (NIH) conducted by a committee of the National Academies of Sciences, Engineering, and Medicine.5 The NIH is the largest single funder of biomedical research and development in the world, with an annual budget of more than $40 billion. It consists of 27 institutes and centers, each with different missions and budgets, and most of which are involved in the SBIR program. The program operates quite differently in different institutes, Sampat noted, which “has implications for thinking about evaluation.”

The NIH is not primarily, nor has it been historically, focused on developing technologies for the operating requirements of its parent agency, the Department of Health and Human Services, or for agency procurement. Rather, it is focused on the development of technologies to be adopted and used by a range of decentralized actors in the U.S. health care system, including private-sector actors such as the drug and medical device industries.

For some of the institutes and centers, the types of metrics conventionally used to look at life science innovation, such as patents, publications, clinical trials, and Food and Drug Administration approvals, work reasonably well. But other institutes may be focused on other outcomes, such as changes in health care practice or clinical guidelines. “There was no one indicator—or even a small set of indicators—that worked well for all of the institutes and centers, so we instead focused on a suite of indicators.”

Three challenges arose in evaluation, Sampat observed. The first is the question of additionality, or of what would have happened to the firms without SBIR funding. The committee approached this issue by looking at the available data for three categories of firms: firms whose applications were funded, firms whose applications were discussed but not funded, and firms whose applications were not discussed at all.
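A sketch of how such a three-group comparison might be tabulated, using hypothetical application-level data and an invented outcome column:

```python
# Sketch of the additionality comparison across the committee's three
# applicant categories; file and column names are hypothetical.
import pandas as pd

apps = pd.read_csv("nih_sbir_applications.csv")  # firm_id, status, pubs_5yr

for status in ["funded", "discussed_not_funded", "not_discussed"]:
    group = apps[apps["status"] == status]
    print(f"{status}: mean publications within 5 years = "
          f"{group['pubs_5yr'].mean():.2f}")
```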

The second challenge was choosing the right outcomes. These had to cover the goals of different institutes and centers and also be things that could be reliably measured for both funded and unfunded entities over a relatively long period of time.

The third challenge was linking both funded and unfunded firms to the outcome measures. The names of firms can be different in different databases, although algorithms can be used to deal with this issue. More problematic is when a firm is bought or its name changes; “that’s a bit harder to track.”
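A minimal sketch of the kind of name matching such algorithms perform, using only the Python standard library. Real entity resolution is considerably more involved, and, as noted, acquisitions and name changes defeat simple string similarity.

```python
# Sketch of firm-name linking across databases via normalized string
# similarity; the threshold and suffix list are illustrative assumptions.
import re
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase, strip punctuation, and drop common corporate suffixes."""
    name = re.sub(r"[^\w\s]", "", name.lower())
    return re.sub(r"\b(inc|llc|corp|co|ltd)\b", "", name).strip()

def same_firm(a: str, b: str, threshold: float = 0.9) -> bool:
    """Treat two names as the same firm if normalized similarity is high."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

print(same_firm("Acme Robotics, Inc.", "ACME ROBOTICS LLC"))  # True
```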

The committee ended up using eight outcome measures:

  • Venture capital funding
  • Publication
  • Trademark
  • Patent
  • Clinical trial
  • 510(k)
  • Device premarket approval
  • Drug approval

___________________

5 National Academies of Sciences, Engineering, and Medicine. 2022. Assessment of the SBIR and STTR programs at the National Institutes of Health. Washington, DC: The National Academies Press. https://doi.org/10.17226/26376

One interesting aspect of these outcomes was using trademark as well as patent data, said Sampat. The propensity to patent differs across institutes, but many of them generate products that can be tracked through the trademark register.

As an example of the results from this analysis, Sampat showed a graph of publications for firms falling into the three categories, along with a fourth category of firms that received the most SBIR funding after their first application (Figure 3-2). The results clearly differentiate firms whose grants were funded from those whose grants were not, although Sampat added that differences were beginning to emerge before the grants were funded. “There are pre-trends that made it difficult to make causal statements about the impact of the program.” Similar findings are available for the other indicators chosen by the committee.

FIGURE 3-2 Based on FY 2000–2018 NIH SBIR/STTR application data and PubMed data, firms whose applications were awarded did better than firms whose applications were discussed but not awarded or not discussed.
a Firms that fall in the top 10 percent of support from the programs from 1995 through 2019.
b Firms that fall in the bottom 90 percent of support from the programs from 1995 through 2019.
NOTE: FY = fiscal year, NIH = National Institutes of Health, SBIR = Small Business Innovation Research, STTR = Small Business Technology Transfer.
SOURCE: Excerpted from presentation by Bhaven Sampat.

In addition to the data provided to the committee by the NIH as part of the evaluation, a public database called NIH RePORTER provides bulk administrative data on NIH-funded grants and contracts, including SBIR/STTR data. It also includes outcomes reported back to the NIH on such indicators as publications, patents, and clinical trials. This was useful to the committee for establishing links to outcomes for funded firms, but the committee also found considerable underreporting of particular outcomes. For example, comparing RePORTER to “government interest” statements, which are available from the U.S. Patent and Trademark Office, showed that only 30 percent of SBIR/STTR patents are reported back to the agency, compared with 60 percent for R01 grants.

PANEL DISCUSSION

In response to a question from moderator Kyle Myers about whether the valley of death can be observed in the data she has collected, Bresler noted that identifying a valley of death is “very subjective.” For example, a funding program would not be considered a success if every investment progressed through Phase I and Phase II awards to reach end users in the government. “That would be a signal that the program was too risk averse.” The question then becomes what the appropriate success rate should be. Another consideration is whether SBIR funding results in a technology that does not meet the needs of government but finds other uses, so that the company continues to thrive. One way to analyze this would be to cross-reference information on government procurements with the capabilities of companies that have received SBIR grants. Finally, companies being considered for an SBIR grant are not assessed on the extent to which they have commercialized technologies in the past; requiring companies to report their transition data could shift more resources toward firms with a higher transition probability.

Friesenhahn pointed out that a company may develop a very good technology with an SBIR contract, but if the contract ends before the technology is ready, the company may have to look for other funding. “That’s how some of these companies evolve into the multiple award winners—they have to keep bringing in money.” The data he presented cannot be used to determine whether a company went elsewhere for funding or faltered because it could not get through the valley of death.

Sampat observed that the term was not used in the NIH report, but the idea is often applied to research funded by the NIH in general, beyond the SBIR/STTR programs. One question is whether a research result has commercial potential—Sampat quipped that maybe the valley of death “is filled with a pile of junk.” He also noted that the instruments available to an agency like the NIH for solving that problem are limited partly because of how the agency defines its mission, although SBIR/STTR could be reconceptualized as a program meant to commercialize NIH basic research that is not being picked up by private firms through other channels.


In response to a question about how individual technologies can be tracked from their earliest development to the release of a product, Bresler noted that some licensing information does appear in the subcontract data, but not always. Databases can also reveal companies that have chosen to stop working with the government or have gone out of business, and text analysis or other means of information gathering can sometimes reveal linkages. But the connections between Phase I and Phase II awards can sometimes be obscure, as can connections between those awards and final products. She also pointed out that the research her company publishes includes only licensing that is apparent in subcontract awards or prime contracts.

A related question addressed which metrics the government is most interested in using to measure the success of SBIR programs. Friesenhahn noted that financial data are easy to put into a chart, but the government is most interested in products. People may have good ideas, but programs like SBIR are needed to convert those ideas into products. Without such programs, “we’d never see so many of these innovations that have come out of the program.” For that reason, TechLink has also been involved in putting together success stories of SBIR awards, a number of which have been published on SBIR.gov. “There are some really good things in there that will impress people with what these companies did through innovation that they were only able to achieve because of SBIR funding.”

Bresler added that companies have forms of support other than SBIR, such as other DOD-funded assistance programs and traditional contracts, so it can be hard to assess the specific contributions of SBIR programs to a technology. Instead, it is necessary to take a “big picture approach and consider how all of these instruments play together and what data and other metrics are available to assess these different things.”

Finally, a questioner asked whether a portfolio approach in SBIR programs, both within agencies and across the federal government, might support companies that would otherwise not receive such support. Friesenhahn agreed that SBIR-supported technologies may have applications that would be of interest to other government agencies. National SBIR conferences are one place for such coordination to occur and for learning what agencies are looking for, and some coordination occurs within individual agencies. Because there can be different applications for technologies, “[t]he portfolio is very important for the entrepreneur,” he said. “The advisors on the ground in each state tend to know what agencies are looking for and where to advise these companies to go for their portfolios.”

Bresler also noted that some of the companies her team has studied were active with non-DOD agencies for some time but then made their initial foray into military technologies through the SBIR program. In such cases, the DOD-funded SBIR award can change the trajectory of a company. Similarly, institutes and centers within the NIH tend to view their SBIR grants as part of a broader portfolio that collectively serves the needs of the sponsoring organizations, Sampat said.
