Research Security is not a rulebook – it’s a design problem


Opinion

Australian universities have spent the better part of five years responding to research security as though it were primarily a compliance problem.

Guidelines have been issued, checklists developed, due diligence processes layered into grant applications, ethics approvals and partnership agreements.

The intent is reasonable. The results less so.

Compliance-led security tends to produce brittle outcomes. It can generate paper trails—often built under considerable time pressure and with limited guidance from government—without building the institutional judgement that security requires. It can create bottlenecks that slow legitimate collaboration while offering (perhaps unsurprisingly) little resistance to sophisticated actors who understand how to work within formal rules. Worse, it can breed a culture of quiet avoidance. Researchers become reluctant to flag concerns, not because they are indifferent to risk, but because the available pathways often feel binary: do nothing, or trigger a process whose proportionality and outcome they cannot predict. This is not simply about an individual’s willingness to report. It is a design gap between the intent of reporting frameworks and the individual’s experience when using them.

The question is not whether the underlying risks are overstated. Australia's research system holds capabilities of genuine strategic value in areas such as quantum technologies, critical minerals, synthetic biology, advanced manufacturing, and defence-adjacent sciences. It is not surprising that these capabilities attract sophisticated, persistent, and well-resourced efforts at acquisition by state actors whose interests are not aligned with Australia's. The challenge is real, the stakes are material, and universities are not exempt from the landscape of strategic competition simply because they value openness. What we really should be asking, though, is whether the current approach to protection is equal to the sophistication of the threat.

This is not an argument against rules. It is an argument that rules alone cannot carry the weight we are placing on them. Research security is not a simple regulatory problem with a regulatory solution. It is a design problem, one that sits at the intersection of governance, incentives, capability, and institutional culture.

Let’s consider what "research security capability" requires. It demands that institutions understand their own research portfolios well enough to identify where risk lies—not just by country or partner, but by technology domain, data sensitivity, and talent pipeline. It requires leadership that can hold the tension between openness and protection without defaulting to either extreme. It needs systems that genuinely and proactively support researchers in making informed decisions, rather than simply transferring institutional risk onto individual academics through disclosure forms they may not fully understand.

Most critically, it requires universities to treat security as they treat research integrity or ethics: not as an external imposition, but as infrastructure that enables excellent research by maintaining the trust on which collaboration depends.

The comparison to research integrity is instructive. Two decades ago, integrity was largely managed through codes of conduct and reactive investigations. Today, the sector understands that integrity is cultural – embedded through training, supervision, institutional signals, and leadership modelling. Security is at an earlier point on the same trajectory, but the lessons are available if institutions choose to learn them.

The hidden trade-offs in the current approach deserve more honest discussion. Blunt controls on international collaboration do not simply manage risk; they can reshape research strategy, often without deliberate decision. When partnership approvals take months, when entire countries become informally off-limits, when early-career researchers learn that certain topics attract scrutiny they cannot navigate, the system is making choices about what kind of research Australia pursues. Those choices should be made consciously, by leaders accountable for both security and research quality, not by administrative friction operating beneath the waterline of institutional governance.

Leadership and accountability gaps compound the problem. In many universities, research security sits uneasily between the research portfolio, legal counsel, and corporate services—owned by everyone in principle and no one in practice. Without clear ownership at executive level, security settings drift toward risk minimisation rather than capability building. The institution protects itself from blame without asking whether it is truly becoming more secure, or merely more cautious.

For university leaders who are genuinely grappling with this as a design challenge rather than a compliance obligation, several strategic considerations are worth examining.

First, ownership matters more than oversight. Locating strategic ownership of research security with a senior executive with direct line of sight to both research strategy and security settings creates the conditions for decisions that balance protection with ambition.

Second, the institution's risk appetite should be articulated, not assumed. Have universities explicitly stated what level of research security risk they are willing to accept in pursuit of strategic objectives? Without this, every decision is made ad hoc, and researchers receive inconsistent signals about what the institution supports.

Third, capability requires investment commensurate with expectation. If institutions expect researchers to make sophisticated security judgements, they must resource the training, systems, and advisory support that make such judgements possible. Disclosure forms without accessible guidance are delegation, not design.

Finally, security intelligence should flow in both directions (within institutions, in addition to between the sector and government). Researchers hold granular, discipline-specific knowledge about where risks genuinely concentrate—knowledge that institutional frameworks would do well to capture. Governance structures that treat researchers solely as subjects of security policy, rather than contributors to security intelligence, are working with half the information available to them.

None of this means universities should relax their vigilance. The risks are real, the geopolitical environment is genuinely contested, and Australia's research system holds capabilities that warrant serious protection. But protection through design looks fundamentally different from protection through prescription. It is more demanding of leadership, more dependent on institutional self-knowledge, and ultimately more durable.

If research security is a design challenge for institutions, the next question follows naturally: what kind of researchers are we designing the system to produce? When security settings shape project selection, supervision, mobility, and career formation—often invisibly—they are not just protecting research; they are shaping it. That question belongs not to compliance officers, but to everyone responsible for the next generation of Australian research talent.
