Nuclear defence research facility deploys HPC network

The Atomic Weapons Establishment (AWE) has been central to the defence of the UK for more than 50 years through its provision and maintenance of the warheads for Trident, the country’s nuclear deterrent. Sponsored by the Ministry of Defence (MOD), AWE sites employ around 4,500 staff and over 2,000 contractors, from scientists and engineers to safety specialists and administrative experts.

The Challenge

Due to the large scale of its operations, AWE needed a high-performance computing (HPC) network to support the transformation of its sophisticated scientific and technological capabilities. The network needed to migrate data reliably across multiple networks, including administrative networks handling day-to-day processes and highly secure research networks connected to AWE’s supercomputers.

Alongside an HPC network, AWE needed a trusted partner who could adhere to its complex site security restrictions and data protection requirements. We were pleased to secure the contract with AWE in 2013, having demonstrated that Ampito could adapt to these needs.

The Solution

We began our partnership with AWE by underwriting the HPC design, building the entire network by proxy at a secure off-site location. To offer the connectivity, performance, and scalability needed to support the ever-evolving requirements of the AWE network and its users, we created an integrated ecosystem using:

  • Arista Ethernet-switching technology at the core of the HPC, running Arista’s Linux-based operating system, EOS
  • 40GbE Arista switch technology [subsequently upgraded to multiple 100GbE links]
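As an illustration only (AWE’s actual configuration is not public), a high-throughput port on an Arista EOS switch is typically configured along these lines, with jumbo frames enabled to suit large data migrations:

```
interface Ethernet1/1
   description HPC core uplink (illustrative example)
   mtu 9214
   speed forced 40gfull
   no switchport
   ip address 10.0.0.1/31
```

The interface name, description, and addressing here are hypothetical; only the command forms reflect standard EOS syntax.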

We also designed professional services packages based on AWE’s bespoke hardware environment, including collaborating with the team to deliver technology aftercare off-site. This allows AWE to resolve issues without returning hardware (which could still contain residual data) or paying to have it destroyed.

The Result

As a result of our partnership, AWE now operates with the visibility and data migration capabilities key to its project planning. In fact, it became one of the first organisations in Europe to deploy 40GbE Arista switch technology at such scale. AWE employees now also enjoy a much smoother experience running physics-based simulations, with a low-latency, highly scalable network that meets the increasing throughput requirements of the HPC environment.

“Based on the commitments made to the business, working with Ampito has meant we’ve achieved what we said we’d do. Our requirements have certainly been met.”

The Future

Having partnered with AWE for 10 years, we’re proud to continue meeting its complex and non-standard requirements. Our ongoing relationship is rooted in our ability to adapt to the research facility’s technical and security needs as they evolve, while delivering HPC network support both on- and off-site. Collaboration has always been fundamental to our partnership, as it will continue to be in the future.

“The kind of technology and the lifecycle of equipment required in HPC environments is completely different to how standard corporate IT networks are managed, where you want to minimise risk and introduce stability and modularity. In contrast, HPC is at the forefront of technology, where the pace of change is faster, with shorter refresh cycles and a willingness to install cutting-edge equipment in a bid to increase the performance of calculations, which can take weeks to complete and generate output files tens of terabytes in size.”

Neil McMahon, AWE’s Deputy Head of High-Performance Computing

Business Challenge

  • Operating with multiple networks and thousands of staff across sites.
  • Complex technical and security requirements both on- and off-site.

Technical Challenge

  • Needed to migrate data sets reliably and securely at scale.
  • Aftercare support needed off-site to prevent data leaks.

Technical Background

  • A bespoke hardware environment.
  • Ever-evolving requirements of networks and users.

The Results

  • Low-latency, scalable HPC network for better data migration.
  • Improved experience for users, including researchers.
  • 10+ year partnership with ongoing support.