High Performance Computing (Schooner)

Highlights

  • 766 compute nodes
  • ~902.5 TFLOPS (trillions of calculations per second)
  • 18,408 CPU cores
  • ~57 TB RAM
  • ~425 TB usable storage
  • Two networks (Infiniband and Ethernet)
     

Detailed Hardware Specifications:

OSCER Supercomputer Compute Nodes (Current)

Qty | CPUs                                       | Cores  | RAM (GB)
285 | dual Intel Xeon Haswell E5-2650 v3         | 2 x 10 | 32
142 | dual Intel Xeon Haswell E5-2670 v3         | 2 x 12 | 64
1   | dual Intel Xeon Haswell E5-2650 v3         | 2 x 10 | 96
5   | dual Intel Xeon Haswell E5-2670 v3         | 2 x 12 | 128
72  | dual Intel Xeon Haswell E5-2660 v3         | 2 x 10 | 32
28  | dual Intel Xeon Broadwell E5-2650 v4       | 2 x 12 | 64
6   | dual Intel Xeon Haswell E5-2650L v3        | 2 x 12 | 64
1   | dual Intel Xeon Broadwell E5-2650 v4       | 2 x 12 | 32
6   | dual Intel Xeon Haswell E5-2630 v3         | 2 x 8  | 128
7   | dual Intel Xeon Haswell E5-2640 v3         | 2 x 8  | 96
12  | dual Intel Xeon Skylake Gold 6140          | 2 x 18 | 96
1   | dual Intel Xeon Skylake Gold 6152          | 2 x 22 | 384
6   | dual Intel Xeon Cascade Lake Gold 6230R    | 2 x 26 | 96
5   | dual Intel Xeon Skylake Gold 6132          | 2 x 14 | 96
30  | dual Intel Xeon Cascade Lake Gold 6230     | 2 x 20 | 96
24  | dual Intel Xeon Cascade Lake Gold 6230     | 2 x 20 | 192
3   | dual Intel Xeon Ice Lake Gold 6330         | 2 x 28 | 128
1   | dual Intel Xeon Ice Lake Gold 6330         | 2 x 28 | 256
32  | dual Intel Xeon Ice Lake Gold 6338         | 2 x 32 | 128
1   | quad Intel Xeon Haswell E7-4809 v3         | 4 x 8  | 3072
1   | quad Intel Xeon Haswell E7-4809 v3         | 4 x 8  | 1024
1   | quad Intel Xeon Haswell E7-4830 v4         | 4 x 14 | 2048
1   | quad Intel Xeon Cascade Lake 6230          | 4 x 20 | 1536
18  | dual AMD EPYC Rome 7452                    | 2 x 32 | 256
1   | dual Intel Xeon Ice Lake 8352S             | 2 x 32 | 2048
2   | dual AMD EPYC Milan 7543                   | 2 x 32 | 512
1   | dual AMD EPYC Milan 7543                   | 2 x 32 | 1024
56  | dual Sandy Bridge E5-2650                  | 2 x 8  | 32
15  | dual Sandy Bridge E5-2650                  | 2 x 8  | 64
5   | single Intel Xeon Phi Knights Landing 7210 | 1 x 64 | 48
3   | single Intel Xeon Phi Knights Landing 7230 | 1 x 64 | 48
  • Additional capacity has been purchased and will soon be deployed: 34 nodes, 2,160 cores, 8.5 TB RAM and 151.9 TFLOPS peak, for a total of 800 compute nodes, 20,568 CPU cores, ~66 TB RAM and 1,054.4 TFLOPS peak (that is, just over 1 PFLOPS).

OSCER Supercomputer Compute Nodes (Purchased, to be deployed in 2022)

Qty | CPUs                          | RAM (GB)
15  | dual Intel Xeon Ice Lake 6338 | 128
5   | dual Intel Xeon Ice Lake 6338 | 256
4   | dual Intel Xeon Ice Lake 6338 | 512
3   | dual AMD EPYC Rome 7452       | 128
4   | dual AMD EPYC Rome 7452       | 256
2   | dual AMD EPYC Milan 7513      | 512
1   | dual AMD EPYC Rome 7352       | 1024
  • Accelerators (Graphics Processing Units)
    • 2 NVIDIA V100 GPU cards (owned by a researcher).
    • 18 NVIDIA A100 GPU cards (6 owned by OSCER, 12 owned by researchers).
    • 22 A100 GPU cards have been ordered (8 owned by OSCER, 14 owned by researchers), for a total of 40 A100 GPU cards (14 owned by OSCER, 26 owned by researchers)

 

OSCER has been awarded the following grant:

National Science Foundation grant # OAC-2201561

"CC* Compute: OneOklahoma Cyberinfrastructure Initiative Research Accelerator for Machine Learning (OneOCII-RAML)"

We anticipate that this grant will fund 15 - 25 NVIDIA H100 GPU cards, plus the servers they reside in.

  • Storage
    • High performance parallel filesystem, global user-accessible: DataDirect Networks Exascaler SFX7700X, 70 SATA 6 TB disk drives, ~309 TB usable
    • Lower performance servers full of disk drives, global user-accessible: ~150 TB usable
  • Networks
    • Infiniband: Mellanox FDR10 40 Gbps, 3:1 oversubscribed (13.33 Gbps)
      NOTE: 76 compute nodes don’t have Infiniband, at the owner’s discretion.
    • Ethernet: Gigabit Ethernet (GigE) to each compute node, uplinked to a top-of-rack GigE switch, and each GigE switch uplinked at 2 × 10 Gbps Ethernet (10GE) to a pair of 10GE core switches.
  • Operating system
    • CentOS 8
    • Batch scheduler is SLURM (a minimal example batch script appears after this list)
    • Compiler families include Intel, Portland Group (now part of NVIDIA) and GNU, as well as the NAG Fortran compiler.
  • Schooner is connected to Internet2 and to Internet2’s 100 Gbps national research backbone (Advanced Layer 2 Services)
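
Below is a minimal sketch of submitting a batch job to SLURM. The partition name, resource requests, and program name are placeholders, not OSCER-specific settings; see the help pages in our Support section for the actual partition names and limits. The Python wrapper is only to keep the example self-contained; in practice you would normally save the batch script to a file yourself and run "sbatch myjob.sbatch" from a login node.

# Minimal sketch: compose a SLURM batch script and submit it with sbatch.
# All names below (partition, executable, output file) are placeholders,
# NOT OSCER-specific values.
import subprocess
import tempfile

job_script = """#!/bin/bash
# Placeholder partition name -- the real partition names are site-specific.
#SBATCH --partition=normal
#SBATCH --job-name=example_job
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
# Wall-clock limit in HH:MM:SS; %j in the output filename expands to the job ID.
#SBATCH --time=01:00:00
#SBATCH --output=example_%j.out

# Placeholder executable: replace with your own program or commands.
./my_program
"""

# Write the script to a temporary file and hand it to sbatch.
with tempfile.NamedTemporaryFile("w", suffix=".sbatch", delete=False) as f:
    f.write(job_script)
    script_path = f.name

subprocess.run(["sbatch", script_path], check=True)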
     

Interested In Using Schooner?

  • Request an OSCER account (new OSCER users only)
  • Contact us at support@oscer.ou.edu for an initial consultation, or if you have questions regarding your specific use of our HPC systems
  • Check out the help pages in our Support section for detailed information and tutorials
     

Purchasing "Condominium" Compute Nodes for Your Research Team

Under OSCER's "condominium" compute node plan, you can purchase one or more compute node(s) of your own, at any time, to be added to OSCER's supercomputer.

(We use the term "condominium" as an analogy to a condominium apartment complex, where some company owns the complex, but each resident owns their own apartment.)

NOTE: If you're at an institution other than OU, we CANNOT guarantee to offer you the condominium option, or, if we can, there might be additional charges.
 

How to Purchase Condominium Compute Nodes

You MUST work with OSCER to get the quote(s) for any condominium compute node purchase(s), because your condominium compute node(s) MUST be compatible with the rest of OSCER's supercomputer, and MUST be shipped to the correct address.

You can buy any number of condominium compute nodes at any time, with OSCER's help:

support@oscer.ou.edu

OSCER will work with you on the details of the hardware configuration, and to get a formal quote from our current vendor, Dell.

OSCER offers a variety of CPU options within a few Intel and AMD x86 CPU families, and a variety of RAM capacities.

See Condominium Compute Node Options, below.

You have to buy the compute node (server computer) itself, plus a few network cables.
 

Who Can Use Your Condominium Compute Node(s)?

Once your purchase is complete and your condominium compute node(s) arrive and are put into production, you decide who can run on them, typically via one or more batch queues (SLURM partitions) that OSCER creates for you.

For example, it could be just your research team (or even a subset of your team), or your team and one or more other team(s) that you designate, etc.
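
As a rough sketch, once OSCER has created a batch queue (SLURM partition) for your condominium node(s), the people you designate simply point their jobs at that partition. The partition name "my_condo" and the script name "my_job.sbatch" below are hypothetical; the real names are assigned when your nodes go into production.

# Sketch: directing a job to a hypothetical condominium partition.
# "my_condo" and "my_job.sbatch" are placeholders, not real OSCER names.
import subprocess

# List the partitions (batch queues) visible to your account.
subprocess.run(["sinfo", "--summarize"], check=True)

# Submit an existing batch script to the condominium partition; the
# --partition flag overrides any partition named inside the script itself.
subprocess.run(["sbatch", "--partition=my_condo", "my_job.sbatch"], check=True)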

No Additional Charges Beyond Hardware Purchase

You pay for your condominium compute node hardware, including cables. There is NO ADDITIONAL CHARGE beyond purchasing the compute node hardware.

OSCER deploys your condominium compute node(s) and maintains them as part of OSCER's supercomputer, at NO ADDITIONAL CHARGE.

How Long Will a Condominium Compute Node Stay in Production?

A condominium compute node will stay in production for the lifetime of the supercomputer you buy it for, PLUS the lifetime of the immediate subsequent supercomputer.

Currently, that means OSCER's emerging new supercomputer, Sooner, plus its immediate successor, Boomer.

So, probably 6 to 8 years total, give or take.

NOTE: Once your initial extended warranty expires, either:

(a) you can buy annual support year by year for your condominium compute node(s), or

(b) you can buy replacement components when any components in your condominium compute node(s) fail, or

(c) OSCER will let your condominium compute node(s) die when they die.

Condominium Compute Node Options

(1) Condominium Compute Node

(1a) R650, Intel Xeon Ice Lake CPUs, DDR4-3200 RAM
— Several CPU options (6338 32-core recommended)
— 128 GB or 256 GB or 512 GB RAM
— Common configuration (below)

(1b) R6525, AMD EPYC Rome or Milan CPUs, DDR4-3200 RAM
— Several CPU options (7513 32-core recommended)
— 128 GB or 256 GB or 512 GB RAM
— Common configuration (below)

(1c) R660, Intel Xeon Sapphire Rapids CPUs, DDR5-4800 RAM
— Several CPU options (6430 32-core recommended)
— 256 GB or 512 GB RAM
— Common configuration (below)

(1d) R6625, AMD EPYC Genoa CPUs, DDR5-4800 RAM
— Several CPU options (9454 48-core recommended)
— 384 GB or 768 GB RAM
— Common configuration (below)

Common Configuration
— Disk: single small drive for operating system and local /lscratch
— Network, low latency: Infiniband HDR100 100 Gbps 1-port w/1 cable
— Network, management: Gigabit Ethernet 2-port w/1 cable
— Power supply: single non-redundant
— Warranty: Basic hardware replacement, 5 years recommended

(2) Condominium Large RAM node

(2a) R650, configured like (1a), above, EXCEPT:
— 1 TB or 2 TB or 4 TB or 8 TB RAM
— Common configuration (below)

(2b) R6525, configured like (1b), above, EXCEPT:
— 1 TB or 2 TB or 4 TB RAM
— Common configuration (below)

(2c) R660, configured like (1c), above, EXCEPT:
— 1 TB or 2 TB or 4 TB or 8 TB RAM
— Common configuration (below)

(2d) R6625, configured like (1d), above, EXCEPT:
— 1.5 TB or 3 TB or 6 TB RAM
— Common configuration (below)

Common configuration
— Disk: dual disk drives mirrored (RAID1)
— Network, low latency: Infiniband HDR100 100 Gbps 1-port w/1 cable
— Network, Ethernet: 25GE 2-port w/cables
— Network, management: GigE 2-port w/cables
— Power supplies: dual redundant
— Warranty: Basic hardware replacement, 5 years recommended

(3) Condominium Quad CPU Node

R860, configured like (2c), above, EXCEPT:
— 4 CPU chips (6430H 32-core recommended)
— 1 TB or 2 TB or 4 TB or 8 TB or 16 TB RAM

(4) Condominium GPU node

NVIDIA A100 and H100 GPU cards now have a delivery time of approximately a year, so OSCER currently DOESN'T recommend buying them.

Instead, please consider the following options:

(4a) Dual NVIDIA RTX 6000 Ada Generation WITHOUT NVlink
Precision 7960 rackmount workstation, configured like (2c), above, EXCEPT:
— 256 GB RAM
— dual NVIDIA RTX 6000 Ada Generation GPUs (48 GB)

(4b) Dual NVIDIA L40 WITHOUT NVlink
R7525 with dual NVIDIA L40 WITHOUT NVlink, configured like (2b), above, EXCEPT:
— 256 GB RAM
— dual NVIDIA L40 GPUs (48 GB)

(4c) Dual NVIDIA L40S WITHOUT NVlink
R750xa with dual NVIDIA L40S WITHOUT NVlink, configured like (2a), above, EXCEPT:
— 256 GB RAM
— dual NVIDIA L40S GPUs (48 GB)

(4d) Dual NVIDIA A100 WITH NVlink (600 GB/sec GPU-to-GPU)
R750xa, configured like (2a), above, EXCEPT:
— 512 GB RAM (for A100 80 GB) or 256 GB RAM (for A100 40 GB)
— dual NVIDIA A100 GPUs (80 GB or 40 GB)
— NVlink (600 GB/sec GPU-to-GPU)

(4e) Dual NVIDIA A100 WITHOUT NVlink
R7525, configured like (2b), above, EXCEPT:
— 512 GB RAM (for A100 80 GB) or 256 GB RAM (for A100 40 GB)
— dual NVIDIA A100 GPUs (80 GB or 40 GB)

(4f) Dual NVIDIA H100 WITH NVlink (900 GB/sec GPU-to-GPU)
R760xa, configured like (2c), above, EXCEPT:
— 512 GB RAM
— dual NVIDIA H100 GPUs (80 GB)
— NVlink (900 GB/sec GPU-to-GPU)

(4g) Dual NVIDIA H100 WITHOUT NVlink
R760, configured like (2c), above, EXCEPT:
— 512 GB RAM
— dual NVIDIA H100 GPUs (80 GB)


How to Buy Condominium Compute Node(s)

You can buy any number of condominium compute nodes at any time, with OSCER's help.
Please contact OSCER at:

support@oscer.ou.edu