TACC Essentials
Documentation Overview
Getting Started
Managing Your TACC Account
Notices
How to Access TACC Resources
New Accounts
Login Problems: Windows Users
Login Problems: Resolving VSCode Issues
Login Problems: Improper SSH Configuration
Table 1. TACC Account Status
Deactivated Accounts
TACC Portals
TACC Accounts Portal
TACC User Portal
References
Multi-Factor Authentication
What is MFA?
Set up MFA at TACC
Example: Pairing with an Authentication App
Table 1. MFA Apps
Example: Pairing with SMS (text) Messaging
Logging into TACC Resources
Unpairing your Device
TACC Good Conduct Guide
Notices
1. Do Not Run Jobs on the Login Nodes
VSCode Users
Dos & Don'ts on the Login Nodes
2. Do Not Stress the Shared File Systems
File System Usage Recommendations
Scratch File System Purge Policy
More File System Tips
3. Limit Input/Output (I/O) Activity
4. File Transfer Guidelines
5. Job Submission Tips
Common User Issues
Nearing or Exceeding File System Quotas
Disrespecting the Login Nodes
Allowing Compute Jobs to Access the Wrong File System
Running I/O Intensive Sessions
Avoid Hardcoding Environment Variables in your Startup Files
Automatic Conda Startup
Admin: Multiple User Accounts
Large, Unmanaged File Transfers
Do Not Store Important Data in $SCRATCH
Software at TACC
System-Installed
Resource-Specific Build Instructions
Building Third-Party Software
Help
Data Transfer Guides
Data Transfer at TACC
Choosing a Transfer Method
Table 1. Choosing a Transfer Method
Common Workflows
1. Between your laptop and TACC resources
2. Between TACC HPC Resources
3. Between Institutions
4. Backup/Transfer files between TACC HPC and TACC storage resources
Third-Party Storage
UTBox and Dropbox
S3-Compatible Storage
SSH Tools
SSH Command-Line Examples
Determining Paths
Using scp
Using rsync
Using sftp
More Reading
Example: Cyberduck GUI
Globus
Using Globus
Step 1. Retrieve your Unique ePPN.
Step 2. Associate your ePPN with your TACC Account.
Using the Globus File Manager
Key Concepts
Endpoints
Paths
Recommended Practices
Monitoring Transfers
Performance Expectations
A Note on End-to-End Checksums
HPC User Guides
Chameleon
Corral
System Overview
Consulting and Data Management Plans
Object/Cloud Storage on Corral
System Access
Basic File System Access from Lonestar6 and Other TACC systems
Usage Policies
"Category 1", HIPAA-PHI, and other restricted data types
Quotas
Data Retention Policies
Transferring your Files to Corral
Command-line data transfer
Staging to and from Lonestar6 File Systems
Transferring Using Cyberduck
Cyberduck Configuration and Use
Managing Files & Permissions
Unix Permissions
Managing Files and Permissions using ACLs
Managing Permissions with chmod
Snapshots and File Retrieval
References
Frontera
Notices
Introduction
Quickstart
Account Administration
Setting up Your Account
Check your Allocation Status
Multi-Factor Authentication
Access the System
Secure Shell (SSH)
Configuring Your Account
Linux Shell
Environment Variables
Account-Level Diagnostics
Using Modules to Manage your Environment
Crontabs
TACC Tips
Frontera User Portal
System Architecture
Cascade Lake (CLX) Compute Nodes
Table 1. CLX Specifications
Large Memory Nodes
Table 2. Large Memory Nodes
GPU Nodes
Table 3. Frontera GPU node specifications
Login Nodes
Network
Managing Files
File Systems
Table 4a. File Systems
Table 4b. Scratch File Systems
Important Notice about /scratch1
Scratch File System Purge Policy
Navigating the Shared File Systems
Figure 3. Stockyard File System
Table 5. Built-in Account Level Aliases
Striping Large Files
Transferring your Files
Windows Users
SSH Utilities: scp & rsync
Transferring Files with scp
Transferring Files with rsync
Sharing Files with Collaborators
Launching Applications
One Serial Application
One Multi-Threaded Application
One MPI Application
One Hybrid (MPI+Threads) Application
More Than One Serial Application in the Same Job
MPI Applications One at a Time
More than One MPI Application Running Concurrently
More than One OpenMP Application Running Concurrently
Running Jobs
Job Accounting
New Charging Policy
Requesting Resources
Frontera Production Queues
Table 6. Frontera Production Queues
Accessing the Compute Nodes
Figure 2. Login and Compute Nodes
Submitting Batch Jobs with sbatch
Table 7. Common sbatch Options
Interactive Sessions with idev and srun
Interactive Sessions using SSH
Slurm Environment Variables
Sample Job Scripts
Job Management
Monitoring Queue Status with sinfo and qlimits
TACC's qlimits command
Slurm's sinfo command
Monitoring Job Status
Slurm's squeue command
TACC's showq utility
Dependent Jobs using sbatch
Building Software
Basics of Building Software
Intel Compilers
GNU Compilers
Compiling and Linking as Separate Steps
Include and Library Paths
Compiling and Linking MPI Programs
Intel Math Kernel Library (MKL)
MKL with Intel C, C++, and Fortran Compilers
MKL with GNU C, C++, and Fortran Compilers
Using MKL as BLAS/LAPACK with Third-Party Software
Using MKL as BLAS/LAPACK with TACC's MATLAB, Python, and R Modules
Controlling Threading in MKL
Using ScaLAPACK, Cluster FFT, and Other MKL Cluster Capabilities
Building for Performance on Frontera
Recommended Compiler
Architecture-Specific Flags
Programming and Performance
Programming and Performance: General
Timing and Profiling
Data Locality
Vectorization
Learning More
Programming and Performance: CLX
File Operations: I/O Performance
Machine Learning
Install PyTorch
Testing PyTorch Installation
Single-Node
Multi-Node
Visualization and VNC Sessions
Remote Desktop Access
Running Applications on the Remote Desktop
Running Parallel Applications from the Desktop
Running OpenGL/X Applications On The Desktop
Parallel VisIt on Frontera
Preparing Data for Parallel VisIt
Parallel ParaView on Frontera
Jupyter
Launch a Session
References
Cloud Services Integration
Google Cloud Platform
Request Access
Storage basics
Amazon Web Services (AWS)
Request Access
Log In to the Console
Add MFA
Add CLI and API access key
All Set
Containers
Help Desk
References
Lonestar6
Introduction
Allocations
System Architecture
Compute Nodes
Table 1. Compute Node Specifications
Login Nodes
vm-small Queue Nodes
Table 1.5. vm-small Compute Node Specifications
GPU Nodes
Table 2. A100 GPU Node Specifications
Table 2.5. H100 GPU Node Specifications
Network
Managing Files
Table 3. File Systems
Scratch File System Purge Policy
Navigating the Shared File Systems
Table 4. Built-in Account Level Aliases
Striping Large Files
Transferring your Files
Transferring with scp
Transferring with rsync
Sharing Files with Collaborators
Access the System
Secure Shell (SSH)
Account Administration
Check your Allocation Status
Multi-Factor Authentication
Linux Shell
Environment Variables
Account-Level Diagnostics
Using Modules to Manage your Environment
Crontabs
TACC Tips
Building Software
Basics of Building Software
Intel Compilers
GNU Compilers
Compiling and Linking as Separate Steps
Include and Library Paths
Compiling and Linking MPI Programs
Intel Math Kernel Library (MKL)
MKL with Intel C, C++, and Fortran Compilers
MKL with GNU C, C++, and Fortran Compilers
Using MKL as BLAS/LAPACK with Third-Party Software
Using MKL as BLAS/LAPACK with TACC's MATLAB, Python, and R Modules
Controlling Threading in MKL
Using ScaLAPACK, Cluster FFT, and Other MKL Cluster Capabilities
Launching Applications
One Serial Application
Parametric Sweep / HTC jobs
One Multi-Threaded Application
One MPI Application
One Hybrid (MPI+Threads) Application
More Than One Serial Application in the Same Job
MPI Applications One at a Time
More Than One MPI Application Running Concurrently
More than One OpenMP Application Running Concurrently
Running Jobs
Job Accounting
New Charging Policy
Production Queues
Table 5. Production Queues
Sample Job Scripts
Job Management
Monitoring Queue Status
TACC's qlimits command
Slurm's sinfo command
Monitoring Job Status
Slurm's squeue command
TACC's showq utility
Dependent Jobs using sbatch
Other Job Management Commands
Machine Learning on LS6
Environment Setup
PyTorch
Testing PyTorch Installation
Single-Node
Multi-Node
Transformers with Accelerate
Set up Environment for Transformers
Testing Transformers Installation
Single-Node
Multi-Node
Visualization and VNC Sessions
Remote Desktop Access
Applications on the Remote Desktop
Parallel Applications from the Desktop
OpenGL/X Applications On The Desktop
Parallel VisIt on Lonestar6
Preparing Data for Parallel VisIt
Parallel ParaView on Lonestar6
Help Desk
Jetstream2
Ranch
Notices
Introduction
Intended Use
System Configuration
System Access
Ranch Environment Variables
Accessing Files from Within Running Programs
Organizing Your Data
Ranch Quotas
Monitor your Disk Usage and File Counts
Ranch "Project" Storage
Transferring Data
Manipulating Files within Ranch
Data Transfer Methods
Secure Copy with scp Command
Remote Sync with rsync command
Large Data Usage and Transfers
Examples
Good Conduct on Ranch
In Depth: How it Works
Help Desk
Ranch Migration 2026
Notices
Technical Information
Project Spaces
Curating Your Data
Examining Usage on Old Ranch
Examining Usage on New Ranch
Important Dates
Data Migration Directions
Large Data Migration Guidelines
Ranch Quotas and Policies
Stampede3
Notices
Introduction
Allocations
System Architecture
Ice Lake Large Memory Nodes
Table 1. ICX NVDIMM Specifications
Ice Lake Compute Nodes
Table 2. ICX Specifications
Sapphire Rapids Compute Nodes
Table 3. SPR Specifications
Skylake Compute Nodes
Table 4. SKX Specifications
GPU Nodes
H100 Nodes
Ponte Vecchio Compute Nodes
Login Nodes
Network
File Systems
Table 6. File Systems
Scratch File System Purge Policy
Accessing the System
Secure Shell (SSH)
Account Administration
Allocation Status
Linux Shell
Diagnostics
Environment Variables
Using Modules
Crontabs
TACC Tips
Managing Your Files
Navigating the Shared File Systems
Table 7. Built-in Account Level Aliases
Sharing Files with Collaborators
Running Jobs
Job Accounting
New Charging Policy
Slurm Partitions (Queues)
Table 8. Production Queues
Submitting Batch Jobs with sbatch
Table 9. Common sbatch Options
Launching Applications
One Serial Application
One Multi-Threaded Application
One MPI Application
One Hybrid (MPI+Threads) Application
More Than One Serial Application in the Same Job
MPI Applications - Consecutive
MPI Applications - Concurrent
More than One OpenMP Application Running Concurrently
Interactive Sessions
Interactive Sessions with idev and srun
Interactive Sessions using ssh
Slurm Environment Variables
Building Software
Compilers
Intel Compilers
GNU Compilers
Compiling and Linking
Include and Library Paths
MPI Programs
Performance
Compiler Options
Architecture-Specific Flags
Intel oneAPI Math Kernel Library (oneMKL)
oneMKL with Intel C, C++, and Fortran Compilers
oneMKL with GNU C, C++, and Fortran Compilers
Using oneMKL as BLAS/LAPACK with Third-Party Software
Using oneMKL as BLAS/LAPACK with TACC's MATLAB, Python, and R Modules
Controlling Threading in oneMKL
Using ScaLAPACK, Cluster FFT, and Other oneMKL Cluster Capabilities
Job Scripts
SPR Nodes
ICX Nodes
SKX Nodes
Job Management
Monitoring Queue Status
TACC's qlimits command
Slurm's sinfo command
Monitoring Job Status
Slurm's squeue command
Queue Status Meanings
Table 10. Pending Job Reasons
TACC's showq utility
Dependent Jobs using sbatch
Other Job Management Commands
Programming and Performance
Timing and Profiling
Data Locality
Vectorization
Programming and Performance: SPR, ICX, and SKX
File Operations: I/O Performance
Machine Learning
Python
Jupyter Notebooks
Help Desk
References
Stallion
System Configuration
File Systems
Computing Environment
Applications
DisplayCluster
Scalable Adaptive Graphics Environment (SAGE)
Launching SAGE
Viewing images with SAGE
Playing animations with SAGE
References
Vista
Notices
Introduction
System Architecture
Vista Topology
Grace Grace Compute Nodes
Table 1. GG Specifications
Grace Hopper Compute Nodes
Table 2. GH Specifications
Login Nodes
Network
File Systems
Table 3. File Systems
Scratch File System Purge Policy
Accessing the System
Secure Shell (SSH)
Account Administration
Allocation Status
Linux Shell
Account-Level Diagnostics
Environment Variables
Using Modules
Crontabs
TACC Tips
Running Jobs
Slurm Partitions (Queues)
Table 4. Production Queues
Job Accounting
New Charging Policy
Submitting Batch Jobs with sbatch
Table 5. Common sbatch Options
Launching Applications
One Serial Application
One Multi-Threaded Application
One MPI Application
One Hybrid (MPI+Threads) Application
MPI Applications - Consecutive
MPI Applications - Concurrent
More than One OpenMP Application Running Concurrently
Interactive Sessions
Interactive Sessions with idev and srun
Interactive Sessions using ssh
Slurm Environment Variables
Job Management
Monitoring Queue Status
TACC's qlimits command
Slurm's sinfo command
Monitoring Job Status
Slurm's squeue command
Job Status
TACC's showq utility
Dependent Jobs using sbatch
Other Job Management Commands
NVIDIA MPS
Sample Job Script
Notes on Performance
Machine Learning on Vista
Environment Setup
Running PyTorch (Single Node)
Installing PyTorch
Testing PyTorch Installation
Running PyTorch (Multi-node)
Setting Up Transformers with Accelerate
Set up Environment for Transformers and Accelerate
Testing Transformers and Accelerate Installation
Single-Node
Multi-Node
Building Software
NVIDIA Compilers
GNU Compilers
Compiling and Linking
Include and Library Paths
MPI Programs
Building for Performance
Compiler Options
Architecture-Specific Flags
NVIDIA Performance Libraries (NVPL)
NVIDIA Documentation
Compiler Examples
Using NVPL as BLAS/LAPACK with Third-Party Software
Controlling Threading in NVPL
Using NVPL with MATLAB, Python, and R
Help Desk
References
Software Packages
AlphaFold2
Installations at TACC
Table 1. Installations at TACC
Running AlphaFold2
Structure Prediction from Single Sequence
Table 2. AlphaFold2 Parameter Settings
Batch Structure Predictions from Independent Sequences
1. Prepare the commandlines file
2. Create the af2_launcher.py script
3. Set up the SLURM job script
Structure Prediction from Multiple Sequences (Multimer)
References
AlphaFold3
Installations at TACC
Table 1. Installations at TACC
Access
Running AlphaFold3
Directory Structure
Input Preparation
SLURM Job Script Preparation
Table 2. Required Variables to Set in Job Script
Optimizing Your AlphaFold3 Run
Enabling Unified Memory (GPU Spill to Host RAM)
Running MSA and Inference Separately
References
ANSYS
Licenses
Installations
Table 1. Installations at TACC
Running ANSYS
Interactive Mode
Batch Mode
Table 2. Binaries Location
Table 3. User Guides - Running Jobs
References
CD Tools
Using CD Tools
1. Initialize CD Tools Environment Variable
2. Distribute Files to Each Node's /tmp Space
3. Collect your Output Files
Sample Job Script
Notes
References
Gaussian
Licenses
UT Austin Students, Staff, and Faculty
Other Academic Users
Running Gaussian
Sample Job Script
References
GROMACS
Installations
Batch Jobs
Job Scripts
Stampede3
CPU
GPU
Frontera
CPU
GPU
Lonestar6
CPU
GPU
Vista
CPU
GPU
References
idev
How it works
Accessing Nodes Interactively
Examples
Exiting idev
LAMMPS
Installations
Job Scripts
Sample: Stampede3
Sample: Frontera
Sample: Lonestar6
Example Job-Script Invocations
Running Interactively
References
Lmod
MATLAB
Licenses
Interactive Mode
DCV session
Batch Mode
Sample Frontera Job Script
Parallel MATLAB
MATLAB parfor
MATLAB matlabpool
MATLAB Toolboxes
Mathworks References
Help
References
NAMD
Installations
Vista
GH 1 Task per Node
GG 4 Tasks per Node
GG 8 Tasks per Node
Frontera
CLX 4 Tasks per Node
CLX 8 Tasks per Node
GPU 1 Task per Node
Stampede3
SPR 4 Tasks per Node
SPR 8 Tasks per Node
ICX 4 Tasks per Node
ICX 8 Tasks per Node
SKX 4 Tasks per Node
SKX 8 Tasks per Node
Lonestar6
CPU Nodes: 4 Tasks per Node
GPU Nodes: 1 Task per Node
References
OpenFOAM
Environment Setup
Run Tutorials
References
ParaView
Installations
Table 1. ParaView Modules per TACC Resource
Interactive ParaView: A Compute-Node Desktop
Running ParaView In Parallel
Notes on ParaView in Parallel
Running ParaView In Batch Mode
Job Script
Visualization Team Notes
References
PyLauncher
What is PyLauncher
Installations
Basic setup
Output files
Launcher types
Multi-Threaded
MPI
GPU launcher
Submit launcher
Sample Job Setup
Slurm Job Script File on Frontera
PyLauncher File
Command Lines File
Debugging and tracing
Advanced PyLauncher usage
PyLauncher in an idev Session
Restart File
Submit Launcher
Debugging PyLauncher Output
Parameters
References
Quantum Espresso
Installations
Running QE
Sample Job Script: Lonestar6
Sample Job Script: Frontera
Sample Job Script: Stampede3
Sample Job Script: Vista
References
REMORA
Running Remora
Interactively
Remora in Job Scripts
Remora Options
Remora Output
References
TAU
Using TAU
1. Instrumenting your Code
2. Running
3. Process program output
References
VASP
VASP Licenses
Installations
Running VASP Jobs
Sample Job Script: VASP on Vista
Sample Job Script: VASP on Stampede3
Sample Job Script: VASP on Frontera
Sample Job Script: VASP on Lonestar6
References
VisIt
Installations
Table 1. VisIt Modules per TACC Resource
Table 2. Running VisIt
Notes
Preparing Data for Parallel VisIt
References
TACC Tutorials
Access Control Lists
Viewing ACLs
Setting ACLs from the Command-Line
Setting Complex ACLs
Default ACLs
Bash Quick Start Guide
Troubleshooting
Reference
BLAS/LAPACK
Implementations
MKL
BLIS
Reference BLAS/LAPACK
Other BLAS implementations
NVIDIA Performance Libraries
Containers at TACC
DDT Debugger
Set up Debugging Environment
Running DDT
DDT with Reverse Connect
References
Managing Group Permissions
Understanding the permissions and owners of a file
Recursive permissions
Controlling the active or current group
Managing all groups and requesting default changes
References
Managing I/O
What is I/O?
Recommended File Systems Usage
Table 1. TACC File System Usage Recommendations
Best Practices for Minimizing I/O
Use Each Compute Node's /tmp Storage Space
Table 2. TACC Resources Compute Node (/tmp) Storage
Run Jobs Out of Each Resource's Scratch File System
Avoid Writing One File Per Process
Avoid Repeated Reading/Writing to the Same File
Monitor Your File System Quotas
Manipulate Data in Memory, not on Disk
Stripe Large Files on $SCRATCH and $WORK
Govern I/O with OOOPS
Functions
How to Use OOOPS
Example: Single-Node Job on $SCRATCH
Example: Multi-Node Job on $SCRATCH
I/O Warning
Python I/O Management
Tracking Job I/O
I/O Do's and Don'ts
MAP
Set up Profiling Environment
Running MAP
MAP with Reverse Connect
References
Remote Desktop Access
Remote Desktop Methods
TACC Analysis Portal
DCV & VNC at TACC
Table 1. Job Scripts
Start a DCV Session
Start a VNC Session
Sample VNC session
Window 1
Window 2
Running Apps on the Desktop
Sharing Project Files
TACC, UNIX groups and Project Numbers
Determine Project's GID
Determine your Default GID
Create a Shared Project Workspace
References
TACC Analysis Portal (TAP)
Accessing the Portal
Job Management
Submitting a Job using TAP
Ending a Submitted Job
Resubmitting a Past Job
Utilities
Obtaining TACC Account Status
Setting a Remote Desktop to Full Screen Mode
Troubleshooting
No Allocation Available
Job Submission returns PENDING
Job Submission returns ERROR
Jupyter: Managing Modules
For Stampede3 Users