
What is XSEDE?

The Extreme Science and Engineering Discovery Environment (XSEDE) is a powerful collection of virtual resources and a way for scientists and researchers to interactively share resources and expertise. XSEDE provides an SSO (Single Sign-On) hub for access to several national computing clusters. You must have an XSEDE account in order to use XSEDE's resources.

Using XSEDE’s resources

The easiest way to access XSEDE's resources is through its SSO (Single Sign-On) login hub. Once you sign in to this hub, you can access the clusters that XSEDE provides (assuming your XSEDE account has the proper clearance) without additional credentials. Using the SSO is easy: connect to it with any SSH client you like (Linux and Mac users can use their terminal; Windows users can download MobaXterm).

DUO authentication

To use the SSO and other features, you will need to enroll in DUO authentication. Be sure you already have an XSEDE account before attempting any of the following steps.

  • First, sign in to XSEDE's user portal, located here.

  • Find the tab toward the top labeled "Profile" and click on it.

  • On the right-hand side of the page, there will be the DUO logo and a link to enroll in DUO. Follow the steps on screen and set up a DUO token to be used with XSEDE.

  • Once you finish enrolling your DUO token, you will have secure access to XSEDE's SSO login node.

SSH into Login Hub


You must enroll in DUO authentication to use the SSO login hub!

Once you have access to an SSH client, you can login to XSEDE.

  • The first step is to type:
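The command itself was lost in extraction. The XSEDE SSO hub hostname was login.xsede.org (stated here as an assumption, along with the placeholder username), so the login command would look like:

```
ssh your_xsede_username@login.xsede.org
```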

Your SSH client may give you a warning about connecting to this server; if so, type yes and press Enter.

  • You will then be prompted for your XSEDE password; this is the same password you used when you created your XSEDE account. Note: you will not see anything as you type your password; this is a security feature.

  • XSEDE will then present you with some options for DUO authentication. Pick the desired action.

  • Once you are logged in, XSEDE will display the MOTD (message of the day) with some system information and tips for logging into its resources. If you want to see this message again, type:
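The command was lost in extraction; on Linux systems the MOTD is conventionally stored in /etc/motd, so a likely command (an assumption) is:

```
cat /etc/motd
```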

  • Once you log in to the SSO Hub, you will receive an X.509 proxy certificate, which is valid for 12 hours. After that your session expires and you must log out of the SSO hub and log back in, even if you are using one of XSEDE's resources (e.g., Stampede, Comet). You can see your remaining time with the following command:
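The command was lost in extraction. For GSI X.509 proxy certificates, the standard Globus tool is grid-proxy-info; assuming that is what the original showed:

```
grid-proxy-info            # prints subject, issuer, and remaining lifetime
grid-proxy-info -timeleft  # prints only the remaining seconds
```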


GSISSH into Clusters

To access supercomputing clusters through XSEDE, your account must have the 'clearance' to log in to a specific cluster. Go to the XSEDE website and log in; there you will see which clusters you have access to. If you have access to a cluster, you can log in to it through the XSEDE SSO without being required to enter extra credentials. The protocol used to gain access to these clusters is gsissh (GSI-enabled SSH). It operates much like ssh; as users, we won't notice a difference. Scroll below to find the specific cluster you're looking for; if it is not covered on this page, visit XSEDE's User Guides and click on the cluster you're trying to access for more information.

To get a quick overview of all systems the XSEDE SSO can connect to, type xsede-gsissh-hosts. Note that you must have access to a cluster before logging into it.

[username@ssohub ~]$ xsede-gsissh-hosts
bridges  comet  mason  osg  rmacc-summit  stampede  stampede2  supermic  wrangler-iu  wrangler-tacc  xstream



Stampede has reached the end of its four-year life cycle and the cluster is retiring. For continued use of TACC resources, Stampede-2 must be used. Logins will be disabled starting April 2, 2018, and Stampede-2 will no longer provide temporary read-only mounts of the Stampede1 home and scratch file systems.

To assist in the transition from Stampede to Stampede-2, please see the Transition Guide.


Stampede-2 is the new flagship supercomputer at the Texas Advanced Computing Center (TACC). After April 2, 2018, Stampede-2 will be the only Stampede system available.

Stampede-2 was deployed in two infrastructure implementation phases. Phase 1 included 4,200 KNL (Knights Landing) compute nodes, each with:

  • Intel Xeon Phi 7250 with 68 cores on a single socket

  • 4 hardware threads per core, totaling 272 threads on a single node

  • 96 GB of DDR4 RAM in addition to 16 GB of high-speed MCDRAM. See the Programming Notes for more info.

Phase 2 added 1,736 SKX nodes, each consisting of:

  • Intel Xeon Platinum 8160 "Skylake" processors with 48 cores per node and a clock rate of 2.1 GHz

  • 192 GB of RAM per node

  • 132 GB in /tmp on an SSD

Both Phase 1 and Phase 2 include a 100 Gb/s Intel Omni-Path (OPA) network. Large-memory nodes are expected to arrive in 2018. There are currently no plans for GPU systems in Stampede-2.

Logging into Stampede-2

To access Stampede-2 through XSEDE's SSO, connect with gsissh. Note that you must have an allocation with Stampede to log in to the supercomputer. If you could access Stampede, you should be able to access Stampede-2. To find more information on Stampede-2, see TACC's User Guide.
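The exact command was lost in extraction. Given the stampede2 alias reported by xsede-gsissh-hosts, the likely invocation from the SSO hub is:

```
gsissh stampede2
```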

Submitting a batch job on Stampede-2

Stampede's policy asks that jobs not be run on the front-end nodes, so jobs must be submitted through a batch system. Stampede-2 uses SLURM as its job scheduler. To submit a job to SLURM, you can do the following:

  • Just like using the CRC's resources, to submit a job on Stampede you must create a job submission script. As Stampede-2 is a large machine with many users, there are a few different configurations for job submission scripts depending on the type of job to be run.

  • A serial job (one meant to run on only a single core of a machine) could be created with the following job script:

#!/bin/bash
#----------------------------------------------------
#SBATCH -J myjob           # Job name
#SBATCH -o myjob.o%j       # Name of stdout output file
#SBATCH -e myjob.e%j       # Name of stderr error file
#SBATCH -p normal          # Queue (partition) name
#SBATCH -N 1               # Total # of nodes (must be 1 for serial)
#SBATCH -n 1               # Total # of mpi tasks (should be 1 for serial)
#SBATCH -t 01:30:00        # Run time (hh:mm:ss)
#SBATCH --mail-user=myname@myschool.edu
#SBATCH --mail-type=all    # Send email at begin and end of job
#SBATCH -A myproject       # Allocation name (req'd if you have more than 1)

# Other commands must follow all #SBATCH directives...
module list
pwd
date

# Launch serial code...
./mycode.exe               # Do not use ibrun or any other MPI launcher
#----------------------------------------------------
  • An example of a KNL MPI job could be as follows:

#!/bin/bash
#----------------------------------------------------
#SBATCH -J myjob           # Job name
#SBATCH -o myjob.o%j       # Name of stdout output file
#SBATCH -e myjob.e%j       # Name of stderr error file
#SBATCH -p normal          # Queue (partition) name
#SBATCH -N 4               # Total # of nodes
#SBATCH -n 32              # Total # of mpi tasks
#SBATCH -t 01:30:00        # Run time (hh:mm:ss)
#SBATCH --mail-user=myname@myschool.edu
#SBATCH --mail-type=all    # Send email at begin and end of job
#SBATCH -A myproject       # Allocation name (req'd if you have more than 1)

# Other commands must follow all #SBATCH directives...
module list
pwd
date

# Launch MPI code...
ibrun ./mycode.exe         # Use ibrun instead of mpirun or mpiexec
#----------------------------------------------------
  • To see more examples of job submission scripts and other tips and tricks, see the sbatch guide on TACC’s User Guide.

Job Submission and Monitoring

Once your submission script is created, you can submit it to SLURM. You do this by typing the following command:

sbatch my_job_script_name

There are two different options for checking on the status of your jobs:

  • The first option is SLURM's squeue command: squeue -u username

  • The second option is TACC's showq utility: showq

To see more options on job monitoring, view the Job Monitoring section of the Stampede-2 User Guide.
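Because job-status listings are plain text, they can be post-processed with standard tools. The sketch below is an illustration, not part of the original guide; it assumes the default squeue column order (JOBID PARTITION NAME USER ST TIME NODES) and uses sample data in place of live scheduler output:

```shell
# Sample lines in the default squeue column order:
# JOBID PARTITION NAME USER ST TIME NODES
sample='101 normal sim1 alice R 0:12 4
102 normal sim2 alice PD 0:00 8
103 normal sim3 alice R 1:02 2'

# Column 5 is the job state; tally running (R) vs. pending (PD) jobs.
printf '%s\n' "$sample" | awk '
  $5 == "R"  { running++ }
  $5 == "PD" { pending++ }
  END { printf "running=%d pending=%d\n", running, pending }'
# → running=2 pending=1
```

The same pipeline works on real output by replacing the sample variable with `squeue -u username`.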

For more information on Stampede-2, visit TACC’s Stampede-2 page.


Comet is a dedicated XSEDE cluster designed by Dell, with 1,984 total compute nodes and a peak performance of roughly 2.0 petaflops. The compute nodes contain Intel Xeon E5-2680v3 processors, 128 GB of RAM, and 320 GB of local scratch storage. There are GPU nodes with four NVIDIA GPUs each, as well as large-memory nodes containing 1.5 TB of RAM and four Intel Haswell processors each. The cluster uses CentOS as the OS and SLURM (just like Stampede) as the batch environment. Comet provides Intel, PGI, and GNU compilers.

Logging into Comet through SSO

  • Comet can be accessed through XSEDE’s SSO. Once logged into the SSO, you can access the Comet cluster through the following command:

gsissh comet.sdsc.xsede.org
  • If you have clearance to be on Comet, you will now be on a Comet front-end node.

  • For more information on the Comet cluster, visit XSEDE’s Comet Page.

Submitting Jobs to Comet

The Comet cluster, like the CRC, has many compute nodes across which jobs can be run. To manage this, Comet, just like Stampede, uses SLURM as its resource manager. This means that to submit a job to the Comet cluster properly, you must create and submit a job submission script so that your job runs correctly across the compute nodes.

Sample SLURM jobscripts

For a basic MPI job, a submission script may look like the following:

#!/bin/bash
#SBATCH --job-name="hellompi"
#SBATCH --output="hellompi.%j.%N.out"
#SBATCH --partition=compute
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=24
#SBATCH --export=ALL
#SBATCH -t 01:30:00

# This job runs with 2 nodes, 24 cores per node for a total of 48 cores.
# ibrun in verbose mode will give binding detail
ibrun -v ../hello_mpi

For an OpenMP job, a basic submission script would look like:

#!/bin/bash
#SBATCH --job-name="hello_openmp"
#SBATCH --output="hello_openmp.%j.%N.out"
#SBATCH --partition=compute
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --export=ALL
#SBATCH -t 01:30:00

# Set the number of OpenMP threads
export OMP_NUM_THREADS=24

# Run the job
./hello_openmp

Within these submission scripts, you would need to change the job name to match the executable you want to run, and the output files to the names you need.

Operating SLURM

SLURM is a resource manager and, like the CRC's, it has queues for job submissions. SLURM on Comet has 5 queues:

Name           Max Wall Time   Max Nodes
compute        48 hours        72
gpu            48 hours        4
gpu-shared     48 hours        1
shared         48 hours        1
large-shared   48 hours        1

For more examples for the GPU nodes, consult the example scripts provided on Comet while logged on.

Job Management

To monitor your jobs in SLURM, you can view them with the squeue command:
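The example invocation was lost in extraction; using SLURM's standard -u flag (an assumption about what the original showed), you can list only your own jobs:

```
squeue -u $USER
```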

squeue can take the following options:

-i <interval>    Repeatedly report at intervals (in seconds)
-j <job_list>    Displays information for specified job(s)
-p <part_list>   Displays information for specified partitions (queues)
-t <state_list>  Shows jobs in the specified state(s)

To cancel a submitted job, you can use the scancel command such as:

[username@comet-ln ~]$ scancel jobid

Comet SLURM Help

For more information regarding SLURM and Comet, visit XSEDE’s Comet page.

© Copyright 2021, University of Notre Dame Center for Research Computing

Built with Sphinx using a theme provided by Read the Docs.
Source: https://docs.crc.nd.edu/general_pages/x/xsede.html

For Media

XSEDE regularly archives its news announcements and events after they appear on the homepage, as well as its monthly newsletters. All of those archives are available via navigation on the right, including content that was migrated from the previous TeraGrid project website.


For the most current versions of publicly available documents related to the XSEDE project, please see the Publications page.


For questions about the archives or publications, contact us at [email protected]


The XSEDE wordmark can be downloaded and used online and in print by partner sites and other collaborators. The logo should not be altered in shape, proportion, or color.


If you have questions or would like to request the logo in higher resolution or in a single color, contact us at [email protected]


The "Powered by XSEDE" graphic is available in two versions -- blue and black. The graphic is 200 x 100 pixels and must appear at that size. Do not alter the logo in any way or resize it via HTML/CSS.


If you have questions about the use of the XSEDE wordmark or graphics, please contact [email protected]

[Download all web-ready logos]

Source: https://www.xsede.org/news/for-media


Welcome to XSEDE12

XSEDE12, the first conference of the Extreme Science and Engineering Discovery Environment, was held July 16-19, 2012, at the InterContinental hotel in downtown Chicago. The conference set a new standard for community participation and technical excellence.


Twelve awards were presented on the final day of the conference. Following are the award categories, recipient names, and project titles.

Congratulations to all award recipients on their outstanding work and contributions to XSEDE!

  • Best Paper and Best Science Paper
    Margarete Jadamec, Magali Billen, Oliver Kreylos
    "Three-dimensional Simulations of Geometrically Complex Subduction with Large Viscosity Variations"
  • Best Technology Paper
    Richard L. Moore, Leonard Carson, Amin Ghadersohi, Adam Jundt, Kenneth Yoshimoto, William Young
    "Analyzing Throughput and Utilization on Trestles"
  • Best Software and Software Environments Paper
    Katherine Lawrence, Nancy Wilkins-Diehr
    "Roadmaps, Not Blueprints: Paving the Way to Science Gateway Success"
  • Best Education, Outreach and Training Paper
    D. R. Mattson, Edee Wiziecki, R.J. Mashi
    "Enhancing Chemistry Teaching and Learning through Cyberinfrastructure"
  • Best Student Paper
    Justin McKennon, Gary Forrester, Gaurav Khanna 
    SCIENCE TRACK: "High Accuracy Gravitational Waveforms from Black Hole Binary Inspirals Using OpenCL"
  • Best Visualization
    Greg Abram, Carsten Burstedde, Omar Ghattas, James Martin, Georg Stadler, Lucas Wilcox
    "Visualization of Global Seismic Wave Propagation Simulation"
  • Best Poster
    Bhanu Rekepalli, Paul Giblock, Christopher Reardon, Mark Fahey, Subhra Sarkar
    "Petascale Informatics Applications Development on XSEDE Supercomputers"
  • Best Graduate Poster
    Andrew Kail, Kwai Wong, Elton Freeman, Jerry Baker

    "A Scalable Software Framework for Thermal Radiation Simulation" - University of Tennessee

  • Best Undergraduate Poster
    Joseph Peterson, Charles Wight

    "Reaction Modeling of Mesoscale Granular Beds of Explosives Subjected to Impact" - University of Utah

  • Best High School Poster
    Mike Wu, Rekha Narasimhan

    "Position and Vector Detection of Blind Spot Motion with the Horn-Schunck Optical Flow" - Torrey Pines High School

  • First Place – Student Programming Contest
    Manuel Zubieta, Justin Peyton, David Manosalvas, Nancy Carlos, Melissa Estrada, Grace Silva
    XSEDE Scholars Team 1, coached by Alice Fisher
  • Second Place – Student Programming Contest
    Brian Leu, Albert Liu, Parth Sheth, Zeyin Zhang
    University of Michigan team, coached by Benson Muite

Proceedings from the XSEDE12 Conference can be found at http://dl.acm.org/citation.cfm?id=2335755


XSEDE is supported by the National Science Foundation.

Source: https://www.xsede.org/web/xsede12/welcome
XSEDE Advisory Board (XAB) Orientation


Get access to the Extreme Science and Engineering Discovery Environment (XSEDE) high-performance computing services, resources, and accounts.

XSEDE is an advanced, powerful, and robust collection of integrated digital resources and services, giving researchers access to large supercomputers and related services such as storage and science gateways. TTS RT (Research Technology) serves as a local source of knowledge and support through its membership in the XSEDE Campus Champions program. It acts as a conduit for account access to XSEDE services, applications, and digital resources, which include some of the largest supercomputers in the world, with enormous numbers of CPU cores, GPU devices, storage, high-memory systems, and shared-memory nodes.

Benefits to TTS's membership in the Campus Champions program include:

  • Local source for procuring a start-up XSEDE account
  • Source for advanced information regarding XSEDE resources
  • Direct access to XSEDE staff

If you’re interested in XSEDE:

  1. Contact TTS Research Technology (RT) to apply for an allocation. As a National Science Foundation-funded service, access to XSEDE requires an application and approval process. RT will guide you through this process.
  2. Next, work with RT staff to install applications and make sure that necessary access clients are installed.
  3. Once your account is approved, RT will work with you to establish access, copy files and submit jobs.

For more information about the RT Campus Champions program, see RT Campus Champions.

Source: https://access.tufts.edu/xsede


XSEDE Quick Links

  • Researchers
    The National Science Foundation's eXtreme Digital (XD) program is making new infrastructure and next-generation digital services available to researchers and educators to handle huge volumes of digital information.

  • Service Providers
    Service Providers - entities that make a resource visible and coordinated with the national cyberinfrastructure for benefit to the research community - are central to the function of XSEDE

  • College Students
    The goals of the student engagement program are to prepare and sustain a larger, more diverse pool of undergraduate and graduate students to be future researchers and educators. Students will be recruited nationally.

  • Community Outreach
    Increasing diversity is vital to America's future and is a foundation for two of XSEDE's strategic goals: Preparing the current and next generation of scholars, researchers, practitioners, and engineers in the use of advanced digital technologies

    Source: https://www.xsede.org/

    NICS/NIMBioS XSEDE HPC Monthly Workshops


    The National Institute for Computational Sciences (NICS) at UT and NIMBioS host an XSEDE workshop on different topics each month related to high performance computing.

    Location: Hallam Auditorium (Room 206) at NIMBioS

    The in-person workshops are presented using the Wide Area Classroom (WAC) training platform telecast to several satellite sites in the U.S., including NIMBioS.

    For the next workshop and online registration, visit https://www.xsede.org/web/xup/course-calendar.


    OpenMP. This one-day workshop is intended to give C and Fortran programmers a hands-on introduction to OpenMP programming. Attendees will leave with a working knowledge of how to write scalable codes using OpenMP.

    Big Data. This session focuses on topics such as Hadoop and Spark.

    MPI. This session is intended to give C and Fortran programmers a hands-on introduction to MPI programming. Attendees leave with a working knowledge of how to write scalable codes using MPI, the standard programming tool of scalable parallel computing.

    GPU Programming Using OpenACC. OpenACC is the accepted standard using compiler directives to allow quick development of GPU capable codes using standard languages and compilers. It has been used with great success to accelerate real applications within very short development periods. This workshop assumes knowledge of either C or Fortran programming and has a hands-on component using the Bridges computing platform at the Pittsburgh Supercomputing Center.

    Summer Boot Camp. This multi-day event includes MPI, OpenMP, OpenACC and accelerators. It concludes with a special hybrid exercise contest to challenge students to apply their skills.

    The National Science Foundation's Extreme Science and Engineering Discovery Environment (XSEDE) is a virtual organization of integrated advanced digital resources available to U.S. researchers. The workshops are co-organized by XSEDE and the Pittsburgh Supercomputing Center (PSC).

    Source: http://www.nimbios.org/workshops/ws_xsede


    Center for Quantitative Life Sciences

    XSEDE is a collection of computing resources that scientists can use to interactively access and share computing resources, data, and expertise. It consists of supercomputers, high-throughput computing, storage, cloud computing, software, and support for scientific computing.

    XSEDE is NSF-funded and is available to all Oregon State researchers. Allocations are granted through a proposal system. A startup allocation can be requested at any time. Based on the performance of the code and the appropriateness of the computations, a proposal can be submitted for a full research allocation (there are 4 proposal periods per year). XSEDE also offers specialized computing gateways, including bioscience gateways, that can be used at any time without going through a proposal process.

    The CGRB at Oregon State works with users to enable different pathways to access XSEDE resources. We have many tools and methods developed to take full advantage of this valuable resource. To access XSEDE resources, go to https://www.xsede.org. If you need assistance gaining access to XSEDE resources, contact [email protected]

    Source: https://cqls.oregonstate.edu/bioinformatics/xsede
