This is Jeff.


About Me

Jeff Hammond

Assistant Computational Scientist

Leadership Computing Facility

Argonne National Laboratory / 630-252-5381 (desk)

Please read this before emailing me unsolicited questions about community software projects.

Curriculum Vitae

Unabridged Curriculum Vitae

Online Profiles



Random Facts

  • Google Scholar says I have an h-index of 15 (this is not necessarily the exact value as measured using only official publications, but it is closely correlated with it).
  • My Erdős number is 4 (via Jim Demmel).
  • My Erdős-Bacon number is infinity because I have not acted in any movies. However, my Erdős-(Charles) Bacon number is 5, thanks to Chapter 10 of this book.
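For the curious, the h-index is simply the largest h such that h of one's papers each have at least h citations. A minimal sketch (the citation counts below are made up for illustration, not my actual record):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

# Hypothetical citation counts:
print(h_index([50, 30, 22, 15, 12, 9, 7, 6, 5, 4, 3, 3, 2, 1, 1, 0]))  # → 7
```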

My Other Argonne Pages


Specific software packages that these papers involve are denoted by the Software Name in front of the citation.

Reviews and High-level Presentations

Venkatram Vishwanath, Thomas Uram, Lisa Childers, Hal Finkel, Jeff Hammond, Kalyan Kumaran, Paul Messina and Michael E. Papka. DOE ASCR Workshop on Software Productivity for eXtreme-Scale Science (SWP4XS), Rockville, Maryland, January 13-14, 2014. Toward improved scientific software productivity on leadership facilities: An Argonne Leadership Computing Facility View

Jeff R. Hammond. ACM XRDS 19 (3), Spring 2013. Challenges and methods in large-scale computational chemistry applications (invited and proofread, but not refereed in the traditional sense)

Bill Allcock, Anna Maria Bailey, Ray Bair, Charles Bacon, Ramesh Balakrishnan, Adam Bertsch, Barna Bihari, Brian Carnes, Dong Chen, George Chiu, Richard Coffey, Susan Coghlan, Paul Coteus, Kim Cupps, Erik W. Draeger, Thomas W. Fox, Larry Fried, Mark Gary, Jim Glosli, Thomas Gooding, John Gunnels, John Gyllenhaal, Jeff Hammond, Ruud Haring, Philip Heidelberger, Mark Hereld, Todd Inglett, K.H. Kim, Kalyan Kumaran, Steve Langer, Amith Mamidala, Rose McCallen, Paul Messina, Sam Miller, Art Mirin, Vitali Morozov, Fady Najjar, Mike Nelson, Albert Nichols, Martin Ohmacht, Michael E. Papka, Fabrizio Petrini, Terri Quinn, David Richards, Nichols A. Romero, Kyung Dong Ryu, Andy Schram, Rob Shearer, Tom Spelce, Becky Springmeyer, Fred Streitz, Bronis de Supinski, Pavlos Vranas, Bob Walkup, Amy Wang, Timothy Williams, and Robert Wisniewski. Blue Gene/Q: Sequoia and Mira, a chapter in Contemporary High Performance Computing: From Petascale toward Exascale, edited by Jeffrey S. Vetter.

Jeff R. Hammond. IEEE-TCSC Blog, August 6th, 2012. Challenges for Interoperability of Runtime Systems in Scientific Applications (invited and proofread, but not refereed in the traditional sense)

Matrix and Tensor Computations

BLIS: T. M. Smith, R. van de Geijn, M. Smelyanskiy, J. R. Hammond, and F. G. Van Zee. Proceedings of the 28th IEEE International Parallel and Distributed Processing Symposium (IPDPS), Phoenix, Arizona, May 2014. Anatomy of High-Performance Many-Threaded Matrix Multiplication. Also known as FLAME Working Note #71: Opportunities for Parallelism in Matrix Multiplication, The University of Texas at Austin, Department of Computer Science, Technical Report TR-13-20, 2013. (Source Code)

P. Ghosh, J. R. Hammond, S. Ghosh, and B. Chapman. 4th International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS13), workshop at SC13, Denver, Colorado, USA, November 2013. Performance analysis of the NWChem TCE for different communication patterns

TCE-IE: D. Ozog, J. R. Hammond, J. Dinan, P. Balaji, S. Shende and A. Malony. International Conference on Parallel Processing (ICPP). École Normale Supérieure de Lyon, Lyon, France, October 1-4, 2013. Inspector-Executor Load Balancing Algorithms for Block-Sparse Tensor Contractions (Preprint). (Related poster from ICS.)

CTF: Edgar Solomonik, Devin Matthews, Jeff Hammond and James Demmel. Proc. 27th Intl. Parallel and Distributed Processing Symp (IPDPS). Boston, Massachusetts, May 2013. Cyclops Tensor Framework: reducing communication and eliminating load imbalance in massively parallel contractions (Preprint) (Source Code)

CTF: Edgar Solomonik, Jeff Hammond and James Demmel. Electrical Engineering and Computer Sciences, University of California at Berkeley, Technical Report No. UCB/EECS-2012-29, March 9, 2012. A preliminary analysis of Cyclops Tensor Framework.

Elemental: J. Poulson, B. Marker, J. R. Hammond, N. A. Romero, and R. van de Geijn. ACM Trans. Math. Software, 39 (2012). Elemental: A New Framework for Distributed Memory Dense Matrix Computations. (preprint) PDF (Source Code)

Automatically tuned libraries for native-dimension tensor transpose and contraction algorithms (derived from a chapter in my dissertation). This is quite old and does not reflect my current understanding of these issues; I really need to update this document.

MPI, Global Arrays, ARMCI and OpenSHMEM

OSHMPI: J. R. Hammond, S. Ghosh, and B. M. Chapman, accepted to the First OpenSHMEM Workshop: Experiences, Implementations and Tools. Implementing OpenSHMEM using MPI-3 one-sided communication (Preprint) (Source Code)

V. Morozov, J. Meng, V. Vishwanath, J. R. Hammond, K. Kumaran and M. Papka. 41st International Conference on Parallel Processing Workshops (ICPPW), September 2012, Pittsburgh, Pennsylvania. ALCF MPI Benchmarks: Understanding Machine-Specific Communication Behavior (IEEE link) (Slides)

OSPRI: J. R. Hammond, J. Dinan, P. Balaji, I. Kabadshow, S. Potluri, and V. Tipparaju, The 6th Conference on Partitioned Global Address Space Programming Models (PGAS). Santa Barbara, CA, October 2012. OSPRI: An Optimized One-Sided Communication Runtime for Leadership-Class Machines (Preprint).

ARMCI-MPI: J. Dinan, P. Balaji, J. R. Hammond, S. Krishnamoorthy, and V. Tipparaju, Proc. 26th Intl. Parallel and Distributed Processing Symp (IPDPS). Shanghai, China, May 2012. Supporting the Global Arrays PGAS Model Using MPI One-Sided Communication (preprint). (Source Code)

J. Dinan, S. Krishnamoorthy, P. Balaji, J. R. Hammond, M. Krishnan, V. Tipparaju and A. Vishnu, in Recent Advances in the Message Passing Interface (Lecture Notes in Computer Science, Volume 6960/2011, pp. 282-291), edited by Y. Cotronis, A. Danalis, D. S. Nikolopoulos and J. Dongarra. Noncollective Communicator Creation in MPI (preprint).

TAU-ARMCI: J. R. Hammond, S. Krishnamoorthy, S. Shende, N. A. Romero and A. D. Malony, Concurrency and Computation: Practice and Experience (DOI: 10.1002/cpe.1881). Performance Characterization of Global Address Space Applications: A Case Study with NWChem (Preprint) PDF

Performance Engineering

Please do not confuse my co-authorship of these papers with any expertise in computational fluid dynamics; my role pertained primarily, if not exclusively, to performance analysis and optimization for the Blue Gene architecture.

Amanda Peters Randles, Vivek Kale, Jeff Hammond, William D. Gropp and Efthimios Kaxiras. Proc. 27th Intl. Parallel and Distributed Processing Symp (IPDPS). Boston, Massachusetts, May 2013. Performance Analysis of the Lattice Boltzmann Model Beyond Navier-Stokes (Preprint)


Sean Hogan, Jeff R. Hammond and Andrew A. Chien. Fault-Tolerance at Extreme Scale (FTXS). Boston, MA. June, 2012. An Evaluation of Difference and Threshold Techniques for Efficient Checkpoints. (Preprint) (Slides)

Statistical sampling and molecular dynamics

LAMMPS: Rolf Isele-Holder, Wayne Mitchell, Jeff Hammond, Axel Kohlmeyer and Ahmed Ismail, J. Chem. Theory Comput. 9 (12), 5412-5420 (2013). Reconsidering Dispersion Potentials: Reduced Cutoffs in Mesh-Based Ewald Solvers Can Be Faster Than Truncation

LAMMPS-Ensembles: Luke Westby, Mladen Rasic, Adrian Lange and Jeff R. Hammond. See LAMMPS for more information.

NEUS: A. Dickson, M. Maienshein-Cline, A. Tovo-Dwyer, J. R. Hammond and A. R. Dinner, J. Chem. Theory Comput. 7, 2710 (2011). Flow-dependent unfolding and refolding of an RNA by nonequilibrium umbrella sampling. (Preprint)

Quantum chemistry on accelerators


Eugene has incorporated all of the GPU coupled-cluster codes into PSI4.

A. E. DePrince III, J. R. Hammond and S. K. Gray, Proceedings of SciDAC 2011, Denver, CO, July 10-14, 2011. Many-body quantum chemistry on graphics processing units.

A. E. DePrince III and J. R. Hammond, Symposium on Application Accelerators in High-Performance Computing (SAAHPC), Knoxville, TN, USA, 19-21 July 2011. Quantum chemical many-body theory on heterogeneous nodes. (Slides)

A. E. DePrince III and J. R. Hammond, J. Chem. Theory Comput. 7, 1287 (2011). Coupled Cluster Theory on Graphics Processing Units I. The Coupled Cluster Doubles Method.

A. E. DePrince III and J. R. Hammond, Symposium on Application Accelerators in High-Performance Computing (SAAHPC), Knoxville, TN, USA, 13-15 July 2011. Evaluating one-sided programming models for GPU cluster computations.

Intel MIC

NWChem-MIC: Jeff Hammond, Priyanka Ghosh, David Ozog, Cyrus Karshenas, and Karol Kowalski. Work in progress. (I gave a talk on the preliminary results at SIAM CSE13.)

Coupled-cluster response theory and NWChem

NWChem 101 - an incomplete version of what I hope will become a crash course in how to use NWChem like an expert. Obviously, this is not a refereed publication.

Coupled-cluster response theory: parallel algorithms and novel applications (my dissertation).

NWChem: K. Kowalski, J. R. Hammond, W. A. de Jong, P.-D. Fan, M. Valiev, D. Wang and N. Govind, in Computational Methods for Large Systems: Electronic Structure Approaches for Biotechnology and Nanotechnology, edited by J. R. Reimers (Wiley, Hoboken, March 2011). Coupled-Cluster Calculations for Large Molecular and Extended Systems

K. Kowalski, S. Krishnamoorthy, O. Villa, J. R. Hammond, and N. Govind, J. Chem. Phys. 132, 154103 (2010). Active-space completely-renormalized equation-of-motion coupled-cluster formalism: Excited-state studies of green fluorescent protein, free-base porphyrin, and oligoporphyrin dimer

J. R. Hammond, N. Govind, K. Kowalski, J. Autschbach and S. S. Xantheas, J. Chem. Phys. 131, 214103 (2009). Accurate dipole polarizabilities for water clusters N=2-12 at the coupled-cluster level of theory and benchmarking of various density functionals

J. R. Hammond and K. Kowalski, J. Chem. Phys. 130, 194108 (2009). Parallel computation of coupled-cluster hyperpolarizabilities

K. Kowalski, J. R. Hammond, W. A. de Jong and A. J. Sadlej, J. Chem. Phys. 129, 226101 (2008). Coupled cluster calculations for static and dynamic polarizabilities of C60

J. R. Hammond, W. A. de Jong and K. Kowalski, J. Chem. Phys. 128, 224102 (2008). Coupled cluster dynamic polarizabilities including triple excitations

K. Kowalski, J. R. Hammond and W. A. de Jong, J. Chem. Phys. 127, 164105 (2007). Linear response coupled cluster singles and doubles approach with modified spectral resolution of the similarity transformed Hamiltonian

J. R. Hammond, K. Kowalski and W. A. de Jong, J. Chem. Phys. 127, 144105 (2007). Dynamic polarizabilities of polyaromatic hydrocarbons using coupled-cluster linear response theory

J. R. Hammond, M. Valiev, W. A. de Jong and K. Kowalski, J. Phys. Chem. A 111, 5492 (2007). Calculations of properties using a hybrid coupled-cluster and molecular mechanics approach

Chemistry Applications

R. S. Assary, P. C. Redfern, J. R. Hammond, J. Greeley and L. A. Curtiss, Chem. Phys. Lett., 497 (1-3), 123 (2010). Predicted Thermochemistry for Chemical Conversion of 5-Hydroxymethyl Furfural

R. S. Assary, P. C. Redfern, J. R. Hammond, J. Greeley and L. A. Curtiss, J. Phys. Chem. B, 114, 9002 (2010). Computational Studies of the Thermochemistry for Conversion of Glucose to Levulinic Acid

R. K. Chaudhuri, J. R. Hammond, K. F. Freed, S. Chattopadhyay and U. S. Mahapatra, J. Chem. Phys. 129, 064101 (2008). Reappraisal of cis effect in 1,2-dihaloethenes: An improved virtual orbital multireference approach

M. Lingwood, J. R. Hammond, D. A. Hrovat, J. M. Mayer, and W. T. Borden, J. Chem. Theory Comp. 2, 740 (2006). MPW1K, rather than B3LYP, should be used as the functional for DFT calculations on reactions that proceed by proton-coupled electron transfer (PCET)

RDM Theory

J. R. Hammond and D. A. Mazziotti, Bulletin of the American Physical Society 52 (1) (March 2007). Variational reduced-density-matrix theory applied to the Hubbard model. (Slides) (These were the first reported results on the 2D Hubbard model, which has been the subject of ongoing interest (e.g., by [1], [2], and [3]).)

J. R. Hammond and D. A. Mazziotti, Phys. Rev. A 73, 062505 (2006). Variational reduced-density-matrix calculation of the one-dimensional Hubbard model.

J. R. Hammond and D. A. Mazziotti, Phys. Rev. A 73, 012509 (2006). Variational reduced-density-matrix calculations on small radicals: a new approach to open-shell ab initio quantum chemistry.

J. R. Hammond and D. A. Mazziotti, Phys. Rev. A 71, 062503 (2005). Variational two-electron reduced-density-matrix theory: Partial 3-positivity conditions for N-representability.


Computational Chemistry Beyond Petascale - a talk given to ASCAC at the November 2010 meeting.

I try to contribute useful tutorial code to hpcinchemistrytutorial.

NWChem tutorial for LCRC users

WOLFHPC 2012 Slides - Evolving the Tensor Contraction Engine for Next-Generation Multi-petaflop Supercomputers (5/31/2011).

ALCF Getting Started Workshop

This is related content. Please also see the rest of this Wiki for information on Blue Gene/P and Blue Gene/Q.



  • I am involved in both ASCR Leadership Computing Challenge (ALCC) and INCITE projects in computer science and chemistry.
  • OSHMPI - OpenSHMEM over MPI-3
  • OSPRI (follow link for details)
  • ARMCI-MPI - Jim Dinan, with limited help from me, developed a portable, high-performance implementation of ARMCI using MPI-2 RMA. I am working on the MPI-3 implementation.
  • NWChem - I developed the coupled-cluster response property capability, among other features, during graduate school. Static partitioning (load-balancing), threading, vectorization and accelerator integrations for NWChem are currently under investigation.


  • BG/Q ESP - Robert, Curt, Ed and I have a Blue Gene/Q Early Science Program project entitled Accurate Numerical Simulations of Chemical Phenomena Involved in Energy Production and Storage with MADNESS and MPQC.
  • Unistack - Unified runtime systems for parallel programming models.
  • MPQC - I helped port and optimize MPQC for Blue Gene/P, with kind support from Curt Janssen and Ed Valeev, who are the lead developers of this code.
  • A1 (follow link for details)
  • GPU-CC - Eugene DePrince developed a coupled-cluster code for GPUs with help from me. This code is now part of PSI4. This project belongs to Eugene now although I'm still making use of lessons learned from it.
  • TAU-ARMCI - I contributed to the development of TAU profiling capability for the ARMCI communication library. This was a joint project with Sriram Krishnamoorthy and Sameer Shende that is now complete.
  • CECC - Chemistry Exascale Codesign Center. Described here, here, and here.