
Users' Research Project Highlights

HPC@LSU and LONI specifically seek out computationally intensive, large-scale research projects with the potential to significantly advance key areas in science and engineering. We encourage large allocation proposals from universities, research institutions, and national labs. Our user community continues to expand, with current research applications in Chemistry, Biochemistry, Physics, Astronomy, Chem/Elec/Mech Engineering, Atmos/Earth/Ocean Science, Materials Science, Math and Computer Science. Accomplishments from various applications are highlighted in these project summaries.

  • Coastal Emergency Risks Assessment (CERA)--Twilley, Suhayda, Dill, Kaiser, Loffler, Owens, Estrade

    Real-time / operational forecasts of impending hurricane storm surge are very demanding of HPC resources, but in a different way than traditional academic needs. Large-scale and real-time operational needs can arise almost immediately, with directives from groups such as the Louisiana Governor’s Office of Homeland Security and Emergency Preparedness (GOHSEP) as well as the National Weather Service. The Coastal Emergency Risks Assessment (CERA) team at Louisiana State University has helped fulfill these types of real-time needs during the past several hurricane seasons, and this type of work would not be possible without high performance computational resources provided by LSU HPC.

    Fortunately, no hurricanes threatened the Louisiana coast during the 2009 hurricane season; however, the LSU HPC resources, specifically Tezpur, proved to be a crucial aspect of the ADCIRC Surge Guidance System Demonstration which occurred on October 21, 2009. The CERA team conducted a mock real-time operational forecast of 2008 Hurricane Ike, picking up National Hurricane Center Advisories 42, 44, 46, and 48. The team used 512 processors on Tezpur, and each advisory run completed in an impressive 89 to 102 minutes. The timeliness of this HPC resource ensures that the CERA team can meet the very tight and critical deadlines for updating GOHSEP regarding impending storm surge during a real-time tropical / hurricane event (typically every six hours). These ADCIRC run times are comparable to run times on other impressive high performance computers, including ‘Ranger’ at the University of Texas – Austin and ‘Jade’ and ‘Sapphire’ at the U.S. Army Corps of Engineers – Engineer Research and Development Center.

    Each CERA ADCIRC run concludes with a series of automated notification emails. One notification alerts internal reviewers that post-processing (image creation) is underway, one is sent when output is ready for internal review, and the final notification alerts the user community that output from the latest advisory is available for viewing.
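    As a rough illustration of the three-stage notification sequence (this is a minimal sketch, not the CERA team's actual code; the stage labels, recipients, and message wording are assumptions):

```python
# Illustrative sketch of the three automated notifications described above.
# Audience labels and message wording are assumptions, not CERA's code.
def run_notifications(advisory):
    """Yield (audience, message) pairs for one ADCIRC advisory run,
    in the order the notices are sent."""
    yield ("internal reviewers",
           f"Advisory {advisory}: post-processing (image creation) underway")
    yield ("internal reviewers",
           f"Advisory {advisory}: output ready for internal review")
    yield ("user community",
           f"Advisory {advisory}: output from the latest advisory available for viewing")

# Example: notifications for National Hurricane Center Advisory 44.
messages = list(run_notifications(44))
```

    In practice each message would be dispatched by the post-processing pipeline (e.g., via an SMTP call) as the corresponding stage completes.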

  • Turbulent combustion--Dr. Sumanta Acharya

    The proposed research focuses on parallel computations for turbulent reacting flows. Such flows are encountered in a variety of applications including gas turbine combustion, furnaces, boilers, and automobile engines. To improve design, higher-fidelity, low-cost computational procedures are needed. In the proposed research, we are further developing the LES methodology for reacting flows. The fundamental process being considered is the ability to simulate fuel-air mixing and combustion in complex environments and geometries. We are currently considering premixed combustion of natural gas/hydrogen and air, since such systems are being proposed by the Department of Energy for use in future land-based gas turbines for power generation. A key technology that needs development here is the design of premixed burners that produce low emissions yet are able to operate at lean conditions with no flashback. These issues are being studied with the simulations. A key goal is to enable computations with high accuracy and short turnaround times; both metrics are important from an industrial perspective. Our approach is to use a thickened flame model for resolving the chemistry and parallel computations for reducing turnaround times. Our group uses the Chem3D code, developed in house, which runs in a scalable parallel manner on Linux clusters. The research is of interest to the Department of Defense, the Department of Energy, and the National Science Foundation. It is currently funded through the Clean Power and Energy Research Consortium.

  • Multi-Scale Many-Body Formalism--Jarrell, Tomko, Maier, D'Azevedo,Scalettar, Moreno

    The goal of our DOE SciDAC funded collaboration is the development of a Multi-Scale Many-Body approach which circumvents exponential scaling common to simulations of strongly correlated systems by separating the correlations by length scale, and using an appropriate approximation for each. Over the last several years, our group, and others, have developed the algorithms required for the next generation of multi-scale codes which will be a significant step forward in treating the complete set of length and energy scales at a quantitative, material-specific level.

  • High Energy Physics--Dick Greenwood

    An important requirement of large experiments in High Energy Physics (HEP) such as the DØ Experiment at the Fermilab Tevatron is the generation of large amounts of simulated data in order to compare experimental data to theoretical predictions. We propose to apply the Open Science Grid (OSG) software that we successfully employed on the LONI 5 Tflop cluster at LSU during Mar-April, 2007 for several months of Monte Carlo production for DØ. In addition, we would like to commence the deployment of grid production of data analysis as part of the US-ATLAS Tier-3 production effort for the ATLAS Experiment underway at CERN in Geneva, Switzerland. This can be done in parallel with the DØ production, and employ the same OSG Compute Element (CE) as the DØ work.

  • Stirred Tank Mixing--Dr. Sumanta Acharya

    The proposed research focuses on parallel computations for stirred tank mixing. Stirred tanks are widely used by the chemical and petrochemical industry, and mixing efficiency is critical to product yield. Through detailed flow simulations, we are examining strategies for improved mixing in stirred tanks. This work is currently funded, in part, by DOW Chemical. The computations are based on the solution of the Navier-Stokes equations. Large Eddy Simulation is used to model turbulence, and the Immersed Boundary Method is used to capture complex moving interfaces. The code used is called Chem3D and has been developed to a large extent in house by our group. It is fully parallel, and has been extensively tested on the Mike and Eric platforms.

  • Modeling General Relativistic Astrophysics--Luis Lehner

    Studies of binary neutron star dynamics and magnetic field influences

  • Molecular Simulations for Long-Time Scale Events--Bin Chen

    This proposal requests CPU time (0.9 million SUs) on the terascale computers at LONI in order to carry out large-scale simulations of long time-scale events of chemical, biological, and environmental interest. These include the formation of nano-structured materials, peptide/protein folding, and self-assembled porphyrins on a gold surface. This work has been funded by the National Science Foundation through an Early Career Development Award, the American Chemical Society Petroleum Research Fund, and the Louisiana Board of Regents.

  • Storm Surge Hurricane Modeling--Robert R. Twilley

    This allocation will be used to simulate various finite element ADCIRC grids that will be used by LSU operationally during actual hurricane emergencies to model storm surge. This allocation is for actual simulations during a hurricane event. ORED will activate CERA based on advice from the SRCC and the LSU Hurricane Center. Robert Twilley will notify the LONI administration of a request to initiate a high-priority allocation of computational resources sufficient to simulate hurricane tracks and forecast storm surge on regular (3-hour) hurricane projections using research-to-operations model configurations.

  • In Search of New Biomolecular Motors--Dorel Moldovan, Brian Novak, Marcio de Queiroz

    A biomolecular machine is an assembly of biological, chemical, mechanical, electrical, and/or optical components that transduce chemical free energy into mechanical work in a controllable manner. This emerging technology will enable a new generation of integrated devices at the nano and microscale with important advantages over traditional engineering systems in terms of performance, size, power consumption, efficiency, and ease of fabrication. Biomolecular assemblies could be engineered to function at the nano/microscale like any familiar macroscale machine, e.g., actuators, switches, sensors, transmission elements, mechanical joints, transportation systems, electric power sources, robots, and even computers. Such devices are expected to have a significant impact on the future of humanity. Visionary scholars believe intelligent biomolecular machines could one day be utilized to cure diseases, clean the environment, and facilitate space travel. Biomolecular devices are expected to impact a wide range of industries, including medical, biotechnology, materials, energy, and aerospace. Some cellular processes are natural biomolecular machines, and therefore can serve as building blocks for the development of engineered biomolecular devices. Proteins such as myosin, kinesin, dynein, RNA polymerase, adenosine triphosphate synthase, and bacterial flagellum have been extensively studied due to their ability to produce linear, rotary, or oscillatory motion. As a result, they are commonly referred to as molecular motors. Two prerequisites must be satisfied for a protein to be successfully engineered into a biomolecular machine: (a) the protein must be biochemically stable and robust, and readily available in large quantities at a low cost; and (b) the protein’s biomechanical properties must be completely characterized. Nano/microdevices that utilize molecules as a driving force have seldom been realized beyond very rudimentary machines (e.g., a spinning “rotor”). 

    To fulfill the promises of this exciting field, multidisciplinary research efforts are imperative. These efforts should be aimed at identifying new molecular motors, as well as addressing some fundamental issues currently preventing one from realizing even simple biomolecular machines. The PIs of this proposal are assembling a multidisciplinary team of LSU researchers to study the design, fabrication, modeling, and control of a class of nanoscale biomolecular machines. We envision developing in the future several innovative machines such as a mixer, a conveyor belt, and a valve for microfluidic applications, and an electric power source for NEMS/MEMS. The team will be composed of faculty and research associates from Mechanical Engineering (Marcio de Queiroz, Dorel Moldovan, Brian Novak, and Sunggook Park), Biological Sciences (Grover Waldrop), and Chemistry (Robin McCarley). Our goal is to introduce a new candidate for a molecular motor: the enzyme biotin carboxylase. Previous experimental work by Waldrop fully characterized the basic biochemical properties of the enzyme. These biochemical properties strongly suggest that biotin carboxylase has the potential to be a valuable addition to the growing toolbox of building blocks for biomolecular machines. That is, the enzyme has so far satisfied prerequisite (a) listed above with “flying colors.” More importantly, it possesses several advantageous biochemical characteristics when compared to other molecular motors. To be at the forefront of this emerging field, we feel that, in addition to preliminary experimental results, it is indispensable to have theoretical and computational results to support our claims. Therefore, the next logical step in this long-term research is to investigate and characterize the biomechanical properties of biotin carboxylase (i.e., prerequisite (b) above). This will allow us to predict the enzyme’s dynamic behavior in typical biomolecular machine applications.

    We propose to conduct this study using molecular dynamics (MD) and/or mesoscopic-level numerical simulation methods. The computational resources being requested will strengthen the foundation of our multidisciplinary research effort in the long term. We are confident that the preliminary results acquired under this grant will improve our chances of obtaining million-dollar grants from NSF (e.g., the NIRT program), NIH (R01 grants), and DoD. Since our collaborative effort is directly in line with the mission of CBM2, its success will benefit the center’s long-term goal of becoming an NSF ERC or STC. It could also advance the CoE’s role in the multidisciplinary hiring initiatives in Materials Science and Computational Science.

  • Finite Element Modeling of the Optic Nerve Head--Sanjay Kodiyalam

    Though glaucoma is usually associated with elevated intraocular pressure (IOP), there is wide disagreement over the role of IOP in the development and progression of the disease. The LSU-Tulane ONH Biomechanics Laboratory is currently funded by the NIH to determine the biomechanical mechanisms underlying damage to the load-bearing connective tissues in the optic nerve head (ONH) in glaucoma through studies of IOP-related stress (force/cross sectional area) and strain (local deformation) within the connective tissues of the ONH in normal and glaucomatous monkey eyes. We believe that IOP-related stress and strain likely underlie the onset and progression of glaucomatous connective tissue and axonal damage regardless of the level of IOP at which the damage occurs or of the several mechanisms which may contribute. Initially, the project team will focus on sharing of data and results through co-developed file formats, validation and verification of test data sets, and small-scale visualization of results. We will then shift to computational implementation of the complex boundary conditions present in full-scale models of the ONH microstructure. We will then construct static, full-scale finite element models of an ONH, first with raw voxels then with smoothed surface geometries. After integrating nonlinearity and viscoelasticity into the material property specifications, we will simulate time-dependent IOP loadings typical under clinical and physiologic conditions. Finally, large-scale, smoothed-surface, static and time-dependent finite element models of individual monkey ONHs will be run on LSU’s supercomputers and the results shared across the Access Grid network.

  • Lithium Ion Conduction and Asymmetric Catalysis--Collin Wick

    The proposed research will cover two main subject areas. The first will be the computational study of lithium ion conduction through polymer electrolyte membranes for their optimization for rechargeable lithium batteries. The second will investigate interactions of chiral molecules with modified platinum surfaces to optimize their ability to create one chiral product over the other for pharmaceutical synthesis. Significant computational resources will be required to carry out the proposed research, including serial Monte Carlo and Langevin dynamics simulations, parallel molecular dynamics simulations, and parallel Car-Parrinello molecular dynamics simulations, which are exceptionally computationally intensive.

  • Biomolecular Simulations and Distributed Computing--Shantenu Jha

    Over the past twelve to eighteen months, we have been involved in a wide range of computational science and computer science projects, requiring a range of simulations. In this proposal we request 0.9M SUs for three distinct projects: (i) understanding translocation of nucleic acids in α-Hemolysin protein pores; (ii) conformational switching of the S-box riboswitch; and (iii) developing and deploying distributed applications and the infrastructure required to facilitate their development. The projects for which computer time is being requested are all funded projects -- some at the national level, and some by local resources; those that are not funded by NSF/NIH form the basis for upcoming significant NSF/NIH proposals. Additionally, the request for 0.9M SUs in this proposal is based upon the projected science problems as outlined below, as well as a proven track record of successfully utilising approximately 350,000 SUs in the last 9-10 months (in increments of several concurrent 50,000-SU chunks).

  • Dirhodium DFT catalyst calculations--George Stanley

    Continued work into understanding bimetallic cooperativity in homogeneous catalysis is proposed. The catalyst system being studied is based on a tetraphosphine ligand that was designed to bridge and chelate two metal atoms. The dicationic dirhodium complex of this tetraphosphine ligand is a remarkably active and selective hydroformylation catalyst. This dinuclear catalyst complex continues to be one of the most dramatic examples of bimetallic cooperativity in homogeneous catalysis. Continued studies into the nature of the catalytic mechanism and exactly how the two metal atoms cooperate are proposed.

  • Signal extraction for SNO neutrino datasets--Thomas Kutter

    The Sudbury Neutrino Observatory (SNO), through its measurements of the solar 8B neutrino flux via charged current (CC) and neutral current (NC) interactions, has successfully demonstrated that solar neutrinos undergo flavor transformation. Current investigations include the CC spectral distortion (MSW effect) and a more precise measurement of the NC flux, where the ratio of CC to NC is related to the neutrino mixing angle. In order to observe the spectral distortion of the CC signal predicted by MSW theory, signal extraction via a maximum likelihood fit with "floating systematics" is required. "Floating systematics" employs the "smearing integration method", a rigorous method of determining systematics. For a series of fits to the same data set, the smearing integration method applies shifts to all PDF observables; the magnitude of each systematic shift is randomly selected from a known distribution, and the width of the fitted values gives the systematic uncertainties. This brute-force method avoids the problem of rebuilding the PDFs "on the fly" as nuisance parameters vary during fits.
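    The smearing idea above can be illustrated with a minimal sketch (a toy Gaussian model, not the SNO analysis code; the observable, the shift distribution, and all numbers are illustrative assumptions): repeat the fit many times, each time scaling the observables by a randomly drawn shift, then take the spread of the fitted values as the systematic uncertainty.

```python
# Toy sketch of the "smearing integration method": repeated fits to the
# same data with randomly shifted observables; the width of the fitted
# values estimates the systematic uncertainty. All details are illustrative.
import random
import statistics

random.seed(1)

# Toy "data": events drawn from a Gaussian (stand-in for a CC energy spectrum).
data = [random.gauss(10.0, 2.0) for _ in range(2000)]

def fit_mean(events):
    """Stand-in for the maximum likelihood fit: for a Gaussian with known
    width, the ML estimate of the mean is the sample mean."""
    return sum(events) / len(events)

def smearing_fits(events, scale_sigma=0.01, n_fits=200):
    """Repeat the fit, each time applying a multiplicative shift (the
    'smearing') drawn from a known distribution to every observable."""
    results = []
    for _ in range(n_fits):
        shift = random.gauss(1.0, scale_sigma)  # e.g., a 1% energy-scale systematic
        results.append(fit_mean([e * shift for e in events]))
    return results

fits = smearing_fits(data)
central = statistics.mean(fits)
systematic = statistics.stdev(fits)  # width of fitted values -> systematic uncertainty
```

    Here only one nuisance parameter (an overall scale) is smeared; the method described above shifts all PDF observables in the same spirit.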

  • CFD and optimization--Frank Muldoon

    Computational Fluid Dynamics (CFD) in conjunction with optimization for the control of unsteady fluid and heat transfer problems.

  • Numerical Simulations of Compact Object Binaries--Patrick Motl

    We will use numerical simulations to study the dynamical evolution of binaries composed of either (a) two white dwarf stars or (b) two neutron stars. In both cases, the binaries are important sources of gravitational radiation for space-based missions such as LISA (in the white dwarf case) and ground based observatories such as LIGO (in the double neutron star case). Aside from being sources of gravitational radiation, such compact object binaries are of importance for their connection to type Ia supernovae (in the white dwarf case) and short duration gamma ray bursts (in the neutron star case).

  • Statistical and Quantum Mechanics--Randall Hall

    The aspect of the work covered in this proposal for computer time is the simulation of atomic and molecular glasses. The important feature of our work, as compared to other studies, is that we are simulating for very long times and at temperatures well below the glass transition temperature. We are simulating o-terphenyl and o-terphenyl/benzene mixtures as examples of molecular glasses and glassy mixtures that will be used for comparison to polystyrene. We will also use Gaussian 03 or NWChem (as available) to perform ab initio Monte Carlo and conventional ab initio calculations of the structure and reactivity of small Cu-O clusters.

  • Molecular Dynamics Study of Nucleosome Stability and Receptor Binding--Thomas C Bishop

    We study nucleosome dynamics and histone-DNA interactions using NAMD and AMBER Molecular Dynamics simulations.

  • Simulations of Einstein's equations--Peter Diener

    This renewal proposal plans to support the LSU numerical relativity group at large in its computational needs, complementing our LRAC allocation. The members of our group include faculty, postdocs and graduate students from the Physics and Astronomy department, the Computer Science department, and the Center for Computation and Technology at LSU. The main topic of the proposal is numerical solutions of the Einstein equations. From the technical point of view, these constitute a set of non-linear hyperbolic equations, whose solutions, four-dimensional metric tensors, describe the curvature of spacetime. From the physical point of view, our simulations will study the physics of black holes and neutron stars. Most of the requested supercomputer time is for production, scientific runs, though we also include a fraction of time for infrastructure development and testing. Over the years, our group has been at the forefront of numerical relativity, with successful and productive NRAC, MRAC, and LRAC proposals at NCSA, PSC, SDSC, and elsewhere, leading to over 100 publications in astrophysics, numerical relativity, computational science, and numerical analysis. The codes to be used in this proposal have been thoroughly tested on many supercomputer architectures and shown to have good scaling properties.

  • Materials science simulation--Ping Du

    Calculation of the properties of selected materials and simulation of nanowire evolution to investigate stability problems.