John Baugh
Bio
John Baugh is a Professor of Civil Engineering and Operations Research at North Carolina State University. He received the Ph.D. from Carnegie Mellon University in 1989.
In addition to carrying out research and instructional programs, Dr. Baugh works broadly within the university to improve the educational opportunities and outcomes of its students. Included among those efforts is leadership in creating a new graduate program in the areas of computing and systems engineering, and previously serving as Director of the NC Japan Center.
Before pursuing the Ph.D., he worked as a Research Engineer in Applied Mechanics and Structures at Battelle’s Pacific Northwest National Laboratory. He is a member of ACM, INFORMS, Phi Kappa Phi, Tau Beta Pi, and Chi Epsilon.
Education
Ph.D. Civil Engineering Carnegie Mellon University 1989
M.S. Civil Engineering Carnegie Mellon University 1984
B.C.E. Civil Engineering Auburn University 1983
Area(s) of Expertise
Dr. Baugh's research interests include scientific computing and cyber-physical systems, formal methods for safe and trustworthy software, verification and validation, mathematical optimization and control, and applications across civil engineering including coastal, ocean, transportation, and structural systems.
Current and past funding sources include the US Department of Energy, National Science Foundation, Department of Homeland Security, Federal Transit Administration, US Environmental Protection Agency, and North Carolina Supercomputing Center.
Publications
- The ‘Causality’ Quagmire for Formalised Bond Graphs, Graph Transformation (2024)
- An HPC Practitioner’s Workbench for Formal Refinement Checking, Languages and Compilers for Parallel Computing (2023)
- Automatic modelling and verification of AUTOSAR architectures, Journal of Systems and Software (2023)
- Formalisation, Abstraction and Refinement of Bond Graphs, Graph Transformation (2023)
- Verifying ParamGen: A case study in scientific software abstraction and modeling, Proceedings of the 2023 Improving Scientific Software Conference (2023)
- Identifying cyber-physical vulnerabilities of water distribution systems using finite state processes, 2nd International Joint Conference on Water Distribution System Analysis & Computing and Control in the Water Industry, WDSA CCWI 2022 (2022)
- Industrial Symbiosis Waste Exchange Identification and Optimization, Proceedings of the Annual Hawaii International Conference on System Sciences (2021)
- Significance of multi-hazard risk in design of buildings under earthquake and wind loads, Engineering Structures (2021)
- Sterling: A Web-Based Visualizer for Relational Modeling Languages, Rigorous State-Based Methods. ABZ 2021 (2021)
- A simple Hybrid Event-B model of an active control system for earthquake protection, From Astrophysics to Unconventional Computation (2019)
Grants
It takes a delicate balancing act, plus a significant amount of new research, to develop advanced machine-learning-based systems that are correct at the user level (e.g., fair treatment of the images they classify) and run reliably and efficiently on modern hardware (consume less energy and do not contain software defects). The dependency chain is a balancing act between what exists and what will be researched and developed in this project. First, higher-level correctness often calls for advanced network structures such as transformers; there is a significant amount of ongoing research one can draw upon. Second, such network structures cannot be implemented as-is, and must be suitably sparsified (or compressed to remove unimportant weights). This is because all fully trained "original" networks are, by nature, overparameterized (finding the minima of loss functions is ideally done in a higher-dimensional space with more weights). Keeping the redundant weights exacerbates FLOP as well as memory requirements, and therefore one must discard unimportant weights before deployment. There are many excellent compression systems, including our own, that one can use or extend. Third, pruning merely to achieve metrics such as top-1 classification accuracy is insufficiently incisive to guarantee semantic goals (e.g., fairness). New research is needed to prune in such a way that higher-level semantic goals are also met. Fourth, when sparsified in interesting ways, networks exhibit sparsity patterns that are completely data driven. One cannot find these sparse structures in existing libraries or in familiar forms such as compressed sparse row (CSR), coordinate list (COO), or ELLPACK (ELL) formats. Therefore, new research is needed in mapping these "data-driven" sparsity patterns efficiently onto state-of-the-art and fast-moving software/hardware combinations.
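As a point of reference for the formats named above, the following minimal sketch (not part of the project's software) shows how the fixed CSR layout stores a sparse matrix. Data-driven pruning can produce irregular patterns whose rows vary widely in density, which such fixed layouts handle inefficiently.

```python
def to_csr(dense):
    """Convert a dense 2-D list into CSR: (values, column indices, row pointers).

    row_ptr[i]..row_ptr[i+1] delimits the nonzeros of row i in `values`.
    """
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

# A small pruned weight matrix with an irregular, data-driven pattern
W = [[0.5, 0,   0,   0],
     [0,   0,   0.1, 0],
     [0,   0.3, 0,   0.2]]

vals, cols, ptrs = to_csr(W)
print(vals)  # [0.5, 0.1, 0.3, 0.2]
print(cols)  # [0, 2, 1, 3]
print(ptrs)  # [0, 1, 2, 4]
```

CSR is efficient when nonzeros per row are roughly uniform; heavily data-driven sparsity breaks that assumption, which is the gap the proposal identifies.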
Fifth, the sparsity patterns desired (or demanded) by top-down analysis very often do not match what can be efficiently realized in software/hardware, and finding efficient patterns requires significant expertise. We find that recommendations coming from "bottom-up" analysis are often accepted without question, as ML experts working at higher levels do not have the wherewithal to deviate. So finally, there must be mechanisms to formally describe these patterns, either with a view to mapping them onto the best available patterns or to provide the starting specifications for correctly and efficiently realizing these patterns (say, in custom CUDA code or hardware). By ensuring that the top-down-desired and the bottom-up-provided patterns "meet in the middle," the proposed research will enable the development of advanced machine-learning-based systems that are correct at the user level and run reliably and efficiently on modern hardware.
This project will develop a methodology for verification and validation of advanced computer models used in Risk-Informed Safety Margin Characterization (RISMC) of nuclear power plants, and will apply the methodology to selected problems in nuclear reactor safety. The focus is placed on computer codes that simulate external hazards with an impact on plant safety, including severe accident management and emergency response. The project will collect, characterize, archive, and use data from plant measurements and from integral-effect and separate-effect tests to support code validation. The methodology will use probabilistic risk assessment (PRA) to guide implementation of a risk-informed validation approach.
The dynamic evolution of landforms under stress can lead to catastrophic loss either of functionality or of mass itself. This project will examine the dynamics of landforms undergoing a transition from one state to another (e.g., barrier island collapse, wetland loss, dune erosion) in order to determine critical defining features of resilient natural and developed landforms. This descriptive dynamic will be translated into design parameters for restoration of protective or beneficial landforms (e.g., beaches, dunes, barrier islands, wetlands). In addition, this analysis will be used to provide improved metrics for communicating hazard and risk, as well as for incorporating hazard and risk into land use plans. This project lies at the interface between the Coastal Hazards Science and Planning for Resilience focus areas and has the potential to provide insights to the Hazards, Human Behavior and Economic Resilience focus area.
The purpose of this proposal is to establish a graduate research fellowship program to train students to be future leaders in the engineering of resilient civil infrastructure systems for coastal regions subject to natural hazards. This program will be conducted in coordination with the ongoing DHS Center of Excellence on Natural Disasters, Coastal Infrastructure and Emergency Management.
The project will develop computer software to implement mathematical models designed to represent civil engineering infrastructure problems. In addition, the software will implement optimization techniques, such as mathematical programming and heuristic search methods. The techniques will be implemented so that alternative solutions, which meet given constraints on modeled objectives, will be obtained and provided to a user of the software.