Andrew W. Appel. Modern Compiler Implementation in ML. Cambridge University Press, 1998. The opposite of Scott: focuses on compiler construction, not language design issues. It uses the functional language ML, which is closely related to O'Caml, but just different enough to be annoying.
Steven S. Muchnick. Advanced Compiler Design and Implementation. Morgan Kaufmann, 1997. A very extensive book on many aspects of compiler design. Starts about halfway through Appel and goes much farther. Recommended for serious compiler hackers only.
The focus of 4115 is the design and implementation of a little language. You will divide into teams and design the goals, syntax, and semantics of your language, and implement a compiler for it.
PETSc, the Portable Extensible Toolkit for Scientific Computation (Balay et al., 2010a,b), is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations. It employs the MPI standard for all message-passing communication. Being written in C and based on MPI, PETSc is a highly portable software library: PETSc-based applications can run in almost all modern parallel environments, from distributed-memory architectures (Balay et al., 1997), with standard networks as well as specialized communication hardware, to multi-processor (and multi-core) shared-memory machines. The library gives its users a platform for developing fully parallel applications and the flexibility to experiment with many different programming models and with solvers for large linear and nonlinear systems, all without explicit calls to the MPI library. It is freely available and usable from C/C++, Fortran 77/90 and Python (Dalcin, 2010). An overview of some of the components of PETSc can be seen in Fig. 2.

An important feature of the package is that applications can be written at a high level and then worked down in level of abstraction (including explicit calls to MPI). Since PETSc employs the distributed-memory model, each process has its own address space, and data is communicated using MPI when required. For instance, in a linear (or nonlinear) system solution stage (a common case in FEM applications), each process owns a contiguous subset of rows of the system matrix (in the C implementation) and works primarily on this subset, sending information to (or receiving it from) other processes as needed.

The PETSc interface allows users to develop parallel applications quickly. PETSc provides sequential and distributed matrix and vector data structures, together with efficient parallel matrix/vector assembly operations, in an object-oriented style. Several iterative methods for linear and nonlinear solvers are designed in the same way.
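As an illustration of this programming model, the following minimal sketch in C (assuming a recent PETSc release, 3.5 or later) assembles only the locally owned rows of a 1D Laplacian matrix and then solves the resulting linear system with a KSP iterative solver. The matrix size n and the constant right-hand side are arbitrary choices for the example, and error checking is omitted for brevity.

    /* Minimal PETSc sketch: each process assembles only its owned rows of a
     * 1D Laplacian; the system is then solved in parallel with a KSP solver.
     * Compile with mpicc against the PETSc headers/libraries; run with mpiexec. */
    #include <petscksp.h>

    int main(int argc, char **argv)
    {
        Mat      A;            /* distributed system matrix            */
        Vec      x, b;         /* solution and right-hand side vectors */
        KSP      ksp;          /* Krylov solver context                */
        PetscInt n = 100, Istart, Iend, i;

        PetscInitialize(&argc, &argv, NULL, NULL);

        /* Distributed matrix: PETSc decides the row partition among processes. */
        MatCreate(PETSC_COMM_WORLD, &A);
        MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
        MatSetFromOptions(A);
        MatSetUp(A);

        /* Each process fills only the contiguous block of rows it owns. */
        MatGetOwnershipRange(A, &Istart, &Iend);
        for (i = Istart; i < Iend; i++) {
            if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
            if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
            MatSetValue(A, i, i, 2.0, INSERT_VALUES);
        }
        /* Assembly performs the required MPI communication behind the scenes. */
        MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
        MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

        /* Distributed vectors with the same row layout as the matrix. */
        VecCreate(PETSC_COMM_WORLD, &b);
        VecSetSizes(b, PETSC_DECIDE, n);
        VecSetFromOptions(b);
        VecDuplicate(b, &x);
        VecSet(b, 1.0);

        /* Krylov solver; method and preconditioner can be chosen at run time
         * (e.g. -ksp_type cg -pc_type jacobi) without changing the code. */
        KSPCreate(PETSC_COMM_WORLD, &ksp);
        KSPSetOperators(ksp, A, A);
        KSPSetFromOptions(ksp);
        KSPSolve(ksp, b, x);

        KSPDestroy(&ksp);
        VecDestroy(&b);
        VecDestroy(&x);
        MatDestroy(&A);
        PetscFinalize();
        return 0;
    }

The same source runs unchanged on one process or many; the run-time options database (e.g. -ksp_type gmres) selects the solver and preconditioner without recompilation, which reflects the high-level, MPI-free style of development described above.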
REFERENCES
1. Aho, A.V., J.E. Hopcroft and J.D. Ullman, Data Structures and Algorithms, Addison Wesley (1983).
2. Balay, S., W.D. Gropp, L. Curfman McInnes and B.F. Smith, "Efficient management of parallelism in object oriented numerical software libraries," 163-202 (1997).
3. Balay, S., K. Buschelman, V. Eijkhout, W.D. Gropp, D. Kaushik, M.G. Knepley, L. Curfman McInnes, B.F. Smith and H. Zhang, PETSc Users Manual, Technical Report ANL-95/11 (2010a).
4. Balay, S., K. Buschelman, W.D. Gropp, D. Kaushik, M.G. Knepley, L. Curfman McInnes, B.F. Smith and H. Zhang, PETSc Web page, URL (2010b).
5. Behara, S. and S. Mittal, "Parallel finite element computation of incompressible flows," Parallel Computing, 35, 195-212 (2009).
6. BLAS, BLAS - Basic Linear Algebra Subprograms, URL (2010).
7. Dalcin, L.D., PETSc for Python, URL (2010).
8. Donea, J. and A. Huerta, Finite Element Methods for Flow Problems, Wiley and Sons (2003).
9. Gropp, W., S. Huss-Lederman, A. Lumsdaine, E. Lusk, B. Nitzberg, W. Saphir and M. Snir, The MPI-2 Extensions, volume 2 of MPI - The Complete Reference, MIT Press, Cambridge, 2nd edition (1998).
10. Henty, D.S., "Performance of hybrid message-passing and shared-memory parallelism for discrete element modeling," Proceedings of Supercomputing '00 (2000).
11. Hwloc, Portable Hardware Locality (hwloc), URL http://www.open-mpi.org/projects/hwloc (2010).
12. Jost, G. and H. Jin, "Comparing the OpenMP, MPI, and hybrid programming paradigms on an SMP cluster," Fifth European Workshop on OpenMP (2003).
13. Lapack, LAPACK - Linear Algebra PACKage, URL (2010).
14. MPI, MPI Web page, URL http://www.mpi-forum.org (2010).
15. OpenMP, OpenMP specification, URL http://openmp.org/wp/openmp-specifications (2010).
16. Paz, R.R., N.M. Nigro and M.A. Storti, "On the efficiency and quality of numerical solutions in CFD problems using the interface strip preconditioner for domain decomposition methods," International Journal for Numerical Methods in Fluids, 51, 89-118 (2006).
17. Saad, Y., Iterative Methods for Sparse Linear Systems, PWS Publishing Co. (2000).
18. Smith, L. and M. Bull, "Development of mixed mode MPI/OpenMP applications," Scientific Programming, 9, 83-98 (2001).
19. Snir, M., S. Otto, S. Huss-Lederman, D. Walker and J. Dongarra, The MPI Core, volume 1 of MPI - The Complete Reference, MIT Press, Cambridge, 2nd edition (1998).
20. Sonzogni, V., A. Yommi, N.M. Nigro and M.A. Storti, "A parallel finite element program on a Beowulf cluster," Advances in Engineering Software, 33, 427-443 (2002).
21. Storti, M.A., N.M. Nigro, R.R. Paz and L.D. Dalcin, PETSc-FEM: A General Purpose, Parallel, Multi-Physics FEM Program, URL (2010).
22. Tezduyar, T. and Y. Osawa, "Finite element stabilization parameters computed from element matrices and vectors," Computer Methods in Applied Mechanics and Engineering, 190, 411-430 (2000).
23. Whaley, R.C., A. Petitet and J. Dongarra, "Practical experience in the numerical dangers of heterogeneous computing," Parallel Computing, 27, 3-35 (2001).