Changes: 3.16
General:
- Change MPIU_Allreduce() to always return an MPI error code, which should be checked with CHKERRMPI(ierr)
- Add support for A64FX and Cray compilers
- Add support for ESSL 5.2 and later; drop support for ESSL <= 5.1
- Add support for NEC-SX Tsubasa Vector Engine
- Add support for NVIDIA HPC SDK
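The MPIU_Allreduce() change affects error handling at every call site. A minimal sketch of the new pattern, assuming the PETSc 3.16 ierr-based error-checking style (the function and variable names here are illustrative, not from the release notes):

```c
#include <petscsys.h>

/* Illustrative only: sum a local value across PETSC_COMM_WORLD.
   MPIU_Allreduce() now returns an MPI error code, so the result is
   checked with CHKERRMPI() rather than CHKERRQ(). */
PetscErrorCode SumAcrossRanks(PetscReal local, PetscReal *global)
{
  PetscErrorCode ierr;

  ierr = MPIU_Allreduce(&local, global, 1, MPIU_REAL, MPIU_SUM, PETSC_COMM_WORLD);CHKERRMPI(ierr);
  return 0;
}
```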
Configure/Build:
- Configure requires Python 2.7 or Python 3.4+
- Remove --with-kokkos-cuda-arch. One can use --with-cuda-gencodearch to specify the CUDA arch for Kokkos; usually not needed since PETSc auto-detects it
- For --download-hdf5, disable --download-hdf5-fortran-bindings by default
- Add OpenCASCADE package to PETSc and allow --download-opencascade
- Add support for hypre in device mode for both NVIDIA and AMD GPUs
- Extend detection of C++ dialect to C++17. Configure now also takes the minimum and maximum required C++ dialects of packages into account when choosing the C++ dialect
Sys:
- Add PetscDevice class to manage discovered GPU devices
- Add PetscDeviceKind
- Add PetscDeviceCreate(), PetscDeviceConfigure(), and PetscDeviceDestroy()
- Add PetscDeviceContext class to manage asynchronous GPU compute support via a fork-join model
- Add PetscDeviceContextCreate(), PetscDeviceContextDestroy(), PetscDeviceContextSetDevice(), PetscDeviceContextGetDevice(), PetscDeviceContextSetStreamType(), PetscDeviceContextGetStreamType(), PetscDeviceContextSetUp(), PetscDeviceContextDuplicate(), PetscDeviceContextQueryIdle(), PetscDeviceContextWaitForContext(), PetscDeviceContextFork(), PetscDeviceContextJoin(), PetscDeviceContextSynchronize(), PetscDeviceContextGetCurrentContext(), PetscDeviceContextSetCurrentContext(), and PetscDeviceContextSetFromOptions()
- Deprecate petsccublas.h and petschipblas.h in favor of petscdevice.h and petscdevicetypes.h
- Add GPU event timers to capture kernel execution time accurately
- Remove WaitForCUDA() and WaitForHIP() before PetscLogGpuTimeEnd()
- Add MPIU_REAL_INT and MPIU_SCALAR_INT datatypes to be used for reduction operations
- Add MPIU_MAXLOC and MPIU_MINLOC operations
- Add CHKERRCXX() to catch C++ exceptions and return a PETSc error code
- Remove PetscStack routines from public headers; this class should now be considered private
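The PetscDeviceContext fork-join model can be sketched as follows. This is an assumed usage pattern inferred from the function names listed in these notes; in particular, the join-mode enum value PETSC_DEVICE_CONTEXT_JOIN_DESTROY is an assumption, not something stated here:

```c
#include <petscdevice.h>

/* Sketch, not a definitive example: fork a context into two
   sub-contexts (streams), do independent work on each, then join. */
PetscErrorCode ForkJoinSketch(void)
{
  PetscErrorCode     ierr;
  PetscDeviceContext dctx, *sub;

  ierr = PetscDeviceContextCreate(&dctx);CHKERRQ(ierr);
  ierr = PetscDeviceContextSetUp(dctx);CHKERRQ(ierr);
  ierr = PetscDeviceContextFork(dctx, 2, &sub);CHKERRQ(ierr);
  /* ... launch independent kernels/work on sub[0] and sub[1] ... */
  ierr = PetscDeviceContextJoin(dctx, 2, PETSC_DEVICE_CONTEXT_JOIN_DESTROY, &sub);CHKERRQ(ierr);
  ierr = PetscDeviceContextDestroy(&dctx);CHKERRQ(ierr);
  return 0;
}
```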
PetscViewer:
- PetscViewerHDF5PushGroup(): if the input path begins with /, it is taken as absolute; otherwise it is relative to the current group
- PetscViewerHDF5HasAttribute(), PetscViewerHDF5ReadAttribute(), PetscViewerHDF5WriteAttribute(), PetscViewerHDF5HasDataset() and PetscViewerHDF5HasGroup() support absolute paths (starting with /) and paths relative to the current pushed group
- Add an input argument to PetscViewerHDF5ReadAttribute() for a default value that is used if the attribute is not found in the HDF5 file
- Add PetscViewerHDF5PushTimestepping(), PetscViewerHDF5PopTimestepping() and PetscViewerHDF5IsTimestepping() to control timestepping mode
- One can call PetscViewerHDF5IncrementTimestep(), PetscViewerHDF5SetTimestep() or PetscViewerHDF5GetTimestep() only if timestepping mode is active
- Error if a timestepped dataset is read/written outside of timestepping mode, or vice versa
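The group-path and timestepping rules can be sketched together in one write sequence; a minimal sketch, assuming an illustrative file name and group path (not from the release notes):

```c
#include <petscviewerhdf5.h>

/* Sketch: write a vector twice under an absolute group path, with
   timestepping mode active so each write lands at a distinct timestep. */
PetscErrorCode WriteTimesteps(Vec v)
{
  PetscErrorCode ierr;
  PetscViewer    viewer;

  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "out.h5", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
  ierr = PetscViewerHDF5PushGroup(viewer, "/fields");CHKERRQ(ierr); /* leading '/': absolute path */
  ierr = PetscViewerHDF5PushTimestepping(viewer);CHKERRQ(ierr);     /* required before timestep calls */
  ierr = PetscViewerHDF5SetTimestep(viewer, 0);CHKERRQ(ierr);
  ierr = VecView(v, viewer);CHKERRQ(ierr);
  ierr = PetscViewerHDF5IncrementTimestep(viewer);CHKERRQ(ierr);
  ierr = VecView(v, viewer);CHKERRQ(ierr);
  ierr = PetscViewerHDF5PopTimestepping(viewer);CHKERRQ(ierr);
  ierr = PetscViewerHDF5PopGroup(viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  return 0;
}
```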
PetscDraw:
AO:
IS:
VecScatter / PetscSF:
PF:
Vec:
- Add VecMean() to calculate the arithmetic mean of the elements of a vector
- Add VecBoundToCPU() to query information set with VecBindToCPU()
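The two new Vec queries can be sketched in one routine; a minimal sketch, assuming the conventional PETSc output-argument signatures for both calls (the function name here is illustrative):

```c
#include <petscvec.h>

/* Sketch: report the arithmetic mean of a vector and whether it has
   been bound to the CPU via VecBindToCPU(). */
PetscErrorCode ReportVec(Vec v)
{
  PetscErrorCode ierr;
  PetscScalar    mean;
  PetscBool      bound;

  ierr = VecMean(v, &mean);CHKERRQ(ierr);
  ierr = VecBoundToCPU(v, &bound);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "mean %g, bound to CPU? %d\n",
                     (double)PetscRealPart(mean), (int)bound);CHKERRQ(ierr);
  return 0;
}
```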
PetscSection:
- Extend PetscSectionView() for section saving to HDF5
- Add PetscSectionLoad() for section loading from HDF5
PetscPartitioner:
Mat:
- MATMPIKAIJ now tracks the object state of the AIJ matrix describing the blockwise action of the KAIJ matrix, and automatically rebuilds internal data structures before executing operations with the KAIJ matrix if the state has changed
- Factorization types now provide their preferred ordering (which may be MATORDERINGEXTERNAL) to prevent PETSc PCFactor from, by default, picking an ordering when it is not ideal
- Deprecate MatFactorGetUseOrdering(); use MatFactorGetCanUseOrdering() instead
- Add --download-htool to use hierarchical matrices with the new type MATHTOOL
- Add MATCENTERING special matrix type that implements the action of the centering matrix
- Remove the -mat_mumps_icntl_7 1 option; use -pc_factor_mat_ordering_type <type> to have PETSc perform the ordering (sequential only)
- Add MATSOLVERSPQR, an interface to SuiteSparse QR factorization
- Add MatSeqAIJKron(), Kronecker product of two MatSeqAIJ
- Add MatNormalGetMat() to retrieve the underlying Mat of a MATNORMAL
- Add MatNormalHermitianGetMat() to retrieve the underlying Mat of a MATNORMALHERMITIAN
- Add VecCreateMPICUDA() and VecCreateMPIHIP() to create MPI device vectors
- Add accessor routines for device index data of MATSEQAIJCUSPARSE matrices: MatSeqAIJCUSPARSEGetIJ() and MatSeqAIJCUSPARSERestoreIJ()
- Add accessor routines for device data of MATSEQAIJCUSPARSE matrices: MatSeqAIJCUSPARSEGetArray(), MatSeqAIJCUSPARSERestoreArray(), MatSeqAIJCUSPARSEGetArrayRead(), MatSeqAIJCUSPARSERestoreArrayRead(), MatSeqAIJCUSPARSEGetArrayWrite(), MatSeqAIJCUSPARSERestoreArrayWrite()
- Add support for MATHYPRE matrices on NVIDIA and AMD GPUs
- MatPreallocatorPreallocate() performance significantly improved
- Add MatGetColumnReductions() developer routine to calculate reductions over columns of a matrix
- Add MatGetColumnSums(), MatGetColumnSumsRealPart(), MatGetColumnSumsImaginaryPart() to compute sums over matrix columns
- Add MatGetColumnMeans(), MatGetColumnMeansRealPart(), MatGetColumnMeansImaginaryPart() to compute arithmetic means over matrix columns
- Add MatBoundToCPU() to query information set with MatBindToCPU()
- Rename MATHARA to MATH2OPUS, supporting distributed memory operations with hierarchical matrices
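The new Kronecker-product routine can be sketched as follows; the MAT_INITIAL_MATRIX reuse argument is an assumption based on the usual PETSc MatReuse convention, not something these notes state:

```c
#include <petscmat.h>

/* Sketch: form C = A (x) B, the Kronecker product of two MatSeqAIJ
   matrices. The caller owns the resulting matrix C. */
PetscErrorCode KronSketch(Mat A, Mat B, Mat *C)
{
  PetscErrorCode ierr;

  ierr = MatSeqAIJKron(A, B, MAT_INITIAL_MATRIX, C);CHKERRQ(ierr);
  return 0;
}
```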
PC:
- Add PCSetPreSolve()
- Add PCQR, an interface to SuiteSparse QR factorization for MatSeqAIJ, MATNORMAL, and MATNORMALHERMITIAN
- Add support for BoomerAMG from PCHYPRE to run on NVIDIA and AMD GPUs
- PCShellGetContext() now takes void* as return argument
- Rename PCHARA to PCH2OPUS, supporting distributed memory operations with hierarchical matrices
KSP:
- KSPGetMonitorContext() now takes void* as return argument
- KSPGetConvergenceContext() now takes void* as return argument
SNES:
- Add SNESSetComputeMFFunction()
- Add support for -snes_mf_operator for use with SNESSetPicard()
- SNESShellGetContext() now takes void* as return argument
SNESLineSearch:
TS:
- Add -ts_type irk, fully implicit Runge-Kutta solvers
- Add a TSTrajectory interface to the CAMS library for optimal offline checkpointing for multistage time stepping schemes
- Add option -ts_trajectory_memory_type <revolve | cams | petsc> to switch checkpointing schedule software
- Add option -ts_trajectory_max_units_ram to specify the maximum number of allowed checkpointing units
TAO:
- TaoShellGetContext() now takes void* as return argument
DM/DA:
- Change management of auxiliary data in DM from object composition to DMGetAuxiliaryVec()/DMSetAuxiliaryVec(), DMCopyAuxiliaryVec()
- Remove DMGetNumBoundary() and DMGetBoundary() in favor of DS counterparts
- Remove DMCopyBoundary()
- Change interface for DMAddBoundary(), PetscDSAddBoundary(), PetscDSGetBoundary(), PetscDSUpdateBoundary()
- Add DMDAVecGetArrayDOFWrite() and DMDAVecRestoreArrayDOFWrite()
- DMShellGetContext() now takes void* as return argument
DMSwarm:
- Add DMSwarmGetCellSwarm() and DMSwarmRestoreCellSwarm()
DMPlex:
- Add a PETSCVIEWEREXODUSII viewer type for DMView()/DMLoad() and VecView()/VecLoad(). Note that not all DMPlex can be saved in exodusII format, since this file format requires that the numbering of cell sets be compact
- Add PetscViewerExodusIIOpen() convenience function
- Add PetscViewerExodusIISetOrder() to generate "2nd order" elements (i.e. tri6, tet10, hex27) when using DMView with a PETSCVIEWEREXODUSII viewer
- Change DMPlexComputeBdResidualSingle() and DMPlexComputeBdJacobianSingle() to take a form key
- Add DMPlexTopologyLoad(), DMPlexCoordinatesLoad(), and DMPlexLabelsLoad() for incremental loading of a DMPlex object from an HDF5 file
- Add DMPlexTopologyView(), DMPlexCoordinatesView(), and DMPlexLabelsView() for incremental saving of a DMPlex object to an HDF5 file
- Add DMPlexSectionView() saving a PetscSection in association with a DMPlex mesh
- Add DMPlexSectionLoad() loading a PetscSection in association with a DMPlex mesh
- Add DMPlexGlobalVectorView() and DMPlexLocalVectorView() saving global and local vectors in association with a data layout on a DMPlex mesh
- Add DMPlexGlobalVectorLoad() and DMPlexLocalVectorLoad() loading global and local vectors in association with a data layout on a DMPlex mesh
- Add DMPlexIsSimplex() to check the shape of the first cell
- Add DMPlexShape to describe prebuilt mesh domains
- Add DMPlexCreateCoordinateSpace() to make an FE space for the coordinates
- Add the automatic creation of a Plex from options; see DMSetFromOptions()
- The old options for DMPlexCreateBoxMesh() NO LONGER WORK. They have been changed to make the interface more uniform
- Replace DMPlexCreateSquareBoundary() and DMPlexCreateCubeBoundary() with DMPlexCreateBoxSurfaceMesh()
- Remove DMPlexCreateReferenceCellByType()
- The number of refinements is no longer an argument to DMPlexCreateHexCylinderMesh()
- Add DMSetLabel()
- Replace DMPlexComputeJacobianAction() with DMSNESComputeJacobianAction()
- Add DMSNESCreateJacobianMF()
- Change DMPlexComputeBdResidualSingle() to take PetscFormKey instead of explicit label/value/field arguments
- Add DMPlexInflateToGeomModel(), which pushes refined points out to a geometric boundary
- Separate EGADS and EGADSLite functionality; add DMPlexCreateEGADSLiteFromFile()
- Remove DMPlexReverseCell() and DMPlexOrientCell() in favor of DMPlexOrientPoint()
- Remove DMPlexCompareOrientations() in favor of DMPolytopeMatchOrientation()
- Add DMPlexGetCompressedClosure() and DMPlexRestoreCompressedClosure()
- Add DMPlexMetricCreate as a helper function for creating a (P1) Riemannian metric
- Add DMPlexMetricCreateUniform as a helper function for creating a uniform metric
- Add DMPlexMetricCreateIsotropic as a helper function for creating an isotropic metric
- Add DMPlexMetricEnforceSPD for enforcing that a metric is symmetric positive-definite
- Add DMPlexMetricNormalize to apply L-p metric normalization
- Add DMPlexMetricAverage to average an arbitrary number of metrics
- Add DMPlexMetricAverage2 to average two metrics
- Add DMPlexMetricAverage3 to average three metrics
- Add DMPlexMetricIntersection to intersect an arbitrary number of metrics
- Add DMPlexMetricIntersection2 to intersect two metrics
- Add DMPlexMetricIntersection3 to intersect three metrics
FE/FV:
- Change PetscFEIntegrateBdResidual() and PetscFEIntegrateBdJacobian() to take both PetscWeakForm and form key
- Add PetscFEGeomGetPoint() and PetscFEGeomGetCellPoint() to package up geometry handling
DMNetwork:
- Add DMNetworkCreateIS() and DMNetworkCreateLocalIS()
- Remove nv from DMNetworkAddSubnetwork()
DMStag:
DT:
- Add PetscWeakFormCopy(), PetscWeakFormClear(), PetscWeakFormRewriteKeys() and PetscWeakFormClearIndex()
- Add PetscDSDestroyBoundary() and PetscDSCopyExactSolutions()
- PetscDSGetContext() now takes void* as return argument
- Add PetscWeakFormReplaceLabel() to change labels after mesh modification
Fortran:
- Add support for PetscInitialize(filename,help,ierr) and PetscInitialize(ierr), in addition to the current PetscInitialize(filename,ierr)