Subject: CVS commit: pkgsrc/devel/hdf5
From: Adam Ciarcinski
Date: 2022-11-06 18:00:57
Message id: 20221106170057.2439CFA90@cvs.NetBSD.org
Log Message:
hdf5 hdf5-c++: updated to 1.10.9
HDF5 version 1.10.9
New Features
============
Configuration:
-------------
- Added a new option to the h5cc scripts produced by CMake.
The -showconfig option makes the h5cc scripts cat the
libhdf5-settings file to the standard output, as shown below.
(ADB - 2022/03/11)
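A usage illustration (invocation only; the output depends on how the
library was configured):

    h5cc -showconfig

which prints the libhdf5-settings summary for the installed library.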
- HDF5 memory allocation sanity checking is now off by default for
Autotools debug builds
HDF5 can be configured to perform sanity checking on internal memory
allocations by adding heap canaries to these allocations. However,
enabling this option can cause issues with external filter plugins
when working with (reallocating/freeing/allocating and passing back)
buffers.
Previously, this option was off by default for all CMake build types,
but only off by default for non-debug Autotools builds. Since debug
is the default build mode for HDF5 when built from source with
Autotools, this can result in surprising segfaults that don't occur
when an application is built against a release version of HDF5.
Therefore, this option is now off by default for all build types
across both CMake and Autotools; a sketch of re-enabling it follows.
(JTH - 2022/03/01)
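A sketch of turning the checks back on for debugging. The option names
below are assumptions based on the existing build options, not taken
from these notes:

    ./configure --enable-build-mode=debug --enable-memory-alloc-sanity-check

or, with CMake:

    cmake -DCMAKE_BUILD_TYPE=Debug -DHDF5_MEMORY_ALLOC_SANITY_CHECK=ON ..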
- Refactored the utils folder.
Added a 'test' subfolder and moved the swmr_check_compat_vfd.c file
from test into utils/test. Deleted the duplicate swmr_check_compat_vfd.c
file in the hl/tools/h5watch folder. Also fixed the VFD check options.
(ADB - 2021/10/18)
- Changed autotools and CMake configurations to derive both
compilation warnings-as-errors and warnings-only-warn configurations
from the same files, 'config/*/*error*'. Removed redundant files
'config/*/*noerror*'.
(DCY - 2021/09/29)
Library:
--------
- Several improvements to parallel compression feature, including:
* Improved support for collective I/O (for both writes and reads)
* Significant reduction of memory usage for the feature as a whole
* Reduction of copying of application data buffers passed to H5Dwrite
* Addition of support for incremental file space allocation for filtered
datasets created in parallel. Incremental file space allocation is the
default for these types of datasets (early file space allocation is
also still supported), while early file space allocation is still the
default (and only supported allocation time) for unfiltered datasets
created in parallel. Incremental file space allocation should help with
parallel HDF5 applications that wish to use fill values on filtered
datasets, but would typically avoid doing so since dataset creation in
parallel would often take an excessive amount of time. Since these
datasets previously used early file space allocation, HDF5 would
allocate space for and write fill values to every chunk in the dataset
at creation time, leading to noticeable overhead. Instead, with
incremental file space allocation, allocation of file space for chunks
and writing of fill values to those chunks will be delayed until each
individual chunk is initially written to.
* Addition of support for HDF5's "don't filter partial edge chunks" flag
(https://portal.hdfgroup.org/display/HDF5/H5P_SET_CHUNK_OPTS)
* Addition of proper support for HDF5 fill values with the feature
* Addition of 'H5_HAVE_PARALLEL_FILTERED_WRITES' macro to H5pubconf.h
so HDF5 applications can determine at compile time whether the feature
is available (used in the sketch after this list)
* Addition of simple examples (ph5_filtered_writes.c and
ph5_filtered_writes_no_sel.c) under examples directory to demonstrate
usage of the feature
* Improved coverage of regression testing for the feature
(JTH - 2022/02/23)
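A minimal sketch of a collective filtered write under these changes,
guarded by the new feature macro. The file and dataset names, chunk
size, and deflate level are illustrative, and the sketch assumes an
MPI-enabled HDF5 build (run with, e.g., mpiexec -n 4):

    #include <mpi.h>
    #include "hdf5.h"

    #define CHUNK 128

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
    #ifdef H5_HAVE_PARALLEL_FILTERED_WRITES
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* create the file with the MPI-IO driver */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        hid_t file = H5Fcreate("filtered.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        /* chunked, deflate-filtered dataset; incremental allocation is
           now the default for filtered parallel datasets and is set
           here explicitly only for illustration */
        hsize_t dims[1]  = {(hsize_t)nprocs * CHUNK};
        hsize_t chunk[1] = {CHUNK};
        hid_t space = H5Screate_simple(1, dims, NULL);
        hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 1, chunk);
        H5Pset_deflate(dcpl, 6);
        H5Pset_alloc_time(dcpl, H5D_ALLOC_TIME_INCR);
        hid_t dset = H5Dcreate2(file, "data", H5T_NATIVE_INT, space,
                                H5P_DEFAULT, dcpl, H5P_DEFAULT);

        /* each rank writes one chunk, collectively */
        hsize_t start[1] = {(hsize_t)rank * CHUNK}, count[1] = {CHUNK};
        hid_t fspace = H5Dget_space(dset);
        H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
        hid_t mspace = H5Screate_simple(1, count, NULL);
        hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
        H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

        int buf[CHUNK];
        for (int i = 0; i < CHUNK; i++)
            buf[i] = rank;
        H5Dwrite(dset, H5T_NATIVE_INT, mspace, fspace, dxpl, buf);

        H5Pclose(dxpl); H5Sclose(mspace); H5Sclose(fspace);
        H5Dclose(dset); H5Pclose(dcpl); H5Sclose(space);
        H5Fclose(file); H5Pclose(fapl);
    #endif
        MPI_Finalize();
        return 0;
    }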
Fortran Library:
----------------
- None
C++ Library:
------------
- None
Java Library:
-------------
- None
Tools:
------
- h5repack: added an optional verbose level for reporting read/write timing.
In addition to adding timing capture around the read/write calls in
h5repack, help text was added to indicate how to show timing for
read/write:
-v N, --verbose=N  Verbose mode, print object information.
N is an integer greater than 1; 2 displays read/write timing
(see the example below).
(ADB - 2022/04/01)
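For example (file names are illustrative):

    h5repack --verbose=2 input.h5 output.h5

prints per-object information together with the read/write timing.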
High-Level APIs:
----------------
- None
C Packet Table API:
-------------------
- None
Internal header file:
---------------------
- None
Documentation:
--------------
- None
New platforms, languages and compilers tested
=============================================
- macOS Apple M1 11.6 Darwin 20.6.0 arm64 with Apple clang version 12.0.5
- Fedora 35 Linux 5.16.14-200.fc35 with GCC 11.2.1 and clang version 13.0.0
Bug Fixes since HDF5-1.10.8 release
===================================
Library
-------
- Fixed a metadata cache bug when resizing a pinned/protected cache entry
When resizing a pinned/protected cache entry, the metadata
cache code previously would wait until after resizing the
entry to attempt to log the newly-dirtied entry. This would
cause H5C_resize_entry to mark the entry as dirty and make
H5AC_resize_entry think that it doesn't need to add the
newly-dirtied entry to the dirty entries skiplist.
Thus, a subsequent H5AC__log_moved_entry would think it
needs to allocate a new entry for insertion into the dirty
entry skip list, since the entry doesn't exist on that list.
This causes an assertion failure, as the code to allocate a
new entry assumes that the entry is not dirty.
(JRM - 2022/02/28)
- Issue 1436 identified a problem with the H5_VERS_RELEASE check in the
H5check_version function.
Investigating the original fix (812), we discovered some inconsistencies
in a new block added to check H5_VERS_RELEASE for incompatibilities.
This new block was not using the new warning text dealing with the
H5_VERS_RELEASE check and would cause the warning to be duplicated.
By removing the H5_VERS_RELEASE argument from the first check for
H5_VERS_MAJOR and H5_VERS_MINOR, the second check now only checks
H5_VERS_RELEASE for incompatible release versions. This adheres to
the statement that, except for the develop branch, all release versions
in a major.minor maintenance branch should be compatible. The
prerequisite is that an application will not use any APIs not present
in all release versions. A sketch of the application-side check follows.
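A minimal sketch (not from the release notes) of an application verifying
at startup that its headers match the runtime library; on a major/minor
mismatch H5check_version warns and aborts unless the
HDF5_DISABLE_VERSION_CHECK environment variable is set:

    #include <stdio.h>
    #include "hdf5.h"

    int main(void) {
        /* compare compile-time version macros with the linked library */
        if (H5check_version(H5_VERS_MAJOR, H5_VERS_MINOR, H5_VERS_RELEASE) < 0) {
            fprintf(stderr, "HDF5 header/library version mismatch\n");
            return 1;
        }
        printf("HDF5 %d.%d.%d OK\n",
               H5_VERS_MAJOR, H5_VERS_MINOR, H5_VERS_RELEASE);
        return 0;
    }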
- Unified handling of collective metadata reads to correctly fix old bugs
Due to MPI-related issues occurring in HDF5 from mismanagement of the
status of collective metadata reads, they were forced to be disabled
during chunked dataset raw data I/O in the HDF5 1.10.5 release. This
wouldn't generally have affected application performance because HDF5
already disables collective metadata reads during chunk lookup, since
it is generally unlikely that the same chunks will be read by all MPI
ranks in the I/O operation. However, this was only a partial solution
that wasn't granular enough.
This change now unifies the handling of the file-global flag and the
API context-level flag for collective metadata reads in order to
simplify querying of the true status of collective metadata reads. Thus,
collective metadata reads are once again enabled for chunked dataset
raw data I/O, but manually controlled at places where some processing
occurs on MPI rank 0 only and would cause issues when collective
metadata reads are enabled.
(JTH - 2021/11/16, HDFFV-10501/HDFFV-10562)
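A minimal sketch (file name illustrative) of opting in to collective
metadata operations on a file access property list, using the existing
H5Pset_all_coll_metadata_ops / H5Pset_coll_metadata_write APIs:

    #include <mpi.h>
    #include "hdf5.h"

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
        H5Pset_all_coll_metadata_ops(fapl, 1);  /* collective metadata reads  */
        H5Pset_coll_metadata_write(fapl, 1);    /* collective metadata writes */

        hid_t file = H5Fopen("example.h5", H5F_ACC_RDWR, fapl);
        if (file >= 0)
            H5Fclose(file);
        H5Pclose(fapl);

        MPI_Finalize();
        return 0;
    }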
- Fixed several potential MPI deadlocks in library failure conditions
In the parallel library, there were several places where MPI rank 0
could end up skipping past collective MPI operations when some failure
occurs in rank 0-specific processing. This would lead to deadlocks
where rank 0 completes an operation while other ranks wait in the
collective operation. These places have been rewritten to have rank 0
push an error and try to cleanup after the failure, then continue to
participate in the collective operation to the best of its ability.
(JTH - 2021/11/09)
- Fixed an issue with collective metadata reads being permanently disabled
after a dataset chunk lookup operation. This would usually cause a
mismatched MPI_Bcast and MPI_ERR_TRUNCATE issue in the library for
simple cases of H5Dcreate() -> H5Dwrite() -> H5Dcreate().
(JTH - 2021/11/08, HDFFV-11090)
Java Library
------------
- None
Configuration
-------------
- Corrected the path searched by the CMake find_package command.
The install path for the CMake find_package files had been changed to
use "share/cmake" for all platforms. However, setting the HDF5_ROOT
variable failed to locate the configuration files. The build variable
HDF5_INSTALL_CMAKE_DIR is now set to the <INSTALL_DIR>/cmake folder.
The location of the configuration files can still be specified by the
"HDF5_DIR" variable, as in the sketch below.
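A minimal CMakeLists.txt sketch of consuming the installed package
(the install prefix is illustrative):

    # point CMake at the directory containing the hdf5 config files
    set(HDF5_DIR "/usr/local/hdf5/cmake")  # or set HDF5_ROOT instead
    find_package(HDF5 1.10.9 CONFIG REQUIRED)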
Files: