./devel/hdf5, Hierarchical Data Format (new generation)



Branch: CURRENT, Version: 1.12.2, Package name: hdf5-1.12.2, Maintainer: pkgsrc-users

HDF5 is a data model, library, and file format for storing and
managing data. It supports an unlimited variety of datatypes, and
is designed for flexible and efficient I/O and for high volume and
complex data. HDF5 is portable and is extensible, allowing applications
to evolve in their use of HDF5. The HDF5 Technology suite includes
tools and applications for managing, manipulating, viewing, and
analyzing data in the HDF5 format.


Required to run:
[archivers/libaec]

Required to build:
[pkgtools/cwrappers]

Package options: szip

Master sites:

Filesize: 10248.305 KB



CVS history:


   2023-12-18 11:33:28 by Dr. Thomas Orgis | Files touched by this commit (1)
Log message:
devel/hdf5: adding unsafe-threads option

This is an option that certain scientific users request for building their code.
They explicitly need this unsupported configuration, which somehow seems to
work for them.
   2023-08-31 13:57:27 by Adam Ciarcinski | Files touched by this commit (4) | Package updated
Log message:
hdf5 hdf5-c++: updated to 1.12.2

https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.12/hdf5-1.12.2/src/hdf5-1.12.2-RELEASE.txt
   2022-11-06 18:00:57 by Adam Ciarcinski | Files touched by this commit (5) | Package updated
Log message:
hdf5 hdf5-c++: updated to 1.10.9

HDF5 version 1.10.9

New Features
============

    Configuration:
    -------------
    - Added a new option to the h5cc scripts produced by CMake.

      Added a -showconfig option to the h5cc scripts to cat the
      libhdf5.settings file to standard output.

      (ADB - 2022/03/11)
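
      For example, with a CMake-produced h5cc on the PATH, the build
      configuration can then be dumped with:

          h5cc -showconfig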

    - HDF5 memory allocation sanity checking is now off by default for
      Autotools debug builds

      HDF5 can be configured to perform sanity checking on internal memory
      allocations by adding heap canaries to these allocations. However,
      enabling this option can cause issues with external filter plugins
      when working with (reallocating/freeing/allocating and passing back)
      buffers.

      Previously, this option was off by default for all CMake build types,
      but only off by default for non-debug Autotools builds. Since debug
      is the default build mode for HDF5 when built from source with
      Autotools, this can result in surprising segfaults that don't occur
      when an application is built against a release version of HDF5.
      Therefore, this option is now off by default for all build types
      across both CMake and Autotools.

      (JTH - 2022/03/01)

    - Refactored the utils folder.

      Added a subfolder test and moved the file 'swmr_check_compat_vfd.c'
      from test into utils/test. Deleted the duplicate swmr_check_compat_vfd.c
      file in the hl/tools/h5watch folder. Also fixed the VFD check options.

      (ADB - 2021/10/18)

    - Changed autotools and CMake configurations to derive both
      compilation warnings-as-errors and warnings-only-warn configurations
      from the same files, 'config/*/*error*'.  Removed redundant files
      'config/*/*noerror*'.

      (DCY - 2021/09/29)

    Library:
    --------
    - Several improvements to parallel compression feature, including:

      * Improved support for collective I/O (for both writes and reads)

      * Significant reduction of memory usage for the feature as a whole

      * Reduction of copying of application data buffers passed to H5Dwrite

      * Addition of support for incremental file space allocation for filtered
        datasets created in parallel. Incremental file space allocation is the
        default for these types of datasets (early file space allocation is
        also still supported), while early file space allocation is still the
        default (and only supported allocation time) for unfiltered datasets
        created in parallel. Incremental file space allocation should help with
        parallel HDF5 applications that wish to use fill values on filtered
        datasets, but would typically avoid doing so since dataset creation in
        parallel would often take an excessive amount of time. Since these
        datasets previously used early file space allocation, HDF5 would
        allocate space for and write fill values to every chunk in the dataset
        at creation time, leading to noticeable overhead. Instead, with
        incremental file space allocation, allocation of file space for chunks
        and writing of fill values to those chunks will be delayed until each
        individual chunk is initially written to.

      * Addition of support for HDF5's "don't filter partial edge chunks" flag
        (https://portal.hdfgroup.org/display/HDF5/H5P_SET_CHUNK_OPTS)

      * Addition of proper support for HDF5 fill values with the feature

      * Addition of 'H5_HAVE_PARALLEL_FILTERED_WRITES' macro to H5pubconf.h
        so HDF5 applications can determine at compile-time whether the feature
        is available (see the sketch after this list)

      * Addition of simple examples (ph5_filtered_writes.c and
        ph5_filtered_writes_no_sel.c) under examples directory to demonstrate
        usage of the feature

      * Improved coverage of regression testing for the feature

      (JTH - 2022/2/23)
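
      A minimal compile-time sketch of how an application might test the new
      H5_HAVE_PARALLEL_FILTERED_WRITES macro (the surrounding application
      logic is hypothetical):

          #include "hdf5.h"   /* brings in H5pubconf.h and its feature macros */

          static int use_parallel_filtered_writes(void)
          {
          #ifdef H5_HAVE_PARALLEL_FILTERED_WRITES
              return 1;   /* safe to write filtered datasets collectively */
          #else
              return 0;   /* fall back to unfiltered datasets or independent I/O */
          #endif
          }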

    Fortran Library:
    ----------------
    - None

    C++ Library:
    ------------
    - None

    Java Library:
    -------------
    - None

    Tools:
    ------
    - h5repack added an optional verbose value for reporting R/W timing.

      In addition to adding timing capture around the read/write calls in
      h5repack, added help text to indicate how to show timing for read/write:
           -v N, --verbose=N       Verbose mode, print object information.
              N is an integer greater than 1; a value of 2 displays read/write timing
      (ADB - 2022/04/01)
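
      For example (file names are hypothetical), read/write timing can be
      requested with:

          h5repack --verbose=2 input.h5 output.h5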

    High-Level APIs:
    ----------------
    - None

    C Packet Table API:
    -------------------
    - None

    Internal header file:
    ---------------------
    - None

    Documentation:
    --------------
    - None

New platforms, languages and compilers tested
==================================================
    - macOS Apple M1 11.6 Darwin 20.6.0 arm64 with Apple clang version 12.0.5

    - Fedora35 Linux 5.16.14-200.fc35 with GCC 11.2.1 and clang version 13.0.0

Bug Fixes since HDF5-1.10.8 release
===================================
    Library
    -------
    - Fixed a metadata cache bug when resizing a pinned/protected cache entry

      When resizing a pinned/protected cache entry, the metadata
      cache code previously would wait until after resizing the
      entry to attempt to log the newly-dirtied entry. This would
      cause H5C_resize_entry to mark the entry as dirty and make
      H5AC_resize_entry think that it doesn't need to add the
      newly-dirtied entry to the dirty entries skiplist.

      Thus, a subsequent H5AC__log_moved_entry would think it
      needs to allocate a new entry for insertion into the dirty
      entry skip list, since the entry doesn't exist on that list.
      This causes an assertion failure, as the code to allocate a
      new entry assumes that the entry is not dirty.

      (JRM - 2022/02/28)

    - Issue 1436 identified a problem with the H5_VERS_RELEASE check in the
      H5check_version function.

      Investigating the original fix, 812, we discovered some inconsistencies
      with a new block added to check H5_VERS_RELEASE for incompatibilities.
      This new block was not using the new warning text dealing with the
      H5_VERS_RELEASE check and would cause the warning to be duplicated.

      By removing the H5_VERS_RELEASE argument in the first check for
      H5_VERS_MAJOR and H5_VERS_MINOR, the second check would only check
      the H5_VERS_RELEASE for incompatible release versions. This adheres
      to the statement that except for the develop branch, all release versions
      in a major.minor maintenance branch should be compatible. The prerequisite
      is that an application will not use any APIs not present in all release versions.

    - Unified handling of collective metadata reads to correctly fix old bugs

      Due to MPI-related issues occurring in HDF5 from mismanagement of the
      status of collective metadata reads, they were forced to be disabled
      during chunked dataset raw data I/O in the HDF5 1.10.5 release. This
      wouldn't generally have affected application performance because HDF5
      already disables collective metadata reads during chunk lookup, since
      it is generally unlikely that the same chunks will be read by all MPI
      ranks in the I/O operation. However, this was only a partial solution
      that wasn't granular enough.

      This change now unifies the handling of the file-global flag and the
      API context-level flag for collective metadata reads in order to
      simplify querying of the true status of collective metadata reads. Thus,
      collective metadata reads are once again enabled for chunked dataset
      raw data I/O, but manually controlled at places where some processing
      occurs on MPI rank 0 only and would cause issues when collective
      metadata reads are enabled.

      (JTH - 2021/11/16, HDFFV-10501/HDFFV-10562)

    - Fixed several potential MPI deadlocks in library failure conditions

      In the parallel library, there were several places where MPI rank 0
      could end up skipping past collective MPI operations when some failure
      occurs in rank 0-specific processing. This would lead to deadlocks
      where rank 0 completes an operation while other ranks wait in the
      collective operation. These places have been rewritten to have rank 0
      push an error and try to cleanup after the failure, then continue to
      participate in the collective operation to the best of its ability.

      (JTH - 2021/11/09)

    - Fixed an issue with collective metadata reads being permanently disabled
      after a dataset chunk lookup operation. This would usually cause a
      mismatched MPI_Bcast and MPI_ERR_TRUNCATE issue in the library for
      simple cases of H5Dcreate() -> H5Dwrite() -> H5Dcreate().

      (JTH - 2021/11/08, HDFFV-11090)

    Java Library
    ------------
    - None

    Configuration
    -------------
    - Reworked the corrected path searched by the CMake find_package command

      The install path for CMake find_package files had been changed to use
        "share/cmake"
      for all platforms. However, setting the HDF5_ROOT variable failed to locate
      the configuration files. The build variable HDF5_INSTALL_CMAKE_DIR is now
      set to the <INSTALL_DIR>/cmake folder. The location of the configuration
      files can still be specified by the "HDF5_DIR" variable.
   2022-07-22 16:50:33 by Dr. Thomas Orgis | Files touched by this commit (3)
Log message:
hdf5 and hdf5-c++: fix up the threads option

Explicitly disable the hl interface of hdf5 with threads, removing
files from PLIST, and also preventing support for the C++ interface.

That is also why we won't enable threads by default anytime soon.
It is a specific option needed for some users. The jury is still out
on whether the threadsafe option or the C++ API has fewer users.

Not incrementing PKGREVISION, as build of hdf5-c++ with threads option
was broken anyway, as was PLIST of hdf5. Default builds without threads
are unaffected.
   2022-03-26 22:52:36 by Tobias Nygren | Files touched by this commit (1)
Log message:
hdf5: fix build on SunOS
   2021-10-26 12:20:11 by Nia Alarie | Files touched by this commit (3016)
Log message:
archivers: Replace RMD160 checksums with BLAKE2s checksums

All checksums have been double-checked against existing RMD160 and
SHA512 hashes

Could not be committed due to merge conflict:
devel/py-traitlets/distinfo

The following distfiles were unfetchable (note: some may be only fetched
conditionally):

./devel/pvs/distinfo pvs-3.2-solaris.tgz
./devel/eclipse/distinfo eclipse-sourceBuild-srcIncluded-3.0.1.zip
   2021-10-07 15:44:44 by Nia Alarie | Files touched by this commit (3017)
Log message:
devel: Remove SHA1 hashes for distfiles
   2021-06-07 13:52:48 by Adam Ciarcinski | Files touched by this commit (16) | Package removed
Log message:
hdf5: updated to 1.10.7

HDF5 version 1.10.7 released on 2020-09-11
================================================================================

INTRODUCTION

This document describes the differences between this release and the previous
HDF5 release. It contains information on the platforms tested and known
problems in this release. For more details check the HISTORY*.txt files in the
HDF5 source.

Note that documentation in the links below will be updated at the time of each
final release.

Links to HDF5 documentation can be found on The HDF5 web page:

     https://portal.hdfgroup.org/display/HDF5/HDF5

The official HDF5 releases can be obtained from:

     https://www.hdfgroup.org/downloads/hdf5/

     HDF5 binaries provided are fully tested with ZLIB and the free
     Open Source SZIP successor Libaec (with BSD license).
     The official ZLIB and SZIP/Libaec pages are at:

        ZLIB: http://www.zlib.net/
            http://www.zlib.net/zlib_license.html
        SZIP/Libaec: https://gitlab.dkrz.de/k202009/libaec
            https://gitlab.dkrz.de/k202009/libaec/-/blob/master/Copyright.txt

Changes from Release to Release and New Features in the HDF5-1.10.x release series
can be found at:

     https://portal.hdfgroup.org/display/HDF5/HDF5+Application+Developer%27s+Guide

If you have any questions or comments, please send them to the HDF Help Desk:

     help@hdfgroup.org

CONTENTS

- New Features
- Support for new platforms and languages
- Bug Fixes since HDF5-1.10.6
- Supported Platforms
- Tested Configuration Features Summary
- More Tested Platforms
- Known Problems
- CMake vs. Autotools installations

New Features
============

    Configuration:
    -------------
    - Disable memory sanity checks in the Autotools in release branches

      The library can be configured to use internal memory sanity checking,
      which replaces C API calls like malloc(3) and free(3) with our own calls
      which add things like heap canaries. These canaries can cause problems
      when external filter plugins reallocate canary-marked buffers.

      For this reason, the default will be to not use the memory allocation
      sanity check feature in release branches (e.g., hdf5_1_10_7).
      Debug builds in development branches (e.g., develop, hdf5_1_10) will
      still use them by default.

      This change only affects Autotools debug builds. Non-debug autotools
      builds and all CMake builds do not enable this feature by default.

      (DER - 2020/08/19)

    - Add file locking configure and CMake options

      HDF5 1.10.0 introduced a file locking scheme, primarily to help
      enforce SWMR setup. Formerly, the only user-level control of the scheme
      was via the HDF5_USE_FILE_LOCKING environment variable.

      This change introduces configure-time options that control whether
      or not file locking will be used and whether or not the library
      ignores errors when locking has been disabled on the file system
      (useful on some HPC Lustre installations).

      In both the Autotools and CMake, the settings have the effect of changing
      the default property list settings (see the H5Pset/get_file_locking()
      entry, below).

      The yes/no/best-effort file locking configure setting has also been
      added to the libhdf5.settings file.

      Autotools:

        An --enable-file-locking=(yes|no|best-effort) option has been added.

        yes:          Use file locking.
        no:           Do not use file locking.
        best-effort:  Use file locking and ignore "disabled" errors.

      CMake:

        Two self-explanatory options have been added:

        HDF5_USE_FILE_LOCKING
        HDF5_IGNORE_DISABLED_FILE_LOCKS

        Setting both of these to ON is the equivalent to the Autotools'
        best-effort setting.

      NOTE:
      The precedence order of the various file locking control mechanisms is:

        1) HDF5_USE_FILE_LOCKING environment variable (highest)

        2) H5Pset_file_locking()

        3) configure/CMake options (which set the property list defaults)

        4) library defaults (currently best-effort)

      (DER - 2020/07/30, HDFFV-11092)
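
      For example (build directory layout is hypothetical), the new settings
      can be selected at configure time with:

          ./configure --enable-file-locking=best-effort

      or, with CMake:

          cmake -DHDF5_USE_FILE_LOCKING=ON -DHDF5_IGNORE_DISABLED_FILE_LOCKS=ON ..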

    - CMake option to link the generated Fortran MOD files into the include
      directory.

      The Fortran generation of MOD files by a Fortran compile can produce
      different binary files between SHARED and STATIC compiles with different
      compilers and/or different platforms. Note that it has been found that
      different versions of Fortran compilers will produce incompatible MOD
      files. Currently, CMake will locate these MOD files in subfolders of
      the include directory and add that path to the Fortran library target
      in the CMake config file, which can be used by the CMake find library
      process. For other build systems using the binary from a CMake install,
      a new CMake configuration can be used to copy the pre-chosen version
      of the Fortran MOD files into the install include directory.

      The default depends on the configuration of
      BUILD_STATIC_LIBS and BUILD_SHARED_LIBS:
            BUILD_STATIC_LIBS     BUILD_SHARED_LIBS     Default
            YES                   YES                   SHARED
            YES                   NO                    STATIC
            NO                    YES                   SHARED
            NO                    NO                    SHARED
      The defaults can be overridden by setting the config option
         HDF5_INSTALL_MOD_FORTRAN to one of NO, SHARED, or STATIC.

      (ADB - 2020/07/09, HDFFV-11116)

    - CMake option to use AEC (open source SZip) library instead of SZip

      The open source AEC library is a replacement library for SZip. In
      order to use it for hdf5, the libaec CMake source was changed to add
      "-fPIC" and exclude test files. A new option USE_LIBAEC is required
      to compensate for the different files produced by AEC build.

      Autotools does not build the compression libraries within hdf5 builds,
      but will use an installed libaec when configured as before with the
      option --with-libsz=<path to libaec directory>.

      (ADB - 2020/04/22, OESS-65)

    - CMake ConfigureChecks.cmake file now uses CHECK_STRUCT_HAS_MEMBER

      Some handcrafted tests in HDFTests.c have been removed and the CMake
      CHECK_STRUCT_HAS_MEMBER module has been used.

      (ADB - 2020/03/24, TRILAB-24)

    - Both build systems use same set of warnings flags

      GNU C, C++, and gfortran warnings flags were moved to files in a config
      sub-folder named gnu-warnings. Flags that are only available for a specific
      version of the compiler are in files named with that version.
      Clang C warnings flags were moved to files in a config sub-folder
      named clang-warnings.
      Intel C and Fortran warnings flags were moved to files in a config
      sub-folder named intel-warnings.

      There are flags in files named "error-xxx" with warnings that may
      be promoted to errors. Some source files may still need fixes.

      There are also pairs of files named "developer-xxx" and "no-developer-xxx"
      that are chosen by the CMake option HDF5_ENABLE_DEV_WARNINGS or the
      configure option --enable-developer-warnings.

      In addition, CMake no longer applies these warnings for examples.

      (ADB - 2020/03/24, TRILAB-192)

    - Update CMake minimum version to 3.12

      Updated CMake minimum version to 3.12 and added version checks
      for Windows features.

      (ADB - 2020/02/05, TRILABS-142)

    - Fixed CMake include properties for Fortran libraries

      Corrected the library properties for Fortran to use the
      correct path for the Fortran module files.

      (ADB - 2020/02/04, HDFFV-11012)

    - Added common warnings files for gnu and intel

      Added warnings files to use one common set of flags
      during configure for both autotools and CMake build
      systems. The initial implementation only affects a
      general set of flags for gnu and intel compilers.

      (ADB - 2020/01/17)

    - Added new options to CMake for control of testing

      Added CMake options (default ON):
          HDF5_TEST_SERIAL AND/OR HDF5_TEST_PARALLEL
          combined with:
            HDF5_TEST_TOOLS
            HDF5_TEST_EXAMPLES
            HDF5_TEST_SWMR
            HDF5_TEST_FORTRAN
            HDF5_TEST_CPP
            HDF5_TEST_JAVA

      (ADB - 2020/01/15, HDFFV-11001)

    - Added Clang sanitizers to CMake for analyzer support if compiler is clang.

      Added CMake code and files to execute the Clang sanitizers if
      HDF5_ENABLE_SANITIZERS is enabled and the USE_SANITIZER option
      is set to one of the following:
          Address
          Memory
          MemoryWithOrigins
          Undefined
          Thread
          Leak
          'Address;Undefined'

      (ADB - 2019/12/12, TRILAB-135)
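
      For example (build directory layout is hypothetical), an AddressSanitizer
      build can be requested with:

          cmake -DCMAKE_C_COMPILER=clang -DHDF5_ENABLE_SANITIZERS=ON \
                -DUSE_SANITIZER=Address ..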

    Library:
    --------
    - Add metadata cache optimization to reduce skip list usage

      On file flush or close, the metadata cache attempts to write out
      all dirty entries in increasing address order.  To do this, it needs
      an address sorted list of metadata entries.  Further, since flushing
      one metadata cache entry can dirty another, this list must support
      efficient insertion and deletion.

      The metadata cache uses a skip list of all dirty entries for this
      purpose.  Before this release, this skip list was maintained at all
      times.  However, since profiling indicates that this imposes a
      significant cost, we now construct and maintain the skip list only
      when needed.  Specifically, we enable the skip list and load it with
      a list of all dirty entries in the metadata cache just before a flush,
      and disable it after the flush.

      (JRM - 2020/08/17, HDFFV-11034)

    - Add BEST_EFFORT value to HDF5_USE_FILE_LOCKING environment variable

      This change adds a BEST_EFFORT to the TRUE/FALSE, 1/0 settings that
      were previously accepted. This option turns on file locking but
      ignores locking errors when the library detects that file locking
      has been disabled on a file system (useful on some HPC Lustre
      installations).

      The capitalization of BEST_EFFORT is mandatory.

      See the configure option discussion for HDFFV-11092 (above) for more
      information on the file locking feature and how it's controlled.

      (DER - 2020/07/30, HDFFV-11092)
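
      For example (the application name is hypothetical), best-effort locking
      can be requested at run time with:

          HDF5_USE_FILE_LOCKING=BEST_EFFORT ./my_application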

    - Add H5Pset/get_file_locking() API calls

      This change adds new API calls which can be used to set or get the
      file locking parameters. The single API call sets both the "use file
      locking" flag and the "ignore disabled file locking" flag.

      When opening a file multiple times without closing, the file MUST be
      opened with the same file locking settings. Opening a file with different
      file locking settings will fail (this is similar to the behavior of
      H5Pset_fclose_degree()).

      See the configure option discussion for HDFFV-11092 (above) for more
      information on the file locking feature and how it's controlled.

      (DER - 2020/07/30, HDFFV-11092)
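
      A minimal sketch of the new call (error checking omitted; the file name
      is hypothetical, and the two hbool_t arguments are assumed to be
      "use file locking" followed by "ignore disabled file locking"):

          #include "hdf5.h"

          /* open a file with locking enabled but "disabled" errors ignored */
          static hid_t open_with_locking(const char *name)
          {
              hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
              H5Pset_file_locking(fapl, 1, 1);  /* use locking, ignore disabled-lock errors */
              hid_t file = H5Fopen(name, H5F_ACC_RDWR, fapl);
              H5Pclose(fapl);
              return file;
          }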

    - Add Mirror VFD

      Use TCP/IP sockets to perform write-only (W/O) file I/O on a remote
      machine. Must be used in conjunction with the Splitter VFD.

      (JOS - 2020/03/13, TBD)

    - Add Splitter VFD

      Maintain separate R/W and W/O channels for "concurrent" file writes
      to two files using a single HDF5 file handle.

      (JOS - 2020/03/13, TBD)

    - Fixed an assertion failure in the parallel library when collectively
      filling chunks. As it is required that chunks be written in
      monotonically non-decreasing order of offset in the file, this assertion
      was being triggered when the list of chunk file space allocations being
      passed to the collective chunk filling routine was not sorted according
      to this particular requirement.

      The addition of a sort of the out of order chunks trades a bit of
      performance for the elimination of this assertion and of any complaints
      from MPI implementations about the file offsets used being out of order.

      (JTH - 2019/10/07)

    Fortran Library:
    ----------------
    - Add wrappers for H5Pset/get_file_locking() API calls

      h5pget_file_locking_f()
      h5pset_file_locking_f()

      See the configure option discussion for HDFFV-11092 (above) for more
      information on the file locking feature and how it's controlled.

      (DER - 2020/07/30, HDFFV-11092)

    - Added new Fortran parameters:

        H5F_LIBVER_ERROR_F
        H5F_LIBVER_NBOUNDS_F
        H5F_LIBVER_V18_F
        H5F_LIBVER_V110_F

    - Added new Fortran API: h5pget_libver_bounds_f

      (MSB - 2020/02/11, HDFFV-11018)

    C++ Library:
    ------------
    - Add wrappers for H5Pset/get_file_locking() API calls

      FileAccPropList::setFileLocking()
      FileAccPropList::getFileLocking()

      See the configure option discussion for HDFFV-11092 (above) for more
      information on the file locking feature and how it's controlled.

      (DER - 2020/07/30, HDFFV-11092)

    Java Library:
    ----------------
    - Add wrappers for H5Pset/get_file_locking() API calls

      H5Pset_file_locking()
      H5Pget_use_file_locking()
      H5Pget_ignore_disabled_file_locking()

      Unlike the C++ and Fortran wrappers, there are separate getters for the
      two file locking settings, each of which returns a boolean value.

      See the configure option discussion for HDFFV-11092 (above) for more
      information on the file locking feature and how it's controlled.

      (DER - 2020/07/30, HDFFV-11092)

    Tools:
    ------
    - h5repack added options to control how external links are handled.

      Currently h5repack preserves external links and cannot copy and merge
      data from the external files. Two options, merge and prune, were added to
      control how to merge data from an external link into the resulting file.
       --merge             Follow external soft link recursively and merge data.
       --prune             Do not follow external soft links and remove link.
       --merge --prune     Follow external link, merge data and remove dangling link.

      (ADB - 2020/08/05, HDFFV-9984)
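
      For example (file names are hypothetical), external link data can be
      merged into the output file and dangling links removed with:

          h5repack --merge --prune source.h5 repacked.h5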

    High-Level APIs:
    ---------------
    - None

    C Packet Table API
    ------------------
    - None

    Internal header file
    --------------------
    - None

    Documentation
    -------------
    - None

Support for new platforms, languages and compilers.
=======================================
    - None

Bug Fixes since HDF5-1.10.6 release
==================================

    Library
    -------
    - Fix bug and simplify collective metadata write operation when some ranks
        have no entries to contribute.  This fixes parallel regression test
        failures with IBM SpectrumScale MPI on the Summit system at ORNL.

      (QAK - 2020/09/02)

    - Avoid setting up complex MPI types with 0-length vectors, which some
        MPI implementations don't handle well.  (In particular, IBM
        SpectrumScale MPI on the Summit system at ORNL)

      (QAK - 2020/08/21)

    - Fixed use-of-uninitialized-value error

      Appropriate initialization of local structs was added to remove the
      use-of-uninitialized-value errors reported by MemorySanitizer.

      (BMR - 2020/8/13, HDFFV-11101)

    - Creation of dataset with optional filter

      When the combination of datatype, dataspace, etc. does not work for a
      filter and the filter is optional, the filter was supposed to be skipped,
      but it was not skipped and the creation failed.

      A fix was applied to allow the creation of a dataset in such a
      situation, as specified in the user documentation.

      (BMR - 2020/8/13, HDFFV-10933)

    - Explicitly declared dlopen to use RTLD_LOCAL

      dlopen documentation states that if neither RTLD_GLOBAL nor
      RTLD_LOCAL are specified, then the default behavior is unspecified.
      The default on Linux is usually RTLD_LOCAL, while macOS defaults
      to RTLD_GLOBAL.

      (ADB - 2020/08/12, HDFFV-11127)

    - Fixed issues CVE-2018-13870 and CVE-2018-13869

      When a buffer overflow occurred because a name length was corrupted
      and became very large, h5dump crashed on memory access violation.

      A check for reading past the end of the buffer was added to multiple
      locations to prevent the crashes and h5dump now simply fails with an
      error message when this error condition occurs.

      (BMR - 2020/7/31, HDFFV-11120 and HDFFV-11121)

    - H5Sset_extent_none() sets the dataspace class to H5S_NO_CLASS which
      causes asserts/errors when passed to other dataspace API calls.

      H5S_NO_CLASS is an internal class value that should not have been
      exposed via a public API call.

      In debug builds of the library, this can cause asserts to trip. In
      non-debug builds, it will produce normal library errors.

      The new library behavior is for H5Sset_extent_none() to convert
      the dataspace into one of type H5S_NULL, which is better handled
      by the library and easier for developers to reason about.

      (DER - 2020/07/27, HDFFV-11027)
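
      A minimal sketch of the new behavior (error checking omitted):

          #include "hdf5.h"

          static void null_extent_example(void)
          {
              hsize_t dims[1] = {10};
              hid_t   space   = H5Screate_simple(1, dims, NULL);

              H5Sset_extent_none(space);
              /* the dataspace class is now H5S_NULL rather than the internal
                 H5S_NO_CLASS, so other dataspace calls handle it normally */
              if (H5Sget_simple_extent_type(space) == H5S_NULL) {
                  /* ... */
              }
              H5Sclose(space);
          }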

    - Fixed the segmentation fault when reading attributes with multiple threads

      It was reported that reading attributes with a variable-length string
      datatype would crash with a segmentation fault, particularly when the number
      of threads is high (>16 threads).  The problem was due to the file pointer
      that was set in the variable-length string datatype for the attribute.
      That file pointer was already closed when the attribute was accessed.

      The problem was fixed by setting the file pointer to the currently opened
      file pointer when the attribute is accessed.  A similar fix was applied
      earlier when reading datasets with a variable-length string datatype.

      (VC - 2020/07/13, HDFFV-11080)

    -  Fixed issue CVE-2018-17438

      A division by zero was discovered in H5D__select_io() of H5Dselect.c.
        https://security-tracker.debian.org/tracker/CVE-2018-17438

      A check was added to protect against division by zero.  When such a
      situation occurs, the normal HDF5 error handling is invoked
      instead of a segmentation fault.

      (BMR, DER - 2020/07/09, HDFFV-10587)

    - Fixed CVE-2018-17435

      The tool h52gif produced a segfault when the size of an attribute message
      was corrupted and caused a buffer overflow.

      The problem was fixed by verifying the attribute message's size against the
      buffer size before accessing the buffer.  h52gif was also fixed to display
      the failure instead of silently exiting after the segfault was eliminated.

      (BMR - 2020/6/19, HDFFV-10591)

    - Don't allocate an empty (0-dimensioned) chunked dataset's chunk
      index, until the dataset's dimensions are increased.

      (QAK - 2020/05/07)

    Configuration
    -------------
    - Stopped addition of szip header and include directory path for
      incompatible libsz

      szlib.h is the same for both 32-bit and 64-bit szip, and the header file
      and its path were added to the HDF5 binary even though the configure
      check of a function in libsz later failed and szip compression was not
      enabled.  The header file and include path are now added only when the
      libsz function passes the configure check.

      (LRK - 2020/08/17, HDFFV-10830)

    - Added -fsanitize=address autotools configure option for Clang compiler

      Clang sanitizer options were also added for Clang compilers with CMake.

      (LRK, 2020/08/05, HDFFV-10836)

    - Updated testh5cc.sh.in for functions versioned in HDF5 1.10.

      testh5cc.sh previously tested that the correct version of a function
      versioned in HDF5 1.6 or 1.8 was compiled when one of
      H5_NO_DEPRECATED_SYMBOLS or H5_USE_16_API_DEFAULT was defined.  This
      test was extended for additional testing with H5_USE_18_API_DEFAULT.

      (LRK, 2020/06/22, HDFFV-11000)

    - Fixed CMake include properties for Fortran libraries

      Corrected the library properties for Fortran to use the
      correct path for the Fortran module files.

      (ADB - 2020/02/04, HDFFV-11012)

    Performance
    -------------
    - None

    Java Library:
    ----------------
    - None

    Fortran
    --------
    - Corrected INTERFACE INTENT(IN) to INTENT(OUT) for buf_size in h5fget_file_image_f.

      (MSB - 2020/2/18, HDFFV-11029)

    - Fixed configure issue when building HDF5 with NAG Fortran 7.0.

      HDF5 now accounts for the addition of half-precision floating-point
      in NAG 7.0 with a KIND=16.

      (MSB - 2020/02/28, HDFFV-11033)

    Tools
    -----
    - The tools library was updated by standardizing the error stack process.

      General sequence is:
          h5tools_setprogname(PROGRAMNAME);
          h5tools_setstatus(EXIT_SUCCESS);
          h5tools_init();
          ... process the command-line (check for error-stack enable) ...
          h5tools_error_report();
          ... (do work) ...
          h5diff_exit(ret);

      (ADB - 2020/07/20, HDFFV-11066)

    - h5diff fixed a command line parsing error.

      h5diff would ignore the argument to -d (delta) if it is smaller than DBL_EPSILON.
      The macro H5_DBL_ABS_EQUAL was removed and a direct value comparison was used.

      (ADB - 2020/07/20, HDFFV-10897)

    - h5diff added a command line option to ignore attributes.

      h5diff would ignore all objects with a supplied path if the exclude-path
      argument is used. Adding the exclude-attribute argument will exclude only
      attributes, with the supplied path, from comparison.

      (ADB - 2020/07/20, HDFFV-5935)

    - h5diff added another level to the verbose argument to print filenames.

      Added verbose level 3 that is level 2 plus the filenames. The levels are:
          0 : Identical to '-v' or '--verbose'
          1 : All level 0 information plus one-line attribute status summary
          2 : All level 1 information plus extended attribute status report
          3 : All level 2 information plus file names

      (ADB - 2020/07/20, HDFFV-10005)
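
      For example (file names are hypothetical), the new level can be selected
      with:

          h5diff --verbose=3 file1.h5 file2.h5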

    - h5repack was fixed to repack reference attributes properly.
      The line of code that checks whether a reference inside a compound
      datatype needs updating was misplaced outside the loop that carries out
      the check. As a consequence, the next attribute, which was not of the
      reference type, was repacked again as the reference type and caused the
      repacking to fail. The fix moves the corresponding line of code into the
      correct code block.

      (KY - 2020/02/10, HDFFV-11014)

    High-Level APIs:
    ------
    - The H5DSis_scale function was updated to return "not a dimension scale" (0)
      instead of failing (-1), when CLASS or DIMENSION_SCALE attributes are
      not written according to the Dimension Scales Specification.

      (EIP - 2020/08/12, HDFFV-10436)
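
      A minimal sketch of how a caller can distinguish the return values
      (dset_id is a hypothetical, already-open dataset identifier):

          #include "hdf5.h"
          #include "hdf5_hl.h"

          static void report_scale(hid_t dset_id)
          {
              htri_t is_scale = H5DSis_scale(dset_id);
              if (is_scale > 0) {
                  /* the dataset is a dimension scale */
              } else if (is_scale == 0) {
                  /* not a dimension scale (this case previously failed with -1) */
              } else {
                  /* an error occurred */
              }
          }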

    Fortran High-Level APIs:
    ------
    - None

    Documentation
    -------------
    - None

    F90 APIs
    --------
    - None

    C++ APIs
    --------
    - None

    Testing
    -------
    - Stopped java/test/junit.sh.in installing libs for testing under ${prefix}

      Lib files needed are now copied to a subdirectory in the java/test
      directory, and on Macs the loader path for libhdf5.xxxs.so is changed
      in the temporary copy of libhdf5_java.dylib.

      (LRK, 2020/7/2, HDFFV-11063)